Colagiorgio, P; Romano, F; Sardi, F; Moraschini, M; Sozzi, A; Bejor, M; Ricevuti, G; Buizza, A; Ramat, S
2014-01-01
The problem of correct fall risk assessment is becoming increasingly critical with the ageing of the population. Despite the available approaches that allow a quantitative analysis of the performance of the human movement control system, clinical assessment and diagnosis of fall risk still rely mostly on non-quantitative exams, such as clinical scales. This work documents our current effort to develop a novel method to assess balance control abilities through a system implementing an automatic evaluation of exercises drawn from balance assessment scales. Our aim is to overcome the classical limitations of these scales, i.e., limited granularity and inter-/intra-examiner reliability, and to obtain objective scores and more detailed information with which to predict fall risk. We used Microsoft Kinect to record subjects' movements while they performed challenging exercises drawn from clinical balance scales. We then computed a set of parameters quantifying the execution of the exercises and fed them to a supervised classifier to perform a classification based on the clinical score. We obtained good accuracy (~82%) and, especially, high sensitivity (~83%).
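As a rough illustration of the final step described above — feeding exercise-derived parameters to a supervised classifier — the following scikit-learn sketch uses a placeholder feature matrix and an SVM; the feature names and the classifier choice are assumptions, since the abstract does not specify them.

```python
# Hedged sketch: supervised scoring of balance exercises from movement
# parameters. Features and the SVM choice are illustrative assumptions,
# not the authors' exact pipeline; data are random placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Each row: parameters quantifying one exercise execution, e.g.
# [sway area, mean trunk velocity, exercise duration, ...] (hypothetical).
X = rng.normal(size=(120, 6))        # placeholder feature matrix
y = rng.integers(0, 2, size=120)     # clinical score (0 = pass, 1 = at risk)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
print(f"balanced accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```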
ERIC Educational Resources Information Center
Arendasy, Martin; Sommer, Markus
2007-01-01
This article deals with the investigation of the psychometric quality and construct validity of algebra word problems generated by means of a schema-based version of the automatic min-max approach. Based on a review of the research literature on algebra word problem solving and automatic item generation, this new approach is introduced as a…
Raineri, M; Traina, M; Rotolo, A; Candela, B; Lombardo, R M; Raineri, A A
1993-05-01
Thallium-201 scintigraphy is a widely used noninvasive procedure for the detection and prognostic assessment of patients with suspected or proven coronary artery disease. Thallium uptake can be evaluated by visual analysis or by quantitative interpretation. Quantitative scintigraphy enhances disease detection in individual coronary arteries and provides a more precise estimate of the amount of ischemic myocardium, distinguishing scar from hypoperfused tissue. Because of the large amount of data, the analysis, interpretation and comparison of thallium uptake can be very complex. We designed a computer-based system for the interpretation of quantitative thallium-201 scintigraphy uptake data, built on a database (DataEase 4.2, DataEase Italia). Our software has the following functions: data storage; calculation; conversion of numerical data into different definitions classifying myocardial perfusion; uptake data comparison; automatic conclusions; and comparison of different scintigrams for the same patient. It comprises four sections: numeric analysis, descriptive analysis, automatic conclusion, and clinical remarks. We encoded appropriate information into the system as "logical paths" that use "IF ... THEN" rules. The software executes these rules to analyze the myocardial regions in the 3 phases of scintigraphic analysis (stress, redistribution, re-injection) and in the 3 projections (LAO 45 degrees, LAT, ANT), applying our uptake cutoff and finally producing the automatic conclusions. For these reasons, our computer-based system can be considered a real "expert system".
Automatic and Objective Assessment of Alternating Tapping Performance in Parkinson's Disease
Memedi, Mevludin; Khan, Taha; Grenholm, Peter; Nyholm, Dag; Westin, Jerker
2013-01-01
This paper presents the development and evaluation of a method for enabling quantitative and automatic scoring of alternating tapping performance of patients with Parkinson's disease (PD). Ten healthy elderly subjects and 95 patients in different clinical stages of PD used a touch-pad handheld computer to perform alternate tapping tests in their home environments. First, a neurologist used a web-based system to visually assess impairments in four tapping dimensions ('speed', 'accuracy', 'fatigue' and 'arrhythmia') and a global tapping severity (GTS). Second, tapping signals were processed with time-series analysis and statistical methods to derive 24 quantitative parameters. Third, principal component analysis was used to reduce the dimensions of these parameters and to obtain scores for the four dimensions. Finally, a logistic regression classifier was trained using 10-fold stratified cross-validation to map the reduced parameters to the corresponding visually assessed GTS scores. Results showed that the computed scores correlated well with the visually assessed scores and were significantly different across Unified Parkinson's Disease Rating Scale scores of upper-limb motor performance. In addition, they had good internal consistency, good ability to discriminate between healthy elderly subjects and patients in different disease stages, good sensitivity to treatment interventions, and could reflect the natural disease progression over time. In conclusion, the automatic method can be useful for objectively assessing the tapping performance of PD patients and can be included in telemedicine tools for remote monitoring of tapping.
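A minimal sketch of the pipeline this abstract describes — PCA dimensionality reduction followed by logistic regression under 10-fold stratified cross-validation — using scikit-learn; the 24 parameters and GTS labels here are synthetic placeholders, not the study's data.

```python
# Hedged sketch: 24 tapping parameters -> PCA -> logistic regression,
# evaluated with 10-fold stratified cross-validation, on synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(105, 24))       # 24 quantitative tapping parameters
y = rng.integers(0, 4, size=105)     # visually assessed GTS (0-3)

model = make_pipeline(StandardScaler(), PCA(n_components=4),
                      LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
y_hat = cross_val_predict(model, X, y, cv=cv)
print("agreement with visual GTS:", np.mean(y_hat == y))
```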
Shen, Simon; Syal, Karan; Tao, Nongjian; Wang, Shaopeng
2015-12-01
We present a Single-Cell Motion Characterization System (SiCMoCS) to automatically extract bacterial cell morphological features from microscope images and use those features to automatically classify cell motion for rod-shaped motile bacterial cells. In some imaging-based studies, bacterial cells need to be attached to the surface for time-lapse observation of cellular processes such as cell membrane-protein interactions and membrane elasticity. These studies often generate large volumes of images. Extracting accurate bacterial cell morphology features from these images is critical for quantitative assessment. Using SiCMoCS, we demonstrated simultaneous and automated motion tracking and classification of hundreds of individual cells in an image sequence of several hundred frames. This is a significant improvement over traditional manual and semi-automated approaches to segmenting bacterial cells based on empirical thresholds, and a first attempt to automatically classify bacterial motion types for motile rod-shaped bacterial cells, which enables rapid and quantitative analysis of various types of bacterial motion.
Quantitative evaluation of skeletal muscle defects in second harmonic generation images.
Liu, Wenhua; Raben, Nina; Ralston, Evelyn
2013-02-01
Skeletal muscle pathologies cause irregularities in the normally periodic organization of the myofibrils. Objective grading of muscle morphology is necessary to assess muscle health, compare biopsies, and evaluate treatments and the evolution of disease. To facilitate such quantitation, we have developed fast, sensitive, automatic image analysis software. It detects major and minor morphological changes by combining texture features and Fourier transform (FT) techniques. We apply this tool to second harmonic generation (SHG) images of muscle fibers, which visualize the repeating myosin bands. Texture features are then calculated using a Haralick gray-level co-occurrence matrix in MATLAB. Two scores are retrieved from the texture correlation plot using FT and curve-fitting methods. The sensitivity of the technique was tested on SHG images of human adult and infant muscle biopsies and of mouse muscle samples. The scores are strongly correlated to muscle fiber condition. We named the software MARS (muscle assessment and rating scores). It is executed automatically and is highly sensitive even to subtle defects. We propose MARS as a powerful and unbiased tool to assess muscle health.
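The texture-correlation scoring idea can be sketched as follows: compute the Haralick correlation of an image at increasing pixel offsets, then score the periodicity of the resulting curve. The synthetic striated image, offset range, and FFT-based score below are illustrative assumptions, not the published MARS implementation (which also uses curve fitting).

```python
# Hedged sketch of texture-correlation scoring on a synthetic striated
# image (12-px period standing in for myosin bands in SHG data).
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # skimage >= 0.19

x = np.arange(256)
row = ((np.sin(2 * np.pi * x / 12) > 0) * 255).astype(np.uint8)
img = np.tile(row, (256, 1))                 # vertical stripes, 12-px period

offsets = np.arange(1, 40)
corr = np.array([graycoprops(
    graycomatrix(img, [int(d)], [0], levels=256, symmetric=True, normed=True),
    "correlation")[0, 0] for d in offsets])

# Score periodicity from the dominant FFT component of the correlation curve.
spectrum = np.abs(np.fft.rfft(corr - corr.mean()))
k = int(spectrum[1:].argmax()) + 1           # dominant non-DC frequency bin
print(f"dominant period of striation ~ {len(corr) / k:.1f} px")
```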
[Assessment of skin aging grading based on computer vision].
Li, Lingyu; Xue, Jinxia; He, Xiangqian; Zhang, Sheng; Fan, Chu
2017-06-01
Skin aging is the most intuitive and obvious sign of the human aging process. Qualitative and quantitative determination of skin aging is of particular importance for the evaluation of human aging and of anti-aging treatment effects. To address the subjectivity of conventional skin aging grading methods, a self-organizing map (SOM) network was used to explore an automatic method for skin aging grading. First, ventral forearm skin images were obtained with a portable digital microscope, and two texture parameters, i.e., the mean width of skin furrows and the number of intersections, were extracted by an image processing algorithm. The values of these texture parameters were then used as inputs to train the SOM network. The experimental results showed that the network achieved an overall accuracy of 80.8% compared with the aging grades assigned by human graders. The designed method is rapid and objective and can be used for quantitative analysis of skin images and automatic assessment of skin aging grade.
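A hedged sketch of the SOM step, using the third-party minisom package on synthetic furrow-width/intersection-count inputs; the grid size, training parameters, and node-to-grade mapping are assumptions, not the published configuration.

```python
# Hedged sketch: a self-organizing map clustering two skin-texture inputs
# (mean furrow width, number of intersections) into four aging grades.
# All parameters and the synthetic data are illustrative assumptions.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(2)
# Columns: [mean furrow width (px), number of skin-furrow intersections]
features = np.column_stack([rng.uniform(2, 12, 200), rng.uniform(5, 60, 200)])
features = (features - features.mean(0)) / features.std(0)  # z-score inputs

som = MiniSom(1, 4, input_len=2, sigma=0.8, learning_rate=0.5, random_seed=2)
som.train_random(features, num_iteration=2000)

# Each sample maps to one of 4 nodes; nodes would then be labeled as grades.
grades = np.array([som.winner(f)[1] for f in features])
print(np.bincount(grades, minlength=4))      # samples per SOM node
```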
NASA Astrophysics Data System (ADS)
Singla, Neeru; Srivastava, Vishal; Singh Mehta, Dalip
2018-02-01
We report the first fully automated detection of human skin burn injuries in vivo, with the goal of automatic surgical margin assessment based on optical coherence tomography (OCT) images. Our proposed automated procedure entails building a machine-learning-based classifier by extracting quantitative features from normal and burn tissue images recorded by OCT. In this study, 56 samples (28 normal, 28 burned) were imaged by OCT and eight features were extracted. A linear model classifier was trained using 34 samples and 22 samples were used to test the model. Sensitivity of 91.6% and specificity of 90% were obtained. Our results demonstrate the capability of a computer-aided technique for accurately and automatically identifying burn tissue resection margins during surgical treatment.
Hsu, Li-Yueh; Wragg, Andrew; Anderson, Stasia A; Balaban, Robert S; Boehm, Manfred; Arai, Andrew E
2008-02-01
This study presents computerized automatic image analysis for quantitatively evaluating dynamic contrast-enhanced MRI in an ischemic rat hindlimb model. MRI at 7 T was performed on animals in a blinded placebo-controlled experiment comparing multipotent adult progenitor cell-derived progenitor cell (MDPC)-treated, phosphate buffered saline (PBS)-injected, and sham-operated rats. Ischemic and non-ischemic limb regions of interest were automatically segmented from time-series images for detecting changes in perfusion and late enhancement. In correlation analysis of the time-signal intensity histograms, the MDPC-treated limbs correlated well with their corresponding non-ischemic limbs. However, the correlation coefficient of the PBS control group was significantly lower than that of the MDPC-treated and sham-operated groups. In semi-quantitative parametric maps of contrast enhancement, there was no significant difference in hypo-enhanced area between the MDPC and PBS groups at early perfusion-dependent time frames. However, the late-enhancement area was significantly larger in the PBS than the MDPC group. The results of this exploratory study show that MDPC-treated rats could be objectively distinguished from PBS controls. The differences were primarily determined by late contrast enhancement of PBS-treated limbs. These computerized methods appear promising for assessing perfusion and late enhancement in dynamic contrast-enhanced MRI.
Tao, Qian; Milles, Julien; Zeppenfeld, Katja; Lamb, Hildo J; Bax, Jeroen J; Reiber, Johan H C; van der Geest, Rob J
2010-08-01
Accurate assessment of the size and distribution of a myocardial infarction (MI) from late gadolinium enhancement (LGE) MRI is of significant prognostic value for postinfarction patients. In this paper, an automatic MI identification method combining both intensity and spatial information is presented in a clear framework of (i) initialization, (ii) false acceptance removal, and (iii) false rejection removal. The method was validated on LGE MR images of 20 chronic postinfarction patients, using manually traced MI contours from two independent observers as reference. Good agreement was observed between automatic and manual MI identification. Validation results showed that the average Dice indices, which describe the percentage of overlap between two regions, were 0.83 +/- 0.07 and 0.79 +/- 0.08 between the automatic identification and the manual tracings from observer 1 and observer 2, and the errors in estimated infarct percentage were 0.0 +/- 1.9% and 3.8 +/- 4.7% compared with observer 1 and observer 2. The difference between the automatic method and manual tracing is on the order of the interobserver variation. In conclusion, the developed automatic method is accurate and robust in MI delineation, providing an objective tool for quantitative assessment of MI in LGE MR imaging.
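The Dice index used for validation is straightforward to compute from two binary masks; a minimal sketch follows (toy masks; the myocardium mask for the infarct-percentage figure is hypothetical).

```python
# Hedged sketch: Dice overlap between automatic and manual infarct masks,
# plus infarct percentage of the myocardium. Masks here are toy examples.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap 2|A & B| / (|A| + |B|) between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((64, 64), bool); auto[20:40, 20:40] = True
manual = np.zeros((64, 64), bool); manual[22:42, 22:40] = True
myocardium = np.ones((64, 64), bool)         # hypothetical myocardium mask

print(f"Dice = {dice(auto, manual):.2f}")
print(f"infarct % = {100 * auto.sum() / myocardium.sum():.1f}")
```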
Zheng, Rencheng; Yamabe, Shigeyuki; Nakano, Kimihiko; Suda, Yoshihiro
2015-01-01
With the large-scale development of intelligent vehicles, insight into human-machine interaction has become a critical topic. Biosignal analysis can provide a deeper understanding of driver behaviors and may inform the rational, practical use of automatic driving technology. This study therefore concentrates on biosignal analysis to quantitatively evaluate the mental stress of drivers during automatic driving of trucks, with vehicles set a close gap distance apart to reduce air resistance and save energy. Using two wearable sensor systems, palmar perspiration and masseter electromyography were measured continuously, and a biosignal processing method was proposed to assess mental stress levels. In a driving simulator experiment, ten participants completed automatic driving with 4, 8, and 12 m gap distances from the preceding vehicle, and manual driving with about 25 m gap distance as a reference. Mental stress increased significantly as the gap distance decreased, and an abrupt increase in drivers' mental stress was also observed accompanying a sudden change of the gap distance during automatic driving, which corresponded to significantly higher ride discomfort according to subjective reports.
Torres, Viviana; Cerda, Mauricio; Knaup, Petra; Löpprich, Martin
2016-01-01
An important part of the electronic information available in a Hospital Information System (HIS) has the potential to be automatically exported to Electronic Data Capture (EDC) platforms to improve clinical research. This automation has the advantage of reducing manual data transcription, a time-consuming and error-prone process. However, quantitative evaluations of the process of exporting data from a HIS to an EDC system, in particular in comparison with manual transcription, have not been reported extensively. In this work, an assessment of the quality of an automatic export process, focused on laboratory data from a HIS, is presented. Quality of the laboratory data was assessed for two types of processes: (1) a manual process of data transcription, and (2) an automatic process of data transference, implemented as an Extract, Transform and Load (ETL) process. A comparison was then carried out between the manual and automatic data collection methods. The criteria used to measure data quality were correctness and completeness. The manual process had a general error rate of 2.6% to 7.1%, with the lowest error rate obtained when data fields without a clear definition were removed from the analysis (p < 10^-3). For the automatic process, the general error rate was 1.9% to 12.1%, with the lowest error rate obtained when excluding information missing in the HIS but transcribed to the EDC from other physical sources. The automatic ETL process can be used to collect laboratory data for clinical research, provided that the data in the HIS, as well as physical documentation not included in the HIS, are identified beforehand and follow a standardized data collection protocol.
Myocardial perfusion assessment with contrast echocardiography
NASA Astrophysics Data System (ADS)
Desco, Manuel; Ledesma-Carbayo, Maria J.; Santos, Andres; Garcia-Fernandez, Miguel A.; Marcos-Alberca, Pedro; Malpica, Norberto; Antoranz, Jose C.; Garcia-Barreno, Pedro
2001-05-01
Assessment of intramyocardial perfusion by contrast echocardiography is a promising new technique that allows quantitative parameters to be obtained for the assessment of ischemic disease. In this work, a new methodology and a software prototype developed for this task are presented. It has been validated with Coherent Contrast Imaging (CCI) images acquired with an Acuson Sequoia scanner. Contrast (Optison microbubbles) is injected continuously during the scan. 150 images are acquired using low-mechanical-index ultrasound pulses. A burst of high-mechanical-index pulses is used to destroy the bubbles, thus allowing the contrast wash-in to be detected. The study is performed under two conditions: rest and pharmacologically induced stress. The software allows the user to visualize the study (cine) and to select several ROIs within the heart wall. The position of these ROIs along the cardiac cycle is automatically corrected on the basis of the gradient field, and it can also be corrected manually in case the automatic procedure fails. Time curves are analyzed according to a parametric model that incorporates both the contrast inflow rate and cyclic variations. Preliminary clinical results in 80 patients have allowed us to identify normal and pathological patterns and to establish the correlation of the quantitative parameters with the actual diagnosis.
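The wash-in analysis can be illustrated with the classic destruction-replenishment model S(t) = A·(1 − exp(−βt)); note the authors' parametric model additionally includes cyclic (cardiac) variations, which this minimal scipy sketch omits.

```python
# Hedged sketch: fitting a contrast wash-in curve after bubble destruction
# with S(t) = A * (1 - exp(-beta * t)). Synthetic data; the published model
# also contains cyclic variation terms omitted here.
import numpy as np
from scipy.optimize import curve_fit

def washin(t, A, beta):
    return A * (1.0 - np.exp(-beta * t))

t = np.linspace(0, 10, 150)                  # s, 150 frames
rng = np.random.default_rng(3)
signal = washin(t, A=40.0, beta=0.8) + rng.normal(0, 1.5, t.size)

(A, beta), _ = curve_fit(washin, t, signal, p0=(30.0, 0.5))
print(f"plateau A = {A:.1f} (~ blood volume), rate beta = {beta:.2f} 1/s")
print(f"A*beta = {A * beta:.1f} (~ perfusion)")
```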
Automatic anterior chamber angle assessment for HD-OCT images.
Tian, Jing; Marziliano, Pina; Baskaran, Mani; Wong, Hong-Tym; Aung, Tin
2011-11-01
Angle-closure glaucoma is a major blinding eye disease and could be detected by measuring the anterior chamber angle in the human eyes. High-definition OCT (Cirrus HD-OCT) is an emerging noninvasive, high-speed, and high-resolution imaging modality for the anterior segment of the eye. Here, we propose a novel algorithm which automatically detects a new landmark, Schwalbe's line, and measures the anterior chamber angle in the HD-OCT images. The distortion caused by refraction is corrected by dewarping the HD-OCT images, and three biometric measurements are defined to quantitatively assess the anterior chamber angle. The proposed algorithm was tested on 40 HD-OCT images of the eye and provided accurate measurements in about 1 second.
Ostovaneh, Mohammad R; Vavere, Andrea L; Mehra, Vishal C; Kofoed, Klaus F; Matheson, Matthew B; Arbab-Zadeh, Armin; Fujisawa, Yasuko; Schuijf, Joanne D; Rochitte, Carlos E; Scholte, Arthur J; Kitagawa, Kakuya; Dewey, Marc; Cox, Christopher; DiCarli, Marcelo F; George, Richard T; Lima, Joao A C
To determine the diagnostic accuracy of semi-automatic quantitative metrics compared to expert reading for interpretation of computed tomography perfusion (CTP) imaging. The CORE320 multicenter diagnostic accuracy clinical study enrolled patients between 45 and 85 years of age who were clinically referred for invasive coronary angiography (ICA). Computed tomography angiography (CTA), CTP, single photon emission computed tomography (SPECT), and ICA images were interpreted manually in blinded core laboratories by two experienced readers. Additionally, eight quantitative CTP metrics as continuous values were computed semi-automatically from myocardial and blood attenuation and were combined using logistic regression to derive a final quantitative CTP metric score. For the reference standard, hemodynamically significant coronary artery disease (CAD) was defined as a quantitative ICA stenosis of 50% or greater and a corresponding perfusion defect by SPECT. Diagnostic accuracy was determined by the area under the receiver operating characteristic curve (AUC). Of the 377 included patients, 66% were male, the median age was 62 (IQR: 56, 68) years, and 27% had prior myocardial infarction. In patient-based analysis, the AUC (95% CI) for combined CTA-CTP expert reading and combined CTA-CTP semi-automatic quantitative metrics was 0.87 (0.84-0.91) and 0.86 (0.83-0.90), respectively. In vessel-based analyses the AUCs were 0.85 (0.82-0.88) and 0.84 (0.81-0.87), respectively. No significant difference in AUC was found between combined CTA-CTP expert reading and CTA-CTP semi-automatic quantitative metrics in patient-based or vessel-based analyses (p > 0.05 for all). Combined CTA-CTP semi-automatic quantitative metrics are as accurate as CTA-CTP expert reading for detecting hemodynamically significant CAD.
de Souza Baptista, Roberto; Bo, Antonio P L; Hayashibe, Mitsuhiro
2017-06-01
Performance assessment of human movement is critical in diagnosis and motor-control rehabilitation. Recent developments in portable sensor technology enable clinicians to measure spatiotemporal aspects of movement to aid in neurological assessment. However, the extraction of quantitative information from such measurements is usually done manually through visual inspection. This paper presents a novel framework for automatic human movement assessment that executes segmentation and motor performance parameter extraction in time series of measurements from a sequence of human movements. We use the elements of a Switching Linear Dynamic System model as building blocks to translate formal definitions and procedures from human movement analysis. Our approach provides a method for users with no expertise in signal processing to create models for movements using a labeled dataset and later use them for automatic assessment. We validated our framework in preliminary tests involving six healthy adult subjects, who executed common movements from functional tests and rehabilitation exercise sessions such as sit-to-stand and lateral elevation of the arms, and five elderly subjects, two of whom had limited mobility, who executed the sit-to-stand movement. The proposed method worked on random motion sequences for the dual purpose of movement segmentation (accuracy of 72%-100%) and motor performance assessment (mean error of 0%-12%).
Klapsing, Philipp; Herrmann, Peter; Quintel, Michael; Moerer, Onnen
2017-12-01
Quantitative lung computed tomographic (CT) analysis yields objective data regarding lung aeration but is currently not used in clinical routine, primarily because of the labor-intensive process of manual CT segmentation. Automatic lung segmentation could help to shorten processing times significantly. In this study, we assessed the bias and precision of lung CT analysis using automatic segmentation compared with manual segmentation. In this monocentric clinical study, 10 mechanically ventilated patients with mild to moderate acute respiratory distress syndrome were included who had received lung CT scans at 5- and 45-mbar airway pressure during a prior study. Lung segmentations were performed both automatically, using a computerized algorithm, and manually. Automatic segmentation yielded similar lung volumes compared with manual segmentation, with clinically minor differences at both 5 and 45 mbar. At 5 mbar, results were as follows: overdistended lung 49.58 mL (manual, SD 77.37 mL) vs 50.41 mL (automatic, SD 77.3 mL), P = .028; normally aerated lung 2142.17 mL (manual, SD 1131.48 mL) vs 2156.68 mL (automatic, SD 1134.53 mL), P = .1038; and poorly aerated lung 631.68 mL (manual, SD 196.76 mL) vs 646.32 mL (automatic, SD 169.63 mL), P = .3794. At 45 mbar, values were as follows: overdistended lung 612.85 mL (manual, SD 449.55 mL) vs 615.49 mL (automatic, SD 451.03 mL), P = .078; normally aerated lung 3890.12 mL (manual, SD 1134.14 mL) vs 3907.65 mL (automatic, SD 1133.62 mL), P = .027; and poorly aerated lung 413.35 mL (manual, SD 57.66 mL) vs 469.58 mL (automatic, SD 70.14 mL), P = .007. Bland-Altman analyses revealed the following mean biases and limits of agreement at 5 mbar for automatic vs manual segmentation: overdistended lung +0.848 mL (±2.062 mL), normally aerated +14.51 mL (±49.71 mL), and poorly aerated +14.64 mL (±98.16 mL). At 45 mbar, results were as follows: overdistended +2.639 mL (±8.231 mL), normally aerated 17.53 mL (±41.41 mL), and poorly aerated 56.23 mL (±100.67 mL). Automatic single CT image and whole-lung segmentation were faster than manual segmentation (0.17 vs 125.35 seconds [P < .0001] and 10.46 vs 7739.45 seconds [P < .0001]). Automatic lung CT segmentation allows fast analysis of aerated lung regions. A reduction of processing times by more than 99% allows the use of quantitative CT at the bedside.
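A sketch of the aeration-compartment quantification underlying such volumes: classify lung-mask voxels by Hounsfield-unit ranges and convert counts to millilitres. The HU cut-offs and voxel volume below are commonly used values assumed for illustration; the abstract does not state the study's exact thresholds.

```python
# Hedged sketch: classify lung-mask voxels into aeration compartments by
# Hounsfield-unit (HU) ranges and convert voxel counts to volumes.
import numpy as np

rng = np.random.default_rng(4)
hu = rng.normal(-650, 250, size=2_000_000)  # placeholder voxels in lung mask
voxel_ml = 0.002                            # ~0.7 x 0.7 x 4 mm voxel, in mL

compartments = {                            # assumed, commonly used cut-offs
    "overdistended":    (-1000, -900),
    "normally aerated": (-900, -500),
    "poorly aerated":   (-500, -100),
    "non-aerated":      (-100, 100),
}
for name, (lo, hi) in compartments.items():
    n = np.count_nonzero((hu >= lo) & (hu < hi))
    print(f"{name:17s} {n * voxel_ml:8.1f} mL")
```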
Twelve example local data support files are automatically downloaded when the SDMProjectBuilder is installed on a computer. They allow the user to modify values to parameters that impact the release, migration, fate, and transport of microbes within a watershed, and control delin...
Quantitative topographic differentiation of the neonatal EEG.
Paul, Karel; Krajca, Vladimír; Roth, Zdenek; Melichar, Jan; Petránek, Svojmil
2006-09-01
To test the discriminatory topographic potential of a new method of automatic EEG analysis in neonates. A quantitative description of the neonatal EEG can contribute to the objective assessment of the functional state of the brain and may improve the precision of diagnosing cerebral dysfunctions manifested by 'disorganization', 'dysrhythmia' or 'dysmaturity'. Twenty-one healthy, full-term newborns were examined polygraphically during sleep (EEG with 8 referential derivations, respiration, ECG, EOG, EMG). From each EEG record, two 5-min samples (one from the middle of quiet sleep, the other from the middle of active sleep) were subjected to automatic analysis and described by 13 variables: spectral features and features describing the shape and variability of the signal. The data from individual infants were averaged and the number of variables was reduced by factor analysis. All factors identified by factor analysis were statistically significantly influenced by the location of the derivation. A large number of statistically significant differences were also established when comparing the effects of individual derivations on each of the 13 measured variables. Both spectral features and features describing the shape and variability of the signal largely account for the topographic differentiation of the neonatal EEG. The presented method of automatic EEG analysis is capable of assessing the topographic characteristics of the neonatal EEG; it is adequately sensitive and describes the neonatal electroencephalogram with sufficient precision. The discriminatory capability of the method is promising for application in clinical practice.
Wengert, Georg Johannes; Helbich, Thomas H; Vogl, Wolf-Dieter; Baltzer, Pascal; Langs, Georg; Weber, Michael; Bogner, Wolfgang; Gruber, Stephan; Trattnig, Siegfried; Pinker, Katja
2015-02-01
The purposes of this study were to introduce and assess an automated user-independent quantitative volumetric (AUQV) breast density (BD) measurement system based on magnetic resonance imaging (MRI) using the Dixon technique, and to compare it with qualitative and quantitative mammographic (MG) BD measurements. Forty-three women with normal mammogram results (Breast Imaging Reporting and Data System 1) were included in this institutional review board-approved prospective study. All participants underwent BD assessment with MRI using a Dixon sequence (repetition time/echo time 1/echo time 2, 6 milliseconds/2.45 milliseconds/2.67 milliseconds; 1-mm isotropic; 3 minutes 38 seconds). To test reproducibility, a second MRI was performed after patient repositioning. The AUQV magnetic resonance (MR) BD measurement system automatically calculated percentage (%) BD. The qualitative BD assessment was performed using the American College of Radiology Breast Imaging Reporting and Data System BD categories. Quantitative BD was estimated semiautomatically using the thresholding technique Cumulus4. Appropriate statistical tests were used to assess the agreement between the AUQV MR measurements and to compare them with the qualitative and quantitative MG BD estimations. The AUQV MR BD measurements were successfully performed in all 43 women. There was nearly perfect agreement of AUQV MR BD measurements between the 2 MR examinations for % BD (P < 0.001; intraclass correlation coefficient, 0.998), with no significant differences (P = 0.384). The AUQV MR BD measurements were significantly lower than the quantitative and qualitative MG BD assessments (P < 0.001). The AUQV MR BD measurement system allows a fully automated, user-independent, robust, reproducible, as well as radiation- and compression-free volumetric quantitative BD assessment across different levels of BD. The AUQV MR BD measurements were significantly lower than the currently used qualitative and quantitative MG-based approaches, implying that the current assessment might overestimate breast density with MG.
GUIDOS: tools for the assessment of pattern, connectivity, and fragmentation
NASA Astrophysics Data System (ADS)
Vogt, Peter
2013-04-01
Pattern, connectivity, and fragmentation can be considered pillars of a quantitative analysis of digital landscape images. The free software toolbox GUIDOS (http://forest.jrc.ec.europa.eu/download/software/guidos) includes a variety of dedicated methodologies for the quantitative assessment of these features. Among others, Morphological Spatial Pattern Analysis (MSPA) is used for an intuitive description of image pattern structures and the automatic detection of connectivity pathways. GUIDOS includes tools for the detection and quantitative assessment of key nodes and links, as well as to define connectedness in raster images and to set up appropriate input files for an enhanced network analysis using Conefor Sensinode. Finally, fragmentation is usually defined from a species point of view, but a generic and quantifiable indicator is needed to measure fragmentation and its changes. Some preliminary results for different conceptual approaches will be shown for a sample dataset. Complemented by pre- and post-processing routines and a complete GIS environment, the portable GUIDOS Toolbox may facilitate a holistic assessment in risk assessment studies, landscape planning, and conservation/restoration policies. Alternatively, individual analysis components may contribute to or enhance studies conducted with other software packages in landscape ecology.
Automatic Gleason grading of prostate cancer using quantitative phase imaging and machine learning
NASA Astrophysics Data System (ADS)
Nguyen, Tan H.; Sridharan, Shamira; Macias, Virgilia; Kajdacsy-Balla, Andre; Melamed, Jonathan; Do, Minh N.; Popescu, Gabriel
2017-03-01
We present an approach for the automatic diagnosis of tissue biopsies. Our methodology consists of a quantitative phase imaging tissue scanner and machine learning algorithms to process these data. We illustrate the performance by automatic Gleason grading of prostate specimens. The imaging system operates on the principle of interferometry and, as a result, reports on the nanoscale architecture of the unlabeled specimen. We use these data to train a random forest classifier to learn the textural behaviors of prostate samples and classify each pixel in the image into different classes. Automatic diagnosis results were computed from the segmented regions. By combining morphological features with quantitative information from the glands and stroma, logistic regression was used to discriminate regions with Gleason grade 3 versus grade 4 cancer in prostatectomy tissue. The overall accuracy of this classification, derived from a receiver operating characteristic curve, was 82%, which is in the range of human error when interobserver variability is considered. We anticipate that our approach will provide a clinically objective and quantitative metric for Gleason grading, allowing results to be corroborated across instruments and laboratories and feeding the computer algorithms for improved accuracy.
Security Events and Vulnerability Data for Cybersecurity Risk Estimation.
Allodi, Luca; Massacci, Fabio
2017-08-01
Current industry standards for estimating cybersecurity risk are based on qualitative risk matrices as opposed to quantitative risk estimates. In contrast, risk assessment in most other industry sectors aims at deriving quantitative risk estimations (e.g., Basel II in finance). This article presents a model and methodology to leverage the large amount of data available from the IT infrastructure of an organization's security operation center to quantitatively estimate the probability of attack. Our methodology specifically addresses untargeted attacks delivered by automatic tools, which make up the vast majority of attacks in the wild against users and organizations. We consider two-stage attacks whereby the attacker first breaches an Internet-facing system and then escalates the attack to internal systems by exploiting local vulnerabilities in the target. Our methodology factors in the power of the attacker as the number of "weaponized" vulnerabilities he or she can exploit, and can be adjusted to match the risk appetite of the organization. We illustrate our methodology using data from a large financial institution, and discuss the significant mismatch between traditional qualitative risk assessments and our quantitative approach.
Stewart, Ethan L; Hagerty, Christina H; Mikaberidze, Alexey; Mundt, Christopher C; Zhong, Ziming; McDonald, Bruce A
2016-07-01
Zymoseptoria tritici causes Septoria tritici blotch (STB) on wheat. An improved method of quantifying STB symptoms was developed based on automated analysis of diseased leaf images made using a flatbed scanner. Naturally infected leaves (n = 949) sampled from fungicide-treated field plots comprising 39 wheat cultivars grown in Switzerland and 9 recombinant inbred lines (RIL) grown in Oregon were included in these analyses. Measures of quantitative resistance were percent leaf area covered by lesions, pycnidia size and gray value, and pycnidia density per leaf and lesion. These measures were obtained automatically with a batch-processing macro utilizing the image-processing software ImageJ. All phenotypes in both locations showed a continuous distribution, as expected for a quantitative trait. The trait distributions at both sites were largely overlapping even though the field and host environments were quite different. Cultivars and RILs could be assigned to two or more statistically different groups for each measured phenotype. Traditional visual assessments of field resistance were highly correlated with quantitative resistance measures based on image analysis for the Oregon RILs. These results show that automated image analysis provides a promising tool for assessing quantitative resistance to Z. tritici under field conditions.
Application of automatic image analysis in wood science
Charles W. McMillin
1982-01-01
In this paper I describe an image analysis system and illustrate with examples the application of automatic quantitative measurement to wood science. Automatic image analysis, a powerful and relatively new technology, uses optical, video, electronic, and computer components to rapidly derive information from images with minimal operator interaction. Such instruments...
Montanini, R; Freni, F; Rossi, G L
2012-09-01
This paper reports one of the first experimental results on the application of ultrasound-activated lock-in vibrothermography for quantitative assessment of buried flaws in complex cast parts. The use of amplitude-modulated ultrasonic heat generation allowed a selective response of defective areas within the part, as the defect itself is turned into a local thermal wave emitter. Quantitative evaluation of hidden damage was accomplished by estimating independently both the area and the depth extension of the buried flaws, while x-ray 3D computed tomography was used as the reference for sizing accuracy assessment. To retrieve the flaw area, a simple yet effective histogram-based phase image segmentation algorithm with automatic pixel classification was developed. A clear correlation was found between the thermal (phase) signature measured by the infrared camera on the target surface and the actual mean cross-section area of the flaw. Due to the very fast cycle time (<30 s/part), the method could potentially be applied for 100% quality control of cast components.
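The histogram-based phase-image segmentation step might look like the following sketch, with Otsu's threshold standing in for the paper's histogram algorithm and a synthetic phase image replacing real vibrothermography data.

```python
# Hedged sketch: histogram-based segmentation of a phase image with
# automatic pixel classification. Otsu's method is a generic stand-in for
# the published algorithm; the phase image and pixel pitch are synthetic.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(5)
phase = rng.normal(0.0, 0.2, (128, 128))     # background phase (rad)
phase[40:70, 50:90] += 1.2                   # stronger response over a flaw

t = threshold_otsu(phase)
defect = phase > t
pixel_area_mm2 = 0.05 ** 2                   # assumed 50-um pixel pitch
print(f"threshold = {t:.2f} rad, "
      f"flaw area ~ {defect.sum() * pixel_area_mm2:.1f} mm^2")
```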
Yao, Y; Nguyen, T D; Pandya, S; Zhang, Y; Hurtado Rúa, S; Kovanlikaya, I; Kuceyeski, A; Liu, Z; Wang, Y; Gauthier, S A
2018-02-01
A hyperintense rim on susceptibility in chronic MS lesions is consistent with iron deposition, and the purpose of this study was to quantify iron-related myelin damage within these lesions compared with lesions without a rim. Forty-six patients underwent 2 longitudinal quantitative susceptibility mapping (QSM) scans with automatic zero referencing, with a mean interval of 28.9 ± 11.4 months. Myelin water fraction mapping using fast acquisition with spiral trajectory and T2 prep was obtained at the second time point to measure myelin damage. Mixed-effects models were used to assess lesion QSM and myelin water fraction values. QSM values were on average 6.8 parts per billion higher in 116 rim-positive lesions compared with 441 rim-negative lesions (P < .001). All rim-positive lesions retained a hyperintense rim over time, with increasing QSM values in both the rim and core regions (P < .001). QSM values and myelin water fraction in rim-positive lesions decreased from rim to core, consistent with rim iron deposition. Whole-lesion myelin water fractions for rim-positive and rim-negative lesions were 0.055 ± 0.07 and 0.066 ± 0.04, respectively. In the mixed-effects model, rim-positive lesions had on average 0.01 lower myelin water fraction than rim-negative lesions (P < .001). The volume of the rim at the initial QSM scan was negatively associated with follow-up myelin water fraction (P < .01). QSM rim-positive lesions maintained a hyperintense rim, increased in susceptibility, and had more myelin damage compared with rim-negative lesions. Our results are consistent with the identification of chronic active MS lesions and may provide a target for therapeutic interventions to reduce myelin damage.
Forkert, N D; Cheng, B; Kemmling, A; Thomalla, G; Fiehler, J
2014-01-01
The objective of this work is to present the software tool ANTONIA, which has been developed to facilitate quantitative analysis of perfusion-weighted MRI (PWI) datasets in general, as well as the subsequent multi-parametric analysis of additional datasets for the specific purpose of evaluating acute ischemic stroke patient datasets. Three different methods for the analysis of DSC or DCE PWI datasets are currently implemented in ANTONIA and can be selected case-specifically based on the study protocol. These methods comprise a curve-fitting method as well as a deconvolution-based and a deconvolution-free method integrating a previously defined arterial input function. The perfusion analysis is extended for the purpose of acute ischemic stroke analysis by additional methods that enable an automatic atlas-based selection of the arterial input function, an analysis of the perfusion-diffusion and DWI-FLAIR mismatch, as well as segmentation-based volumetric analyses. For reliability evaluation, the described software tool was used by two observers for quantitative analysis of 15 datasets from acute ischemic stroke patients to extract the acute lesion core volume, FLAIR ratio, perfusion-diffusion mismatch volume with manually as well as automatically selected arterial input functions, and follow-up lesion volume. The results of this evaluation revealed that the described software tool leads to highly reproducible results for all parameters if the automatic arterial input function selection method is used. Due to the broad selection of processing methods available, ANTONIA is especially helpful for supporting image-based perfusion and acute ischemic stroke research projects.
Optimization and automation of quantitative NMR data extraction.
Bernstein, Michael A; Sýkora, Stan; Peng, Chen; Barba, Agustín; Cobas, Carlos
2013-06-18
NMR is routinely used to quantitate chemical species. The necessary experimental procedures for acquiring quantitative data are well known, but relatively little attention has been paid to data processing and analysis. We describe here a robust expert system that can automatically choose the best signals in a sample for overall concentration determination and determine analyte concentration using all accepted methods. The algorithm is based on complete deconvolution of the spectrum, which makes it tolerant of cases where signals are very close to one another, and includes robust methods for the automatic classification of NMR resonances and molecule-to-spectrum multiplet assignment. With the functionality in place and optimized, it is then a relatively simple matter to apply the same workflow to data in a fully automatic way. The procedure is desirable for both its inherent performance and its applicability to NMR data acquired for very large sample sets.
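Quantitation against an internal standard reduces to normalizing integrated signal areas per proton; a minimal sketch with placeholder values follows (the expert system described above automates signal selection and uses full deconvolution rather than simple integrals).

```python
# Hedged sketch of absolute quantitation by internal standard: analyte
# concentration from integrated signal areas normalized per proton.
# All numbers are placeholders.
I_analyte, n_analyte = 152.0, 2   # integral and proton count of analyte signal
I_std, n_std = 100.0, 9           # internal standard (e.g. a 9H singlet)
c_std = 5.0                       # mM, known standard concentration

c_analyte = (I_analyte / n_analyte) / (I_std / n_std) * c_std
print(f"analyte concentration ~ {c_analyte:.2f} mM")
```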
Automatic spatiotemporal matching of detected pleural thickenings
NASA Astrophysics Data System (ADS)
Chaisaowong, Kraisorn; Keller, Simon Kai; Kraus, Thomas
2014-01-01
Pleural thickenings can be found in the lungs of asbestos-exposed patients. Non-invasive diagnosis including CT imaging can detect aggressive malignant pleural mesothelioma in its early stage. In order to create quantitative documentation of automatically detected pleural thickenings over time, the differences in volume and thickness of the detected thickenings have to be calculated. Physicians usually estimate the change of each thickening via visual comparison, which provides neither quantitative nor qualitative measures. In this work, automatic spatiotemporal matching techniques for the detected pleural thickenings at two points in time, based on semi-automatic registration, have been developed, implemented, and tested, so that the same thickening can be compared fully automatically. As a result, the mapping technique using principal component analysis turns out to be more advantageous than the feature-based mapping using the centroid and mean Hounsfield units of each thickening: sensitivity improved to 98.46% from 42.19%, while the accuracy of the feature-based mapping is only slightly higher (84.38% vs. 76.19%).
Sarker, Abeed; O'Connor, Karen; Ginn, Rachel; Scotch, Matthew; Smith, Karen; Malone, Dan; Gonzalez, Graciela
2016-03-01
Prescription medication overdose is the fastest growing drug-related problem in the USA. The growing nature of this problem necessitates the implementation of improved monitoring strategies for investigating the prevalence and patterns of abuse of specific medications. Our primary aims were to assess the possibility of utilizing social media as a resource for automatic monitoring of prescription medication abuse and to devise an automatic classification technique that can identify potentially abuse-indicating user posts. We collected Twitter user posts (tweets) associated with three commonly abused medications (Adderall®, oxycodone, and quetiapine). We manually annotated 6400 tweets mentioning these three medications and a control medication (metformin) that is not the subject of abuse due to its mechanism of action. We performed quantitative and qualitative analyses of the annotated data to determine whether posts on Twitter contain signals of prescription medication abuse. Finally, we designed an automatic supervised classification technique to distinguish posts containing signals of medication abuse from those that do not, and assessed the utility of Twitter in investigating patterns of abuse over time. Our analyses show that clear signals of medication abuse can be drawn from Twitter posts and that the percentage of tweets containing abuse signals is significantly higher for the three case medications (Adderall®: 23%, quetiapine: 5.0%, oxycodone: 12%) than for the control medication (metformin: 0.3%). Our automatic classification approach achieves 82% accuracy overall (medication abuse class recall: 0.51, precision: 0.41, F measure: 0.46). To illustrate the utility of automatic classification, we show how the classification data can be used to analyze abuse patterns over time. Our study indicates that social media can be a crucial resource for obtaining abuse-related information for medications, and that automatic approaches involving supervised classification and natural language processing hold promise for essential future monitoring and intervention tasks.
A novel automatic method for monitoring Tourette motor tics through a wearable device.
Bernabei, Michel; Preatoni, Ezio; Mendez, Martin; Piccini, Luca; Porta, Mauro; Andreoni, Giuseppe
2010-09-15
The aim of this study was to propose a novel automatic method for quantifying motor tics caused by Tourette syndrome (TS). In this preliminary report, the feasibility of the monitoring process was tested over a series of standard clinical trials in a population of 12 subjects affected by TS. A wearable instrument with an embedded three-axial accelerometer was used to detect and classify motor tics during standing and walking activities. An algorithm was devised to analyze the acceleration data by eliminating noise, detecting peaks connected to pathological events, and classifying the intensity and frequency of motor tics into quantitative scores. These indexes were compared with video-based ones provided by expert clinicians, which were taken as the gold standard. Sensitivity, specificity, and accuracy of tic detection were estimated, and an agreement analysis was performed through least-squares regression and the Bland-Altman test. The tic recognition algorithm showed sensitivity = 80.8% ± 8.5% (mean ± SD), specificity = 75.8% ± 17.3%, and accuracy = 80.5% ± 12.2%. The agreement study showed that automatic detection tended to overestimate the number of tics that occurred, although this appeared to be a systematic error due to the different recognition principles of the wearable and video-based systems. Furthermore, there was substantial concurrence with the gold standard in estimating the severity indexes. The proposed methodology gave promising performance in terms of automatic motor-tic detection and classification in a standard clinical context. The system may provide physicians with a quantitative aid for TS assessment. Further developments will focus on extending its application to everyday long-term monitoring outside clinical environments.
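Peak-based tic detection from an accelerometer magnitude signal can be sketched as below; the simulated bursts, amplitude threshold, and minimum peak spacing are illustrative assumptions, not the published algorithm's parameters.

```python
# Hedged sketch: detect candidate motor-tic events as peaks in the
# acceleration magnitude, then summarize frequency and intensity.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                   # Hz, assumed sampling rate
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(6)
acc = rng.normal(0, 0.05, t.size)            # gravity-removed magnitude (g)
for t0 in (4.0, 11.5, 22.3):                 # three simulated jerky tics
    burst = np.abs(t - t0) < 0.15
    acc[burst] += 0.8 * np.hanning(burst.sum())

peaks, props = find_peaks(np.abs(acc), height=0.3, distance=int(0.5 * fs))
rate = peaks.size / (t[-1] / 60.0)
print(f"{peaks.size} candidate tics, {rate:.1f} tics/min, "
      f"mean intensity {props['peak_heights'].mean():.2f} g")
```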
Effect of normal aging and of Alzheimer's disease on episodic memory.
Le Moal, S; Reymann, J M; Thomas, V; Cattenoz, C; Lieury, A; Allain, H
1997-01-01
The performances of 12 patients with Alzheimer's disease (AD), 15 healthy elderly subjects and 20 young healthy volunteers were compared on two episodic memory tests. The first, a learning test of semantically related words, enabled an assessment of the effect of semantic relationships on word learning by controlling the encoding and retrieval processes. The second, a dual coding test, assessed the automatic processes operating during the encoding of drawings. The results demonstrated quantitative and qualitative differences between the populations. Manifestations of episodic memory deficit in AD patients were shown not only by lower performance scores than in elderly controls, but also by the lack of any effect of semantic cues and the production of a large number of extra-list intrusions. The automatic processes underlying dual coding appear to be spared in AD, although more time is needed to process information than in young or elderly subjects. These findings confirm earlier data and emphasize the preservation of certain memory processes (dual coding) in AD, which could be used in future therapeutic approaches.
Semi-quantitative assessment of pulmonary perfusion in children using dynamic contrast-enhanced MRI
NASA Astrophysics Data System (ADS)
Fetita, Catalin; Thong, William E.; Ou, Phalla
2013-03-01
This paper addresses the semi-quantitative assessment of pulmonary perfusion from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in a study population mainly composed of children with pulmonary malformations. The proposed automatic analysis approach is based on the indicator-dilution theory introduced in 1954. First, a robust method is developed to segment the pulmonary artery and the lungs from anatomical MRI data, exploiting 2D and 3D mathematical morphology operators. Second, the time-dependent contrast signal of the lung regions is deconvolved by the arterial input function to assess the local hemodynamic system parameters, i.e., mean transit time, pulmonary blood volume and pulmonary blood flow. The discrete deconvolution method implemented here uses truncated singular value decomposition (tSVD). Parametric images for the entire lungs are generated as additional elements for diagnosis and quantitative follow-up. The preliminary results attest to the feasibility of perfusion quantification in pulmonary DCE-MRI and open an interesting alternative to scintigraphy for this type of evaluation, to be considered at least as a preliminary step in diagnosis given the wide availability and non-invasive nature of the technique.
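A minimal sketch of deconvolution by truncated SVD as named above: discretize the convolution with the arterial input function as a lower-triangular matrix, truncate small singular values, and read perfusion parameters off the recovered residue function. The curves are synthetic and the 15% truncation threshold is an assumption.

```python
# Hedged sketch: truncated-SVD deconvolution of a tissue curve by the
# arterial input function (AIF) to recover the residue function R(t).
import numpy as np
from scipy.linalg import toeplitz

dt, n = 0.5, 60                              # s per frame, number of frames
t = np.arange(n) * dt
aif = t * np.exp(-t / 2.0)                   # gamma-variate-like input
r_true = np.exp(-t / 4.0)                    # true residue function
tissue = dt * np.convolve(aif, r_true)[:n]   # forward model c = (AIF * R) dt

A = dt * np.tril(toeplitz(aif))              # discrete convolution matrix
U, s, Vt = np.linalg.svd(A)
keep = s > 0.15 * s[0]                       # truncate small singular values
r_est = Vt.T @ ((U.T @ tissue) / np.where(keep, s, np.inf))

flow = r_est.max()                           # proportional to blood flow
volume = r_est.sum() * dt                    # proportional to blood volume
print(f"flow ~ {flow:.2f}, volume ~ {volume:.2f}, MTT ~ {volume / flow:.1f} s")
```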
ERIC Educational Resources Information Center
Hamade, Rachel; Hewlett, Nigel; Scanlon, Emer
2006-01-01
This study aimed to evaluate a new automatic tracheostoma valve: the Provox FreeHands HME (manufactured by Atos Medical AB, Sweden). Data from four laryngectomee participants using automatic and also manual occlusion were subjected to acoustic and perceptual analysis. The main results were a significant decrease, from the manual to automatic…
Nguyen, Thanh; Bui, Vy; Lam, Van; Raub, Christopher B; Chang, Lin-Ching; Nehmetallah, George
2017-06-26
We propose a fully automatic technique to obtain aberration-free quantitative phase imaging in digital holographic microscopy (DHM) based on deep learning. Traditional DHM solves the phase aberration compensation problem by manually detecting the background for quantitative measurement. This is a drawback for real-time implementation and for dynamic processes such as cell migration phenomena. A recent automatic aberration compensation approach using principal component analysis (PCA) in DHM avoids human intervention regardless of the cells' motion. However, it corrects spherical/elliptical aberration only and disregards higher-order aberrations. Traditional image segmentation techniques can be employed to spatially detect cell locations. Ideally, automatic image segmentation techniques make real-time measurement possible. However, existing automatic unsupervised segmentation techniques perform poorly when applied to DHM phase images because of aberrations and speckle noise. In this paper, we propose a novel method that combines a supervised deep learning technique based on a convolutional neural network (CNN) with Zernike polynomial fitting (ZPF). The deep learning CNN is implemented to perform automatic background region detection, which allows ZPF to compute the self-conjugated phase to compensate for most aberrations.
Aquila, Iolanda; González, Ariana; Fernández-Golfín, Covadonga; Rincón, Luis Miguel; Casas, Eduardo; García, Ana; Hinojar, Rocio; Jiménez-Nacher, José Julio; Zamorano, José Luis
2016-05-17
3D transesophageal echocardiography (TEE) is superior to 2D TEE in the quantitative anatomic evaluation of the mitral valve (MV), but it shows limitations regarding automatic quantification. Here, we tested the inter-/intra-observer reproducibility of novel fully automated software in the evaluation of MV anatomy compared to manual 3D assessment. Thirty-six of 61 screened patients referred to our Cardiac Imaging Unit for TEE were retrospectively included. 3D TEE analysis was performed both manually and with the automated software by two independent operators. Mitral annular area, intercommissural distance, anterior leaflet length and posterior leaflet length were assessed. A significant correlation between the two methods was found for all variables: intercommissural diameter (r = 0.84, p < 0.01), mitral annular area (r = 0.94, p < 0.01), anterior leaflet length (r = 0.83, p < 0.01) and posterior leaflet length (r = 0.67, p < 0.01). Interobserver variability assessed by the intraclass correlation coefficient was superior for the automatic software: intercommissural distance 0.997 vs. 0.76; mitral annular area 0.957 vs. 0.858; anterior leaflet length 0.963 vs. 0.734 and posterior leaflet length 0.936 vs. 0.838. Intraobserver variability was good for both methods, with a better level of agreement for the automatic software. The novel 3D automated software is reproducible in MV anatomy assessment. The incorporation of this new tool into clinical MV assessment may improve patient selection and outcomes for MV interventions as well as patient diagnosis and prognosis stratification. Yet, high-quality 3D images are indispensable.
NASA Astrophysics Data System (ADS)
Luo, Yun-Gang; Ko, Jacky Kl; Shi, Lin; Guan, Yuefeng; Li, Linong; Qin, Jing; Heng, Pheng-Ann; Chu, Winnie Cw; Wang, Defeng
2015-07-01
Myocardial iron loading in thalassemia patients can be identified using T2* magnetic resonance imaging (MRI). To quantitatively assess cardiac iron loading, we proposed an effective algorithm to segment the myocardium in aligned free induction decay image sequences based on morphological operations and the geodesic active contour (GAC). Twenty-six patients with thalassemia major (10 male and 16 female) were recruited to undergo a thoracic MRI scan in the short axis view. Free induction decay images were registered for T2* mapping. The GAC was utilized to segment the aligned MR images with a robust initialization. Segmented myocardium regions were divided into sectors for a region-based quantification of cardiac iron loading. Our proposed automatic segmentation approach achieved a true positive rate of 84.6% and a false positive rate of 53.8%. The area difference between manual and automatic segmentation was 25.5% after 1000 iterations. Results from the T2* analysis indicated that regions with T2* values lower than 20 ms suffered from heavy iron loading in thalassemia major patients. The proposed method benefited from the abundant edge information of the free induction decay sequential MRI. Experimental results demonstrated that the proposed method is feasible for myocardium segmentation and clinically applicable to measuring myocardial iron loading.
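A minimal sketch of the geodesic active contour step, using scikit-image's morphological GAC with a circular initialization; the `center`/`radius` initialization and the parameter values are assumptions, not the authors' settings.

```python
import numpy as np
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def segment_myocardium(image, center, radius, iterations=200):
    """Geodesic active contour segmentation of a grayscale MR slice.
    `center`/`radius` give a rough circular initialization near the
    myocardium (any robust initialization would do)."""
    # Edge-stopping map: values approach zero near strong gradients
    gimage = inverse_gaussian_gradient(image.astype(float))
    # Circular initial level set built with plain numpy
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    init_ls = ((yy - center[0])**2 + (xx - center[1])**2
               <= radius**2).astype(np.int8)
    # balloon=-1 shrinks the contour onto the nearest boundary
    return morphological_geodesic_active_contour(
        gimage, iterations, init_level_set=init_ls,
        smoothing=1, balloon=-1, threshold='auto')
```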
Colombi, Davide; Dinkel, Julien; Weinheimer, Oliver; Obermayer, Berenike; Buzan, Teodora; Nabers, Diana; Bauer, Claudia; Oltmanns, Ute; Palmowski, Karin; Herth, Felix; Kauczor, Hans Ulrich; Sverzellati, Nicola
2015-01-01
Objectives To describe changes over time in the extent of idiopathic pulmonary fibrosis (IPF) at multidetector computed tomography (MDCT) assessed by semi-quantitative visual scores (VSs) and fully automatic histogram-based quantitative evaluation, and to test the relationship between these two methods of quantification. Methods Forty IPF patients (median age: 70 y, interquartile: 62-75 years; M:F, 33:7) who underwent two MDCT examinations at different time points with a median interval of 13 months (interquartile: 10-17 months) were retrospectively evaluated. The in-house software YACTA automatically quantified the lung density histogram (10th-90th percentile in 5th percentile steps). Longitudinal changes in VSs and in the percentiles of the attenuation histogram were obtained in 20 untreated patients and 20 patients treated with pirfenidone. Pearson correlation analysis was used to test the relationship between VSs and selected percentiles. Results On follow-up MDCT, the visual overall extent of parenchymal abnormalities (OE) increased by a median of 5 %/year (interquartile: 0 %/y to +11 %/y). A substantial difference was found between treated and untreated patients in the HU changes of the 40th and 80th percentiles of the density histogram. Correlation analysis between VSs and selected percentiles showed a higher correlation between the change (Δ) in OE and Δ 40th percentile (r=0.69; p<0.001) than with Δ 80th percentile (r=0.58; p<0.001); a closer correlation was found between Δ ground-glass extent and Δ 40th percentile (r=0.66, p<0.001) than with Δ 80th percentile (r=0.47, p=0.002), while Δ reticulations correlated better with Δ 80th percentile (r=0.56, p<0.001) than with Δ 40th percentile (r=0.43, p=0.003). Conclusions There is a relevant and fully automatically measurable difference at MDCT in VSs and in histogram analysis at one-year follow-up of IPF patients, whether treated or untreated: Δ 40th percentile might reflect the change in overall extent of lung abnormalities, notably of the ground-glass pattern, while Δ 80th percentile might reveal the course of reticular opacities. PMID:26110421
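The histogram summary described above reduces to a few lines of array code; this sketch assumes `lung_hu` holds the HU values of segmented lung voxels (the segmentation itself, done by YACTA here, is out of scope).

```python
import numpy as np

def density_percentiles(lung_hu):
    """Attenuation-histogram percentiles (10th-90th in 5th percentile
    steps), mirroring the YACTA-style histogram summary."""
    return {p: np.percentile(lung_hu, p) for p in range(10, 95, 5)}

def percentile_change(baseline_hu, followup_hu, p):
    """HU change of the p-th percentile between two time points."""
    return np.percentile(followup_hu, p) - np.percentile(baseline_hu, p)

# e.g. delta_40 = percentile_change(hu_t0, hu_t1, 40)  # ground-glass proxy
#      delta_80 = percentile_change(hu_t0, hu_t1, 80)  # reticulation proxy
```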
Geraghty, John P; Grogan, Garry; Ebert, Martin A
2013-04-30
This study investigates the variation in segmentation of several pelvic anatomical structures on computed tomography (CT) between multiple observers and a commercial automatic segmentation method, in the context of quality assurance and evaluation during a multicentre clinical trial. CT scans of two prostate cancer patients ('benchmarking cases'), one high risk (HR) and one intermediate risk (IR), were sent to multiple radiotherapy centres for segmentation of prostate, rectum and bladder structures according to the TROG 03.04 "RADAR" trial protocol definitions. The same structures were automatically segmented using iPlan software for the same two patients, allowing structures defined by automatic segmentation to be quantitatively compared with those defined by multiple observers. A sample of twenty trial patient datasets was also used to automatically generate anatomical structures for quantitative comparison with structures defined by individual observers for the same datasets. There was considerable agreement among all observers and automatic segmentation of the benchmarking cases for the bladder (mean spatial variations < 0.4 cm across the majority of image slices). Although there was some variation in interpretation of the superior-inferior (cranio-caudal) extent of the rectum, human-observer contours were typically within a mean of 0.6 cm of automatically defined contours. Prostate structures were more consistent for the HR case than the IR case, with all human observers segmenting a prostate with considerably more volume (mean +113.3%) than that automatically segmented. Similar results were seen across the twenty sample datasets, with disagreement between iPlan and observers dominant at the prostatic apex and the superior part of the rectum, which is consistent with observations made during quality assurance reviews during the trial. This study has demonstrated quantitative analysis for the comparison of multi-observer segmentation studies. For automatic segmentation algorithms based on image registration, as in iPlan, it is apparent that agreement between observer and automatic segmentation will be a function of patient-specific image characteristics, particularly for anatomy with poor contrast definition. For this reason, it is suggested that automatic registration based on transformation of a single reference dataset adds a significant systematic bias to the resulting volumes, and their use in the context of a multicentre trial should be carefully considered.
Geng, Hua; Todd, Naomi M; Devlin-Mullin, Aine; Poologasundarampillai, Gowsihan; Kim, Taek Bo; Madi, Kamel; Cartmell, Sarah; Mitchell, Christopher A; Jones, Julian R; Lee, Peter D
2016-06-01
A correlative imaging methodology was developed to accurately quantify bone formation in the complex lattice structure of additive manufactured implants. Micro computed tomography (μCT) and histomorphometry were combined, integrating the best features of both while demonstrating the limitations of each imaging modality. This semi-automatic methodology registered each modality using a coarse-graining technique to speed up the registration of 2D histology sections to high-resolution 3D μCT datasets. Once registered, qualitative and quantitative histomorphometric bone descriptors were directly correlated with 3D quantitative bone descriptors such as bone ingrowth and bone contact. The correlative imaging allowed the significant volumetric shrinkage of histology sections to be quantified for the first time (~15 %). The technique also demonstrated the importance of the location of the histological section, showing that an offset of up to 30 % can be introduced. The results were used to quantitatively demonstrate the effectiveness of 3D-printed titanium lattice implants.
NASA Astrophysics Data System (ADS)
Murakoshi, Dai; Hirota, Kazuhiro; Ishii, Hiroyasu; Hashimoto, Atsushi; Ebata, Tetsurou; Irisawa, Kaku; Wada, Takatsugu; Hayakawa, Toshiro; Itoh, Kenji; Ishihara, Miya
2018-02-01
Photoacoustic (PA) imaging technology is expected to be applied to the clinical assessment of peripheral vascularity. We started a clinical evaluation with the prototype PA imaging system we recently developed. The prototype system comprises an in-house Q-switched Alexandrite laser emitting short pulses at a wavelength of 750 nm, a handheld ultrasound transducer with integrated illumination optics, and signal processing for PA image reconstruction implemented in a clinical ultrasound (US) system. For the purpose of quantitative assessment of PA images, an image analyzing function was developed and applied to clinical PA images. In this function, vascularity, derived from the PA signal intensity within a prescribed threshold range, is defined as a numerical index of vessel fulfillment and is calculated for a prescribed region of interest (ROI). The skin surface is automatically detected from the B-mode image acquired simultaneously with the PA image. The skin-surface position is used to place the ROI objectively while avoiding unwanted signals, such as artifacts caused by melanin pigment in the epidermal layer, which absorbs the laser emission and generates strong PA signals. Multiple images are available to support the scanned image set for 3D viewing. PA images of several fingers of patients with systemic sclerosis (SSc) were quantitatively assessed. Since the artifact region is trimmed off in the PA images, the visibility of vessels with rather low PA signal intensity on the 3D projection image was enhanced and the reliability of the quantitative analysis was improved.
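A simplified sketch of the skin-surface detection and ROI placement described above, assuming `bmode` is a 2D (depth x lateral) intensity array; the thresholding rule and the offsets are illustrative, not the prototype's actual processing.

```python
import numpy as np

def detect_skin_surface(bmode, threshold=None):
    """For each lateral position (column), return the first depth index
    whose B-mode intensity exceeds a threshold -- a simple stand-in for
    automatic skin-surface detection."""
    if threshold is None:
        threshold = bmode.mean() + 2 * bmode.std()
    above = bmode >= threshold            # (depth, lateral) boolean map
    surface = np.argmax(above, axis=0).astype(float)
    surface[~above.any(axis=0)] = np.nan  # columns with no echo at all
    return surface

def place_roi(surface, offset_px, depth_px):
    """ROI depth range per column: start just below the epidermal layer
    (skipping strong melanin artifacts) and extend a fixed depth."""
    top = surface + offset_px
    return top, top + depth_px
```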
A preliminary study of DTI Fingerprinting on stroke analysis.
Ma, Heather T; Ye, Chenfei; Wu, Jun; Yang, Pengfei; Chen, Xuhui; Yang, Zhengyi; Ma, Jingbo
2014-01-01
DTI (Diffusion Tensor Imaging) is a well-known MRI (Magnetic Resonance Imaging) technique which provides useful structural information about the human brain. However, quantitative measurement of the physiological variation among subtypes of ischemic stroke is not yet available. An automatic, quantitative method for DTI analysis would enhance the application of DTI in the clinic. In this study, we proposed a DTI Fingerprinting technology to quantitatively analyze white matter tissue, which was applied to stroke classification. The TBSS (Tract Based Spatial Statistics) method was employed to generate masks automatically. To evaluate the clustering performance of the automatic method, lesion ROIs (Regions of Interest) were manually drawn on the DWI images as a reference. The results from DTI Fingerprinting were compared with those obtained from the reference ROIs. They indicate that DTI Fingerprinting could identify different states of ischemic stroke and has promising potential to provide a more comprehensive measure of DTI data. Further development should be carried out to improve DTI Fingerprinting technology for the clinic.
Automatic 3D segmentation of multiphoton images: a key step for the quantification of human skin.
Decencière, Etienne; Tancrède-Bohin, Emmanuelle; Dokládal, Petr; Koudoro, Serge; Pena, Ana-Maria; Baldeweck, Thérèse
2013-05-01
Multiphoton microscopy has emerged in the past decade as a useful noninvasive imaging technique for in vivo human skin characterization. However, it has not been used until now in evaluation clinical trials, mainly because of the lack of specific image processing tools that would allow the investigator to extract pertinent quantitative three-dimensional (3D) information from the different skin components. We propose a 3D automatic segmentation method for multiphoton images, which is a key step for epidermis and dermis quantification. This method, based on the morphological watershed and graph cuts algorithms, takes into account the real shape of the skin surface and of the dermal-epidermal junction, and allows separating the epidermis and the superficial dermis in 3D. The automatic segmentation method and the associated quantitative measurements have been developed and validated on a clinical database designed for aging characterization. The segmentation achieves its goals for epidermis-dermis separation and allows relevant quantitative measurements inside the different skin compartments. This study shows that multiphoton microscopy associated with specific image processing tools provides access to new quantitative measurements on the various skin components. The proposed 3D automatic segmentation method will contribute to building a powerful tool for characterizing human skin condition. To our knowledge, this is the first 3D approach to the segmentation and quantification of these original images. © 2013 John Wiley & Sons A/S. Published by Blackwell Publishing Ltd.
Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun
2018-05-01
Thermographic inspection has been widely applied to non-destructive testing and evaluation owing to its rapid, contactless, large-surface-area detection capabilities. Image segmentation is considered essential for identifying and sizing defects. To attain high performance, specific physics-based models that describe defect generation and enable the precise extraction of the target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns using an unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in the laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold and render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography is implemented as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index, the F-score, has been adopted to objectively evaluate the performance of different segmentation algorithms.
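For reference, the F-score used above as the assessment index can be computed from binary masks as follows (a generic definition; the paper's exact weighting of precision versus recall is not specified here).

```python
import numpy as np

def f_score(pred, truth, beta=1.0):
    """F-score between a binary segmentation `pred` and a ground-truth
    mask `truth`; beta balances precision against recall."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(truth.sum(), 1)
    if precision + recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```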
Potter, Jacqueline N; Hanley, Gregory P; Augustine, Matotopa; Clay, Casey J; Phelps, Meredith C
2013-01-01
Use of automatically reinforced stereotypy as reinforcement has been shown to be successful for increasing socially desirable behaviors in persons with intellectual disabilities (Charlop, Kurtz, & Casey, 1990; Hanley, Iwata, Thompson, & Lindberg, 2000; Hung, 1978). A component analysis of this treatment was conducted with 3 adolescents who had been diagnosed with autism, and then extended by (a) progressively increasing the quantitative and qualitative aspects of the response requirement to earn access to stereotypy, (b) arranging objective measures of client preference for contingent access to stereotypy compared to other relevant treatments for their automatically reinforced stereotypy, and (c) assessing the social validity of this treatment with other relevant stakeholders. Implications for addressing stereotypy and increasing the leisure skills of adolescents with autism are discussed. © Society for the Experimental Analysis of Behavior.
Application of image recognition-based automatic hyphae detection in fungal keratitis.
Wu, Xuelian; Tao, Yuan; Qiu, Qingchen; Wu, Xinyi
2018-03-01
The purpose of this study is to evaluate the accuracy of two methods in the diagnosis of fungal keratitis: automatic hyphae detection based on image recognition, and corneal smear examination. We evaluate the sensitivity and specificity of automatic image-recognition-based hyphae detection in diagnosing fungal keratitis, analyze the consistency between clinical symptoms and hyphal density, and quantify hyphal density using the automatic detection method. In our study, 56 cases of fungal keratitis (single eye only) and 23 cases of bacterial keratitis were included. All cases underwent routine slit lamp biomicroscopy, corneal smear examination, microorganism culture and the assessment of in vivo confocal microscopy images before starting medical treatment. The hyphae in the in vivo confocal microscopy images were then identified by automatic image-recognition-based detection to evaluate its sensitivity and specificity in comparison with corneal smear examination. The density index was then used to assess the severity of infection, correlate it with the patients' clinical symptoms, and evaluate the consistency between them. The accuracy of this technology was superior to corneal smear examination (p < 0.05). The sensitivity of automatic image-recognition-based hyphae detection was 89.29%, and the specificity was 95.65%. The area under the ROC curve was 0.946. The correlation coefficient between the severity grading of fungal keratitis by automatic hyphae detection and the clinical grading was 0.87. Automatic image-recognition-based hyphae detection showed high sensitivity and specificity in identifying fungal keratitis, outperforming corneal smear examination. Compared with conventional manual interpretation of confocal microscopy corneal images, this technology is accurate, stable and does not rely on human expertise, making it most useful to clinicians who are not familiar with fungal keratitis. It can quantify and grade hyphal density and, being noninvasive, can provide an evaluation criterion for fungal keratitis in a timely, accurate, objective and quantitative manner.
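The reported sensitivity, specificity, and ROC area can be recomputed from per-case labels and detector scores as in this sketch (variable names are hypothetical; scikit-learn supplies the AUC).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def operating_point(y_true, y_score, threshold):
    """Sensitivity, specificity, and ROC-AUC for a detector operating
    at `threshold`. y_true: 1 = fungal keratitis, 0 = bacterial;
    y_score: continuous detector output (both hypothetical inputs)."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity, roc_auc_score(y_true, y_score)
```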
2014-01-01
Introduction Fixed orthodontic appliances, despite years of research and development, still raise a lot of controversy because of their potentially destructive influence on enamel. Therefore, it is necessary to quantitatively assess the condition, and therein the thickness, of tooth enamel in order to select the appropriate orthodontic bonding and debonding methodology, as well as to assess the quality of enamel after treatment and the clean-up procedure, in order to choose the most advantageous course of treatment. One such assessment method is optical coherence tomography (OCT), where the measurement of enamel thickness and the 3D reconstruction of image sequences can be performed fully automatically. Material and method OCT images of 180 teeth were obtained with a Topcon 3D OCT-2000 camera. The images were obtained in vitro by performing sequential stages of treatment on all the teeth: before any interference with the enamel, polishing with orthodontic paste, etching and application of a bonding system, orthodontic bracket bonding, orthodontic bracket removal, and cleaning off adhesive residue. A dedicated method for the analysis and processing of images involving median filtering, mathematical morphology, binarization, polynomial approximation and the active contour method is proposed. Results The method enables automatic measurement of tooth enamel thickness in 5 seconds on a Core i5 CPU M460 @ 2.5GHz with 4GB RAM. For one patient, the proposed method of analysis confirmed an enamel thickness loss of 80 μm (from 730 ± 165 μm to 650 ± 129 μm) after polishing with paste, an enamel thickness loss of 435 μm (from 730 ± 165 μm to 295 ± 55 μm) after etching and bonding resin application, and the growth of a 265 μm layer (from 295 ± 55 μm after etching to 560 ± 98 μm), which is the adhesive system. After removing the orthodontic bracket, the adhesive residue was 105 μm, and after cleaning it off, the enamel thickness was 605 μm. The enamel thickness before and after the whole treatment decreased by about 125 μm. Conclusions This paper presents an automatic quantitative method for the assessment of tooth enamel thickness. The method has proven to be an effective diagnostic tool that allows evaluation of the surface and cross-section of tooth enamel after orthodontic treatment with fixed thin-arched braces, and supports proper selection of the methodology and course of treatment. PMID:24755213
Automatic and quantitative measurement of collagen gel contraction using model-guided segmentation
NASA Astrophysics Data System (ADS)
Chen, Hsin-Chen; Yang, Tai-Hua; Thoreson, Andrew R.; Zhao, Chunfeng; Amadio, Peter C.; Sun, Yung-Nien; Su, Fong-Chin; An, Kai-Nan
2013-08-01
Quantitative measurement of collagen gel contraction plays a critical role in the field of tissue engineering because it provides spatial-temporal assessment (e.g., changes of gel area and diameter during the contraction process) reflecting cell behavior and tissue material properties. To date, the assessment of collagen gels has relied on manual segmentation, which is time-consuming and suffers from serious intra- and inter-observer variability. In this study, we propose an automatic method combining various image processing techniques to resolve these problems. The proposed method first detects the maximal feasible contraction range from circular references (e.g., the culture dish) and avoids the interference of irrelevant objects in the given image. Then, a three-step color conversion strategy is applied to normalize and enhance the contrast between the gel and background. We subsequently introduce a deformable circular model which utilizes regional intensity contrast and a circular shape constraint to locate the gel boundary. An adaptive weighting scheme is employed to coordinate the model behavior, so that the proposed system can overcome variations of gel boundary appearance at different contraction stages. Two measurements of collagen gels (i.e., area and diameter) can readily be obtained from the segmentation results. Experimental results, including 120 gel images for accuracy validation, showed high agreement between the proposed method and manual segmentation, with an average Dice similarity coefficient larger than 0.95. The results also demonstrated obvious improvement in the gel contours obtained by the proposed method over two popular generic segmentation methods.
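The validation metric and the two gel measurements reduce to a few lines of array code; this sketch assumes binary masks and a known pixel size, and is not the authors' implementation.

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient between two binary gel masks; the
    study above reports an average value above 0.95 versus manual
    segmentation."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def gel_measurements(mask, pixel_size_mm):
    """Area (mm^2) and equivalent-circle diameter (mm) of a gel mask."""
    area = mask.sum() * pixel_size_mm ** 2
    diameter = 2.0 * np.sqrt(area / np.pi)
    return area, diameter
```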
NASA Astrophysics Data System (ADS)
Schiavazzi, Daniele E.; Kung, Ethan O.; Dorfman, Adam L.; Hsia, Tain-Yen; Baretta, Alessia; Arbia, Gregory; Marsden, Alison L.
2013-11-01
Congenital heart diseases such as hypoplastic left heart syndrome annually affect about 3% of births in the US alone. Surgical palliation of single ventricle patients is performed in stages. Following the stage 2 surgical procedure or other previous conditions, a stenosis of the left pulmonary artery (LPA) is often observed, raising the clinical question of whether or not it should be treated. The severity of stenoses is commonly assessed through geometric inspection or catheter-based in vivo pressure measurements, with limited quantitative information about patient-specific physiology. The present study uses a multiscale CFD approach to assess the severity of LPA stenoses. A lumped-parameter 0D model is used to simulate stage 2 circulation, and its parameters are automatically identified, accounting for uncertainty in the clinical data available for a cohort of patients. The importance of these parameters, whether alone or in groups, is also ranked using forward uncertainty propagation methods. Various stenosis levels are applied to the three-dimensional SVC-PA junction model using a dual mesh-morphing approach. Traditional assessment methodologies are compared to our findings and critically discussed.
Apparatus enables automatic microanalysis of body fluids
NASA Technical Reports Server (NTRS)
Soffen, G. A.; Stuart, J. L.
1966-01-01
Apparatus will automatically and quantitatively determine body fluid constituents which are amenable to analysis by fluorometry or colorimetry. The results of the tests are displayed as percentages of full scale deflection on a strip-chart recorder. The apparatus can also be adapted for microanalysis of various other fluids.
Earle, Paul S.; Wald, David J.; Jaiswal, Kishor S.; Allen, Trevor I.; Hearne, Michael G.; Marano, Kristin D.; Hotovec, Alicia J.; Fee, Jeremy
2009-01-01
Within minutes of a significant earthquake anywhere on the globe, the U.S. Geological Survey (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system assesses its potential societal impact. PAGER automatically estimates the number of people exposed to severe ground shaking and the shaking intensity at affected cities. Accompanying maps of the epicentral region show the population distribution and estimated ground-shaking intensity. A regionally specific comment describes the inferred vulnerability of the regional building inventory and, when available, lists recent nearby earthquakes and their effects. PAGER's results are posted on the USGS Earthquake Program Web site (http://earthquake.usgs.gov/), consolidated in a concise one-page report, and sent in near real-time to emergency responders, government agencies, and the media. Both rapid and accurate results are obtained through manual and automatic updates of PAGER's content in the hours following significant earthquakes. These updates incorporate the most recent estimates of earthquake location, magnitude, faulting geometry, and first-hand accounts of shaking. PAGER relies on a rich set of earthquake analysis and assessment tools operated by the USGS and contributing Advanced National Seismic System (ANSS) regional networks. A focused research effort is underway to extend PAGER's near real-time capabilities beyond population exposure to quantitative estimates of fatalities, injuries, and displaced population.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiely, J Blanco; Olszanski, A; Both, S
Purpose: To develop a quantitative decision-making metric for automatically detecting irregular breathing, using a large patient population that received phase-sorted 4DCT. Methods: This study employed two patient cohorts. Cohort#1 contained 256 patients who received a phase-sorted 4DCT. Cohort#2 contained 86 patients who received three weekly phase-sorted 4DCT scans. A previously published technique used a single abdominal surrogate to calculate the ratio of extreme inhalation tidal volume to normal inhalation tidal volume, referred to as the κ metric. Since a single surrogate is standard for phase-sorted 4DCT in radiation oncology clinical practice, tidal volume was not quantified. Without tidal volume, the absolute κ metric could not be determined, so a relative κ (κrel) metric was defined based on the measured surrogate amplitude instead of tidal volume. Receiver operating characteristic (ROC) curves were used to quantitatively determine the optimal cutoff value (jk) and efficiency cutoff value (τk) of κrel to automatically identify irregular breathing that would reduce the image quality of phase-sorted 4DCT. The discriminatory accuracy (area under the ROC curve) of κrel was calculated by a trapezoidal numeric integration technique. Results: The discriminatory accuracy of κrel was found to be 0.746. The key values of jk and τk were calculated to be 1.45 and 1.72, respectively. For values of κrel such that jk ≤ κrel ≤ τk, the decision to reacquire the 4DCT would be at the discretion of the physician. This accounted for only 11.9% of the patients in this study. The magnitude of κrel held consistent over 3 weeks for 73% of the patients in cohort#2. Conclusion: The decision-making metric κrel was shown to be an accurate classifier of irregular breathing in a large patient population. This work provides an automatic quantitative decision-making metric to quickly and accurately assess the extent to which irregular breathing occurs during phase-sorted 4DCT.
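One standard way to derive a cutoff like jk from an ROC curve is Youden's J, sketched below; the paper's exact cutoff criterion is not stated here, so treat this as an assumed stand-in.

```python
import numpy as np
from sklearn.metrics import auc, roc_curve

def optimal_cutoff(kappa_rel, irregular):
    """Pick the kappa_rel cutoff maximizing Youden's J = TPR - FPR.
    `irregular` is a binary label (1 = irregular breathing); returns
    the chosen threshold and the area under the ROC curve."""
    fpr, tpr, thresholds = roc_curve(irregular, kappa_rel)
    j = tpr - fpr
    return thresholds[np.argmax(j)], auc(fpr, tpr)
```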
Fully automatic multi-atlas segmentation of CTA for partial volume correction in cardiac SPECT/CT
NASA Astrophysics Data System (ADS)
Liu, Qingyi; Mohy-ud-Din, Hassan; Boutagy, Nabil E.; Jiang, Mingyan; Ren, Silin; Stendahl, John C.; Sinusas, Albert J.; Liu, Chi
2017-05-01
Anatomical-based partial volume correction (PVC) has been shown to improve image quality and quantitative accuracy in cardiac SPECT/CT. However, this method requires manual segmentation of various organs from contrast-enhanced computed tomography angiography (CTA) data. In order to achieve fully automatic CTA segmentation for clinical translation, we investigated the most common multi-atlas segmentation methods. We also modified the multi-atlas segmentation method by introducing a novel label fusion algorithm for multiple organ segmentation to eliminate overlap and gap voxels. To evaluate our proposed automatic segmentation, eight canine 99mTc-labeled red blood cell SPECT/CT datasets that incorporated PVC were analyzed, using the leave-one-out approach. The Dice similarity coefficient of each organ was computed. Compared to the conventional label fusion method, our proposed label fusion method effectively eliminated gaps and overlaps and improved the CTA segmentation accuracy. The anatomical-based PVC of cardiac SPECT images with automatic multi-atlas segmentation provided consistent image quality and quantitative estimation of intramyocardial blood volume, as compared to those derived using manual segmentation. In conclusion, our proposed automatic multi-atlas segmentation method of CTAs is feasible, practical, and facilitates anatomical-based PVC of cardiac SPECT/CT images.
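As a baseline for the label-fusion idea, majority voting over registered atlas label maps already guarantees overlap- and gap-free multi-organ labels, since each voxel receives exactly one argmax label; the sketch below is this generic baseline, not the paper's modified fusion algorithm.

```python
import numpy as np

def fuse_labels(atlas_labels, n_labels):
    """Majority-vote fusion of multi-organ label maps from several
    registered atlases. atlas_labels: integer array of shape
    (n_atlases, X, Y, Z), with 0 reserved for background. Assigning a
    single argmax label per voxel yields no overlaps and no gaps."""
    votes = np.stack([(atlas_labels == lab).sum(axis=0)
                      for lab in range(n_labels)])
    return votes.argmax(axis=0)
```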
Retinal status analysis method based on feature extraction and quantitative grading in OCT images.
Fu, Dongmei; Tong, Hejun; Zheng, Shuang; Luo, Ling; Gao, Fulin; Minar, Jiri
2016-07-22
Optical coherence tomography (OCT) is widely used in ophthalmology for viewing the morphology of the retina, which is important for disease detection and for assessing therapeutic effect. The diagnosis of retinal diseases is based primarily on the subjective analysis of OCT images by trained ophthalmologists. This paper describes an automatic OCT image analysis method for computer-aided disease diagnosis, a critical part of eye fundus diagnosis. The study analyzed 300 OCT images acquired with an Optovue Avanti RTVue XR (Optovue Corp., Fremont, CA). First, a normal retinal reference model based on retinal boundaries was established. Subsequently, two kinds of quantitative methods, based on geometric features and on morphological features, were proposed. A retinal abnormality grading decision-making method is put forward and used in the analysis and evaluation of multiple OCT images. The detailed analysis process is demonstrated on four retinal OCT images with different degrees of abnormality, and the final grading results verify that the analysis method can distinguish abnormal severity and lesion regions. In a simulation on 150 test images, the analysis of retinal status achieved a sensitivity of 0.94 and a specificity of 0.92. The proposed method can speed up the diagnostic process and objectively evaluate retinal status. This paper focuses on an automatic retinal status analysis method based on feature extraction and quantitative grading in OCT images. The method can obtain parameters and features associated with retinal morphology; quantitative analysis and evaluation of these features, combined with the reference model, enables abnormality judgment of the target image and provides a reference for disease diagnosis.
Quantitative Evaluation of Performance during Robot-assisted Treatment.
Peri, E; Biffi, E; Maghini, C; Servodio Iammarrone, F; Gagliardi, C; Germiniasi, C; Pedrocchi, A; Turconi, A C; Reni, G
2016-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Methodologies, Models and Algorithms for Patients Rehabilitation". The great potential of robots in extracting quantitative and meaningful data is not always exploited in clinical practice. The aim of the present work is to describe a simple parameter to assess the performance of subjects during upper limb robotic training, exploiting data automatically recorded by the robot, with no additional effort for patients and clinicians. Fourteen children affected by cerebral palsy (CP) underwent training with Armeo®Spring. Each session was evaluated with P, a simple parameter that depends on the overall performance recorded, and median and interquartile values were computed to perform a group analysis. Median (interquartile) values of P significantly increased from 0.27 (0.21) at T0 to 0.55 (0.27) at T1. This improvement was functionally validated by a significant increase in the Melbourne Assessment of Unilateral Upper Limb Function. The parameter described here was able to show variations in performance over time and enabled a quantitative evaluation of motion abilities in a way that is reliable with respect to a well-known clinical scale.
Cells and Stripes: A novel quantitative photo-manipulation technique
Mistrik, Martin; Vesela, Eva; Furst, Tomas; Hanzlikova, Hana; Frydrych, Ivo; Gursky, Jan; Majera, Dusana; Bartek, Jiri
2016-01-01
Laser micro-irradiation is a technology widely used in the DNA damage response, checkpoint signaling, chromatin remodeling and related research fields, to assess chromatin modifications and recruitment of diverse DNA damage sensors, mediators and repair proteins to sites of DNA lesions. While this approach has aided numerous discoveries related to cell biology, maintenance of genome integrity, aging and cancer, it has so far been limited by a tedious manual definition of laser-irradiated subcellular regions, with the ensuing restriction to only a small number of cells treated and analyzed in a single experiment. Here, we present an improved and versatile alternative to the micro-irradiation approach: Quantitative analysis of photo-manipulated samples using innovative settings of standard laser-scanning microscopes. Up to 200 cells are simultaneously exposed to a laser beam in a defined pattern of collinear rays. The induced striation pattern is then automatically evaluated by a simple algorithm, which provides a quantitative assessment of various laser-induced phenotypes in live or fixed cells. Overall, this new approach represents a more robust alternative to existing techniques, and provides a versatile tool for a wide range of applications in biomedicine. PMID:26777522
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montanini, R.; Freni, F.; Rossi, G. L.
This paper reports one of the first experimental results on the application of ultrasound-activated lock-in vibrothermography for quantitative assessment of buried flaws in complex cast parts. The use of amplitude-modulated ultrasonic heat generation allowed a selective response of defective areas within the part, as the defect itself is turned into a local thermal wave emitter. Quantitative evaluation of hidden damage was accomplished by independently estimating both the area and the depth extension of the buried flaws, while x-ray 3D computed tomography was used as a reference for assessing sizing accuracy. To retrieve the flaw's area, a simple yet effective histogram-based phase image segmentation algorithm with automatic pixel classification has been developed. A clear correlation was found between the thermal (phase) signature measured by the infrared camera on the target surface and the actual mean cross-section area of the flaw. Due to the very fast cycle time (<30 s/part), the method could potentially be applied for 100% quality control of casting components.
A benchmark for comparison of dental radiography analysis algorithms.
Wang, Ching-Wei; Huang, Cheng-Ta; Lee, Jia-Hong; Li, Chung-Hsing; Chang, Sheng-Wei; Siao, Ming-Jhih; Lai, Tat-Ming; Ibragimov, Bulat; Vrtovec, Tomaž; Ronneberger, Olaf; Fischer, Philipp; Cootes, Tim F; Lindner, Claudia
2016-07-01
Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made on developing computerized dental X-ray image analysis systems for clinical usages. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray image and two automatic methods for detecting bitewing radiography caries have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/). Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Ordinal Process Dissociation and the Measurement of Automatic and Controlled Processes
ERIC Educational Resources Information Center
Hirshman, Elliot
2004-01-01
The process-dissociation equations (L. Jacoby, 1991) have been applied to results from inclusion and exclusion tasks to derive quantitative estimates of the influence of controlled and automatic processes on memory. This research has provoked controversies (e.g., T. Curran & D. Hintzman, 1995) regarding the validity of specific assumptions…
García-Martín, Ana; Lázaro-Rivera, Carla; Fernández-Golfín, Covadonga; Salido-Tahoces, Luisa; Moya-Mur, Jose-Luis; Jiménez-Nacher, Jose-Julio; Casas-Rojo, Eduardo; Aquila, Iolanda; González-Gómez, Ariana; Hernández-Antolín, Rosana; Zamorano, José Luis
2016-07-01
A specialized three-dimensional transoesophageal echocardiography (3D-TOE) reconstruction tool has recently been introduced; the system automatically configures a geometric model of the aortic root from the images obtained by 3D-TOE and performs quantitative analysis of these structures. The aim of this study was to compare the measurements of the aortic annulus (AA) obtained by the new model to those obtained by manual 3D-TOE analysis and multidetector computed tomography (MDCT) in candidates for transcatheter aortic valve implantation (TAVI), and to assess the reproducibility of this new method. We included 31 patients who underwent TAVI. The AA diameters and area were evaluated by the manual 3D-TOE method and by the automatic software. We found an excellent correlation between the measurements obtained by both methods: intra-class correlation coefficient (ICC): 0.731 (0.508-0.862), r: 0.742 for AA diameter and ICC: 0.723 (0.662-0.923), r: 0.723 for the AA area, with no significant differences regardless of the method used. The interobserver reproducibility was superior for the automatic measurements compared with the manual ones. In a subgroup of 10 patients, we also found an excellent correlation between the automatic measurements and those obtained by MDCT: ICC: 0.941 (0.761-0.985), r: 0.901 for AA diameter and ICC: 0.853 (0.409-0.964), r: 0.744 for the AA area. The new automatic 3D-TOE software allows modelling and quantifying the aortic root from 3D-TOE data with high reproducibility. There is good correlation between the automated measurements and other validated 3D techniques. Our results support its use in clinical practice as an alternative to MDCT prior to TAVI. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
Management of natural resources through automatic cartographic inventory
NASA Technical Reports Server (NTRS)
Rey, P.; Gourinard, Y.; Cambou, F. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Significant results of the ARNICA program from August 1972 - January 1973 have been: (1) establishment of image to object correspondence codes for all types of soil use and forestry in northern Spain; (2) establishment of a transfer procedure between qualitative (remote identification and remote interpretation) and quantitative (numerization, storage, automatic statistical cartography) use of images; (3) organization of microdensitometric data processing and automatic cartography software; and (4) development of a system for measuring reflectance simultaneous with imagery.
Tong, Feifei; Lian, Yan; Zhou, Huang; Shi, Xiaohong; He, Fengjiao
2014-10-21
A new multichannel series piezoelectric quartz crystal (MSPQC) cell sensor for real-time monitoring of living cells in vitro is reported in this paper. The constructed sensor was used successfully to monitor adhesion, spreading, proliferation, and apoptosis of MG63 osteosarcoma cells and to investigate the effects of different concentrations of cobalt chloride on MG63 cells. Quantitative, real-time, dynamic cell analyses were conducted using the MSPQC cell sensor. Compared with methods such as fluorescence staining and morphological observation by microscopy, the MSPQC cell sensor is noninvasive, label-free, simple, cheap, and capable of online monitoring. It can automatically record the growth status of cells and quantitatively evaluate cell proliferation and the apoptotic response to drugs. It should be a valuable detection and analysis tool for acquiring cellular-level information and is anticipated to find application in cell biology research and cytotoxicity testing.
Rusz, J; Cmejla, R; Ruzickova, H; Ruzicka, E
2011-01-01
An assessment of vocal impairment is presented for separating healthy people from persons with early, untreated Parkinson's disease (PD). This study's main purposes were to (a) determine whether voice and speech disorders are present from the early stages of PD, before starting dopaminergic pharmacotherapy, (b) ascertain the specific characteristics of PD-related vocal impairment, (c) identify PD-related acoustic signatures for most of the traditional clinically used measurement methods with respect to their automatic assessment, and (d) design new automatic measurement methods of articulation. Varied speech data were collected from 46 Czech native speakers, 23 with PD. Subsequently, 19 representative measurements were pre-selected, and Wald sequential analysis was applied to assess the efficiency of each measure and the extent of vocal impairment of each subject. It was found that measurement of fundamental frequency variations applied to two selected tasks was the best method for separating healthy from PD subjects. On the basis of objective acoustic measures, statistical decision-making theory, and validation by practicing speech therapists, it has been demonstrated that 78% of early untreated PD subjects indicate some form of vocal impairment. The speech defects thus uncovered differ individually across various characteristics including phonation, articulation, and prosody.
Automatic quantitative analysis of in-stent restenosis using FD-OCT in vivo intra-arterial imaging.
Mandelias, Kostas; Tsantis, Stavros; Spiliopoulos, Stavros; Katsakiori, Paraskevi F; Karnabatidis, Dimitris; Nikiforidis, George C; Kagadis, George C
2013-06-01
A new segmentation technique is implemented for automatic lumen area extraction and stent strut detection in intravascular optical coherence tomography (OCT) images for the purpose of quantitative analysis of in-stent restenosis (ISR). In addition, a user-friendly graphical user interface (GUI) is developed around the employed algorithm for clinical use. Four clinical datasets of frequency-domain OCT scans of the human femoral artery were analyzed. First, a segmentation method based on fuzzy C-means (FCM) clustering and the wavelet transform (WT) was applied for inner luminal contour extraction. Subsequently, stent strut positions were detected by incorporating metrics derived from the local maxima of the wavelet transform into the FCM membership function. The inner lumen contour and the stent strut positions were extracted with high precision. Compared to manual segmentation by an expert physician, the automatic lumen contour delineation had an average overlap value of 0.917 ± 0.065 for all OCT images included in the study. The strut detection procedure achieved an overall accuracy of 93.80% and successfully identified 9.57 ± 0.5 struts per OCT image. Processing time was confined to approximately 2.5 s per OCT frame. A new, fast and robust automatic segmentation technique combining FCM and WT for lumen border extraction and strut detection in intravascular OCT images was designed and implemented. The proposed algorithm, integrated in a GUI, represents a step toward the adoption of automated quantitative analysis of ISR in clinical practice.
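A minimal numpy implementation of plain fuzzy C-means, the clustering core of the pipeline above; the published method additionally folds wavelet-derived metrics into the membership function, which is omitted here.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy C-means on feature vectors X (n_samples, n_feats).
    Returns cluster centers (c, n_feats) and soft memberships (n, c)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))     # random memberships
    for _ in range(n_iter):
        Um = U ** m
        # Weighted cluster centers
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distances of every sample to every center
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # Standard FCM membership update: u ~ d^(-2/(m-1)), row-normalized
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```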
An Instrumented Glove to Assess Manual Dexterity in Simulation-Based Neurosurgical Education
Lemos, Juan Diego; Hernandez, Alher Mauricio; Soto-Romero, Georges
2017-01-01
The traditional neurosurgical apprenticeship scheme includes the assessment of a trainee's manual skills by experienced surgeons. However, the introduction of surgical simulation technology presents a new paradigm in which residents can refine surgical techniques on a simulator before putting them into practice on real patients. Unfortunately, in this new scheme, an experienced surgeon will not always be available to evaluate the trainee's performance. For this reason, it is necessary to develop automatic mechanisms to estimate metrics for assessing manual dexterity in a quantitative way. Authors have proposed hardware-software approaches to evaluate manual dexterity on surgical simulators. This paper presents IGlove, a wearable device that uses inertial sensors embedded in an elastic glove to capture hand movements. Metrics for assessing manual dexterity are estimated from the sensor signals using data processing and information analysis algorithms. It has been designed to be used with a neurosurgical simulator called Daubara NS Trainer, but it can easily be adapted to other benchtop- and manikin-based medical simulators. The system was tested with a sample of 14 volunteers who performed a test designed to simultaneously evaluate their fine motor skills and the IGlove's functionalities. The metrics obtained by each participant are presented as results in this work; it is also shown how these metrics are used to automatically evaluate each volunteer's level of manual dexterity. PMID:28468268
Self-calibrating models for dynamic monitoring and diagnosis
NASA Technical Reports Server (NTRS)
Kuipers, Benjamin
1996-01-01
A method for automatically building qualitative and semi-quantitative models of dynamic systems, and using them for monitoring and fault diagnosis, is developed and demonstrated. The qualitative approach and semi-quantitative method are applied to monitoring observation streams, and to design of non-linear control systems.
Rapid and Reliable Damage Proxy Map from InSAR Coherence
NASA Technical Reports Server (NTRS)
Yun, Sang-Ho; Fielding, Eric; Simons, Mark; Agram, Piyush; Rosen, Paul; Owen, Susan; Webb, Frank
2012-01-01
Future radar satellites will visit SoCal within a day after a disaster event. Data acquisition latency in 2015-2020 is 8 to approximately 15 hours. Data transfer latency, which often involves human/agency intervention, far exceeds the data acquisition latency, so interagency cooperation is needed to establish an automatic pipeline for data transfer. The algorithm is tested with ALOS PALSAR data of Pasadena, California. Quantitative quality assessment is being pursued, including meetings with Pasadena City Hall computer engineers for a complete list of demolition/construction projects, in order to (1) estimate the probability of detection and probability of false alarm and (2) estimate the optimal threshold value.
NASA Astrophysics Data System (ADS)
Soilán, M.; Riveiro, B.; Sánchez-Rodríguez, A.; González-deSantos, L. M.
2018-05-01
During the last few years, there has been considerable methodological development regarding the automatic processing of 3D point cloud data acquired by both terrestrial and aerial mobile mapping systems, motivated by improvements in surveying technologies and hardware performance. This paper presents a methodology that first extracts geometric and semantic information on the road markings within the surveyed area from Mobile Laser Scanning (MLS) data, and then employs it to isolate street areas where pedestrian crossings are found and, therefore, pedestrians are more likely to cross the road. Different safety-related features can then be extracted to characterize the adequacy of the pedestrian crossing with regard to safety, and the results can be displayed in a Geographical Information System (GIS) layer. These features are defined in four processing modules: accessibility analysis, traffic lights classification, traffic signs classification, and visibility analysis. The proposed methodology was validated in two different cities in the northwest of Spain, yielding both quantitative and qualitative results for pedestrian crossing classification and for each processing module of the safety assessment of pedestrian crossing environments.
Li, Yajin; Chen, Litong; Yuan, Fusong; Li, Yugui; Zhou, Yongsheng; Sun, Yuchun
2017-01-01
This study quantitatively evaluates the efficacy of using a non-contact automatic articulating paper dispenser to reduce microbial contamination of articulating paper. Articulating paper handling in four-handed mode, in non-four-handed mode, and via an automatic articulating paper dispenser was evaluated. An adenosine triphosphate bioluminescence assay was used to quantitatively measure the relative light unit (RLU) values of the remaining unused articulating papers in the same package to detect contamination at 4 time points, and triplicate examinations were performed for all three methods. The RLUs were recorded, compared, and evaluated. For the four-handed mode (n = 36), the RLUs at the four time points were 2.44, 32.89, 37.89, and 27.22, with a satisfactory rate of 94%. The RLUs for the non-four-handed mode (n = 36) were 2.22, 286.44, 299.44, and 493.56, with a satisfactory rate of 36%. The RLUs for the automatic dispenser (n = 36) were all 0, with a satisfactory rate of 100%. The satisfactory rates were significantly different among the three methods, while no significant differences were observed in the satisfactory rates across the four time points. Contact by gloved hands can cause severe biological contamination of articulating paper; however, by using the standard four-handed mode or a non-contact automatic articulating paper dispenser, contamination can be controlled. PMID:28466846
Computer-aided analysis with Image J for quantitatively assessing psoriatic lesion area.
Sun, Z; Wang, Y; Ji, S; Wang, K; Zhao, Y
2015-11-01
Body surface area is important in determining the severity of psoriasis. However, an objective, reliable, and practical method for this purpose is still needed. We performed computer image analysis (CIA) of the psoriatic area using the Image J freeware to determine whether this method could be used for objective evaluation of psoriatic area. Fifteen psoriasis patients were randomized to be treated with adalimumab or placebo in a clinical trial. At each visit, the psoriasis area of each body site was estimated by two physicians (E-method), and standard photographs were taken. The psoriasis area in the pictures was assessed with CIA using semi-automatic threshold selection (T-method) or manual selection (M-method, the gold standard). The results of the three methods were analyzed, with reliability and affecting factors evaluated. Both the T- and E-methods correlated strongly with the M-method, with the T-method showing a slightly stronger correlation. Both the T- and E-methods had good consistency between evaluators. All three methods were able to detect the change in psoriatic area after treatment, although the E-method tended to overestimate it. CIA with the Image J freeware is reliable and practicable for quantitatively assessing the lesional area of psoriasis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
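A Python analogue of the semi-automatic threshold step (T-method) might look like the sketch below; the `redness` map and Otsu thresholding are assumptions standing in for Image J's interactive threshold selection.

```python
from skimage.filters import threshold_otsu

def lesion_area_fraction(redness):
    """Rough analogue of the T-method: threshold a per-pixel 'redness'
    map (hypothetical; e.g. derived from a colour transform of the
    photograph) and report the lesion area as a fraction of the
    imaged body-site area."""
    t = threshold_otsu(redness)
    lesion = redness > t
    return lesion.mean()  # fraction of pixels classified as lesional
```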
NASA Astrophysics Data System (ADS)
Xia, Ying; Fripp, Jurgen; Chandra, Shekhar S.; Walker, Duncan; Crozier, Stuart; Engstrom, Craig
2015-10-01
To develop an automated approach for 3D quantitative assessment and measurement of alpha angles from the femoral head-neck (FHN) junction using bone models derived from magnetic resonance (MR) images of the hip joint. Bilateral MR images of the hip joints were acquired from 30 male volunteers (healthy active individuals and high-performance athletes, aged 18-49 years) using a water-excited 3D dual echo steady state (DESS) sequence. In a subset of these subjects (18 water-polo players), additional True Fast Imaging with Steady-state Precession (TrueFISP) images were acquired from the right hip joint. For both MR image sets, an active shape model based algorithm was used to generate automated 3D bone reconstructions of the proximal femur. Subsequently, a local coordinate system of the femur was constructed to compute a 2D shape map to project femoral head sphericity for calculation of alpha angles around the FHN junction. To evaluate automated alpha angle measures, manual analyses were performed on anterosuperior and anterior radial MR slices from the FHN junction that were automatically reformatted using the constructed coordinate system. High intra- and inter-rater reliability (intra-class correlation coefficients > 0.95) was found for manual alpha angle measurements from the auto-extracted anterosuperior and anterior radial slices. Strong correlations were observed between manual and automatic measures of alpha angles for anterosuperior (r = 0.84) and anterior (r = 0.92) FHN positions. For matched DESS and TrueFISP images, there were no significant differences between automated alpha angle measures obtained from the upper anterior quadrant of the FHN junction (two-way repeated measures ANOVA, F < 0.01, p = 0.98). Our automatic 3D method analysed MR images of the hip joints to generate alpha angle measures around the FHN junction circumference with very good reliability and reproducibility. This work has the potential to improve analyses of cam-type lesions of the FHN junction for large-scale morphometric and clinical MR investigations of the human hip region.
NASA Astrophysics Data System (ADS)
Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong
2018-05-01
In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without manual intervention. To illustrate the feasibility and effectiveness of the method, a comparison with the genetic algorithm (GA) and the successive projections algorithm (SPA) for the detection of different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models utilizing the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, showing prediction performance comparable to GA and SPA.
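The RMSEP figures above come from PLS calibration models; a generic sketch with scikit-learn's PLSRegression is shown below (the component count and variable names are illustrative, and the mIPW-PLS reweighting itself is omitted).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def rmsep(spectra_train, y_train, spectra_test, y_test, n_components=8):
    """Root mean square error of prediction for a PLS calibration on
    LIBS spectra: fit on training spectra/concentrations, evaluate on
    a held-out test set."""
    pls = PLSRegression(n_components=n_components)
    pls.fit(spectra_train, y_train)
    y_pred = pls.predict(spectra_test).ravel()
    return np.sqrt(np.mean((np.asarray(y_test) - y_pred) ** 2))
```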
Investigation on microfluidic particles manipulation by holographic 3D tracking strategies
NASA Astrophysics Data System (ADS)
Cacace, Teresa; Paturzo, Melania; Memmolo, Pasquale; Vassalli, Massimo; Fraldi, Massimiliano; Mensitieri, Giuseppe; Ferraro, Pietro
2017-06-01
We demonstrate a 3D holographic tracking method to investigate particle motion in a microfluidic channel, both unperturbed and while inducing migration through microfluidic manipulation. Digital holography (DH) in microscopy is a full-field, label-free imaging technique able to provide quantitative phase contrast. The employed 3D tracking method proceeds in steps. First, displacements along the optical axis are assessed by numerical refocusing criteria: an automatic refocusing method recovers the particles' axial positions using a contrast-based refocusing criterion. Then, the transverse position of the in-focus object is evaluated through quantitative phase map segmentation and a centroid-based 2D tracking strategy. DH is thus suggested as a powerful approach for controlling the manipulation of particles and biological samples, as well as a possible aid to the precise design and implementation of advanced lab-on-chip microfluidic devices.
Mirsky, Simcha K; Barnea, Itay; Levi, Mattan; Greenspan, Hayit; Shaked, Natan T
2017-09-01
Currently, the delicate process of selecting sperm cells for in vitro fertilization (IVF) is still based on the subjective, qualitative analysis of experienced clinicians using non-quantitative optical microscopy techniques. In this work, a method was developed for the automated analysis of sperm cells based on quantitative phase maps acquired by interferometric phase microscopy (IPM). Over 1,400 human sperm cells from 8 donors were imaged using IPM, and an algorithm was designed to digitally isolate sperm cell heads from the quantitative phase maps, taking into consideration both the cell 3D morphology and contents, and to extract features describing sperm head morphology. A subset of these features was used to train a support vector machine (SVM) classifier to automatically classify sperm of good and bad morphology. The SVM achieves an area under the receiver operating characteristic curve of 88.59% and an area under the precision-recall curve of 88.67%, as well as precisions of 90% or higher. We believe that our automatic analysis can become the basis for objective and automatic sperm cell selection in IVF. © 2017 International Society for Advancement of Cytometry.
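The classification stage maps naturally onto a standard SVM workflow. The sketch below uses synthetic stand-in features (the paper's morphological descriptors are not reproduced here) and reports the same two evaluation metrics.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score, average_precision_score

    # Hypothetical feature matrix: one row per sperm head, columns are
    # morphological descriptors extracted from quantitative phase maps.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1400, 12))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1400)) > 0  # toy labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]
    print("ROC AUC:", roc_auc_score(y_te, scores))
    print("PR AUC :", average_precision_score(y_te, scores))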
A Quantitative Analysis of Cognitive Strategy Usage in the Marking of Two GCSE Examinations
ERIC Educational Resources Information Center
Suto, W. M. Irenka; Greatorex, Jackie
2008-01-01
Diverse strategies for marking GCSE examinations have been identified, ranging from simple automatic judgements to complex cognitive operations requiring considerable expertise. However, little is known about patterns of strategy usage or how such information could be utilised by examiners. We conducted a quantitative analysis of previous verbal…
Quantitative Analysis of TDLUs using Adaptive Morphological Shape Techniques
Rosebrock, Adrian; Caban, Jesus J.; Figueroa, Jonine; Gierach, Gretchen; Linville, Laura; Hewitt, Stephen; Sherman, Mark
2014-01-01
Within the complex branching system of the breast, terminal duct lobular units (TDLUs) are the anatomical location where most cancer originates. With aging, TDLUs undergo physiological involution, reflected in a loss of structural components (acini) and a reduction in total number. Data suggest that women undergoing benign breast biopsies that do not show age-appropriate involution are at increased risk of developing breast cancer. To date, TDLU assessments have generally been made by qualitative visual assessment rather than by objective quantitative analysis. This paper introduces a technique to automatically estimate a set of quantitative measurements and use those variables to more objectively describe and classify TDLUs. To validate the accuracy of our system, we computed the morphological properties of 51 TDLUs in breast tissues donated for research by volunteers in the Susan G. Komen Tissue Bank and compared the results to those of a pathologist, demonstrating 70% agreement. Second, to show that our method is applicable to a wider range of datasets, we analyzed 52 TDLUs from biopsies performed for clinical indications in the National Cancer Institute's Breast Radiology Evaluation and Study of Tissues (BREAST) Stamp Project and obtained 82% correlation with visual assessment. Lastly, we demonstrate the ability to uncover novel measures when researching the structural properties of the acini by applying machine learning and clustering techniques. Through our study we found that while the number of acini per TDLU increases exponentially with the TDLU diameter, the average elongation and roundness remain constant. PMID:25722829
Automatic Echographic Detection of Halloysite Clay Nanotubes in a Low Concentration Range.
Conversano, Francesco; Pisani, Paola; Casciaro, Ernesto; Di Paola, Marco; Leporatti, Stefano; Franchini, Roberto; Quarta, Alessandra; Gigli, Giuseppe; Casciaro, Sergio
2016-04-11
The aim of this work was to investigate the automatic echographic detection of an experimental drug delivery agent, halloysite clay nanotubes (HNTs), by employing an innovative method based on advanced spectral analysis of the corresponding "raw" radiofrequency backscatter signals. Different HNT concentrations in a low range (5.5-66 × 10^10 part/mL, equivalent to 0.25-3.00 mg/mL) were dispersed in custom-designed tissue-mimicking phantoms and imaged through a clinically available echographic device at a conventional ultrasound diagnostic frequency (10 MHz). The most effective response (sensitivity = 60%, specificity = 95%) was found at a concentration of 33 × 10^10 part/mL (1.5 mg/mL), representing a best compromise between the need for enough particles to introduce detectable spectral modifications in the backscattered signal and the necessity to avoid the loss of spectral peculiarity associated with higher HNT concentrations. Based on theoretical considerations and quantitative comparisons with literature-available results, this concentration could also represent an optimal level for the automatic echographic detection of different solid nanoparticles when employing a similar ultrasound frequency. Future dedicated studies will assess the actual clinical usefulness of the proposed approach and the potential of HNTs for effective theranostic applications.
Forsberg, Daniel; Lindblom, Maria; Quick, Petter; Gauffin, Håkan
2016-09-01
To present a semi-automatic method with minimal user interaction for quantitative analysis of the patellofemoral motion pattern. 4D CT data capturing the patellofemoral motion pattern of a continuous flexion and extension were collected for five patients prone to patellar luxation, both pre- and post-surgically. For the proposed method, an observer places landmarks in a single 3D volume, which are then automatically propagated to the other volumes in the time sequence. From the landmarks in each volume, the measures patellar displacement, patellar tilt and angle between femur and tibia were computed. Evaluation of the observer variability showed the proposed semi-automatic method to be preferable to a fully manual counterpart, with an observer variability of approximately 1.5° for the angle between femur and tibia, 1.5 mm for the patellar displacement, and 4.0°-5.0° for the patellar tilt. The proposed method showed that surgery reduced the patellar displacement and tilt at maximum extension by approximately 10-15 mm and 15°-20° for three patients, with less evident differences for the other two patients. A semi-automatic method suitable for quantification of the patellofemoral motion pattern as captured by 4D CT data has been presented. Its observer variability is on par with that of other methods but with the distinct advantage of supporting continuous motions during the image acquisition.
Automatic computation for optimum height planning of apartment buildings to improve solar access
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seong, Yoon-Bok; Kim, Yong-Yee; Seok, Ho-Tae
2011-01-15
The objective of this study is to suggest a mathematical model and an optimal algorithm for determining the height of apartment buildings to satisfy the solar rights of survey buildings or survey housing units. The objective is also to develop an automatic computation model for the optimum height of apartment buildings and then to clarify its performance and expected effects. To accomplish the objective of this study, the following procedures were followed: (1) The necessity of height planning of obstruction buildings to satisfy the solar rights of survey buildings or survey housing units is demonstrated by analyzing, through a literature review, the recent trend of disputes related to solar rights and by examining the social requirements in terms of solar rights. In addition, the necessity of an automatic computation system for height planning of apartment buildings is demonstrated and a suitable analysis method for this system is chosen by investigating the characteristics of analysis methods for solar rights assessment. (2) A case study on the process of height planning of apartment buildings is briefly described and the problems occurring in this process are examined carefully. (3) To develop an automatic computation model for height planning of apartment buildings, the geometrical elements forming apartment buildings are defined by analyzing the geometrical characteristics of apartment buildings. In addition, design factors and regulations required in height planning of apartment buildings are investigated. Based on this knowledge, the methodology and mathematical algorithm to adjust the height of apartment buildings by automatic computation are suggested, probable problems and ways to resolve them are discussed, and finally the methodology and algorithm for the optimization are suggested. (4) Based on the suggested methodology and mathematical algorithm, the automatic computation model for optimum height of apartment buildings is developed and verified through application to some cases. The effects of the suggested model are then demonstrated quantitatively and qualitatively.
Influenza antiviral therapeutics.
Mayburd, Anatoly L
2010-01-01
In this review we conducted a landscaping study of therapeutic anti-influenza agents, limiting the scope by excluding vaccines. The resulting 2800 patent publications were classified into 23 distinct technological sectors. The mechanism of action, promise and drawbacks of the corresponding technological sectors were explored on a comparative basis. A set of quantitative parameters was defined based on the landscaping procedure that appears to correlate with the practical success of a given class of therapeutics. Thus, the sectors not considered promising from the mechanistic side also displayed low values of the classifying parameters. The parameters were combined into a probabilistic Marketing Prediction Score, assessing the likelihood of a given sector producing a marketable product. The proposed analytical methodology may be useful for automatic search and assessment of technologies for the purposes of acquisition, investment and competitive bidding. While not a substitute for expert evaluation, it provides an initial assessment suitable for implementation with large-scale automated landscaping.
NASA Astrophysics Data System (ADS)
Yong, Yan Ling; Tan, Li Kuo; McLaughlin, Robert A.; Chee, Kok Han; Liew, Yih Miin
2017-12-01
Intravascular optical coherence tomography (OCT) is an optical imaging modality commonly used in the assessment of coronary artery disease during percutaneous coronary intervention. Manual segmentation to assess luminal stenosis from OCT pullback scans is challenging and time consuming. We propose a linear-regression convolutional neural network to automatically perform vessel lumen segmentation, parameterized in terms of radial distances from the catheter centroid in polar space. Benchmarked against gold-standard manual segmentation, our proposed algorithm achieves an average locational accuracy of the vessel wall of 22 microns, and 0.985 and 0.970 in Dice coefficient and Jaccard similarity index, respectively. The average absolute error of luminal area estimation is 1.38%. The processing time is 40.6 ms per image, suggesting the potential to be incorporated into a clinical workflow and to provide quantitative assessment of vessel lumen in an intraoperative time frame.
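To make the polar-space parameterization concrete, here is a small sketch, under the assumption of one predicted wall radius per angular position (A-line), of converting radial distances into a lumen mask and scoring it against a manual tracing with the same two overlap metrics.

    import numpy as np

    def radii_to_polar_mask(radii, n_r=256):
        # One radial distance per A-line (angle); lumen = pixels closer to the
        # catheter centroid than the predicted wall radius, in polar space.
        r_axis = np.arange(n_r)
        return r_axis[None, :] < np.asarray(radii)[:, None]

    def dice_jaccard(a, b):
        inter = np.logical_and(a, b).sum()
        dice = 2 * inter / (a.sum() + b.sum())
        jacc = inter / np.logical_or(a, b).sum()
        return dice, jacc

    # Toy check with a hypothetical prediction vs. manual tracing (in pixels):
    pred = 100 + 5 * np.sin(np.linspace(0, 2 * np.pi, 360))
    gt = pred + np.random.default_rng(1).normal(scale=2, size=360)
    d, j = dice_jaccard(radii_to_polar_mask(pred), radii_to_polar_mask(gt))
    print(f"Dice={d:.3f}, Jaccard={j:.3f}")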
[Medical assessments and measurements in ELSA-Brasil].
Mill, Jose Geraldo; Pinto, Karina; Griep, Rosane Härter; Goulart, Alessandra; Foppa, Murilo; Lotufo, Paulo A; Maestri, Marcelo K; Ribeiro, Antonio Luiz; Andreão, Rodrigo Varejão; Dantas, Eduardo Miranda; Oliveira, Ilka; Fuchs, Sandra C; Cunha, Roberto de Sá; Bensenor, Isabela M
2013-06-01
The article describes assessments and measurements performed in the Brazilian Longitudinal Study for Adult Health (ELSA-Brasil). Some assessments, including anthropometric assessment, casual blood pressure measurement, and the ankle-brachial index, have an established clinical application, while others, including pulse wave velocity, heart rate variability, and carotid intima-media thickness, have no established application and no reference values for the healthy Brazilian population, but may be important predictors of cardiovascular outcomes. Blood pressure measurement following a postural change maneuver was included in ELSA-Brasil because it has not been much tested in epidemiological studies. Innovative approaches were developed for assessing the ankle-brachial index using an automatic device instead of the mercury column to measure blood pressure, and for assessing the anterior-posterior diameter of the right lobe of the liver by ultrasound for quantitative assessment of nonalcoholic fatty liver disease. ELSA-Brasil enrolled subjects at younger ages (35 years and over) than other cohorts studying subclinical atherosclerosis. The inclusion of younger individuals and a variety of assessments make ELSA-Brasil a relevant epidemiological study nationwide and worldwide.
Cost and Efficacy Assessment of an Alternative Medication Compliance Urine Drug Testing Strategy.
Doyle, Kelly; Strathmann, Frederick G
2017-02-01
This study investigates the frequency at which quantitative results provide additional clinical benefit compared to qualitative results alone. A comparison between alternative urine drug screens and conventional screens was also included, assessing cost-to-payer differences and the accuracy of detecting prescription compliance and polypharmacy/substance abuse. In a reference laboratory evaluation of urine specimens from across the United States, 213 urine specimens with provided prescription medication information (302 prescriptions) were analyzed by two testing algorithms: 1) a conventional immunoassay screen with subsequent reflexive testing of positive results by quantitative mass spectrometry; and 2) a combined immunoassay/qualitative mass spectrometry screen that substantially reduced the need for subsequent testing. The qualitative screen was superior to immunoassay with reflex to mass spectrometry in confirming compliance per prescription (226/302 vs 205/302) and in identifying non-prescription abuse (97 vs 71). Pharmaceutical impurities and inconsistent drug metabolite patterns were detected in only 3.8% of specimens, suggesting that quantitative results have limited benefit. The cost difference between the conventional testing algorithm and the alternative screen was projected to be 55%, and a 2-year evaluation of test utilization, measured as test order volume, shows an exponential growth trend for alternative screen orders over conventional immunoassay screens that require subsequent confirmation testing. Alternative, qualitative urine drug screens provide a less expensive, faster, and more comprehensive evaluation of patient medication compliance and drug abuse. The vast majority of results were interpretable with qualitative results alone, indicating a reduced need to automatically reflex to quantitation, or to provide quantitation, for the majority of patients. This strategy highlights a successful approach for both the laboratory and the physician to align clinical needs while being mindful of costs.
Mahmoudzadeh, Amir Pasha; Kashou, Nasser H.
2013-01-01
Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed Sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal to noise, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histogram were used for qualitative assessment of the method. PMID:24000283
Edge Sharpness Assessment by Parametric Modeling: Application to Magnetic Resonance Imaging.
Ahmad, R; Ding, Y; Simonetti, O P
2015-05-01
In biomedical imaging, edge sharpness is an important yet often overlooked image quality metric. In this work, a semi-automatic method to quantify edge sharpness in the presence of significant noise is presented, with application to magnetic resonance imaging (MRI). The method is based on parametric modeling of image edges. First, an edge map is automatically generated and one or more edges-of-interest (EOI) are manually selected using a graphical user interface. Multiple exclusion criteria are then enforced to eliminate edge pixels that are potentially not suitable for sharpness assessment. Second, at each pixel of the EOI, an image intensity profile is read along a small line segment that runs locally normal to the EOI. Third, the profiles corresponding to all EOI pixels are individually fitted with a sigmoid function characterized by four parameters, including one that represents edge sharpness. Last, the distribution of the sharpness parameter is used to quantify edge sharpness. For validation, the method is applied to simulated data as well as MRI data from both phantom imaging and cine imaging experiments. This method allows for fast, quantitative evaluation of edge sharpness even in images with poor signal-to-noise ratio. Although the utility of this method is demonstrated for MRI, it can be adapted for other medical imaging applications.
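The per-profile fitting step can be reproduced with scipy. The four-parameter logistic below is a plausible form of the sigmoid (baseline, plateau, edge centre, sharpness), though the authors' exact parameterization may differ.

    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(x, low, high, center, sharpness):
        # Four-parameter sigmoid: baseline, plateau, edge location, sharpness.
        return low + (high - low) / (1.0 + np.exp(-sharpness * (x - center)))

    def edge_sharpness(profile, dx=1.0):
        # Fit the model to one intensity profile read normal to the edge and
        # return the sharpness parameter (larger = sharper edge).
        x = np.arange(len(profile)) * dx
        p0 = [profile.min(), profile.max(), x.mean(), 1.0]
        popt, _ = curve_fit(sigmoid, x, profile, p0=p0, maxfev=5000)
        return popt[3]

    # Toy profile: a blurred, noisy edge.
    x = np.arange(21, dtype=float)
    prof = sigmoid(x, 10, 100, 10, 0.8) \
        + np.random.default_rng(0).normal(scale=3, size=21)
    print("estimated sharpness:", edge_sharpness(prof))

Pooling the fitted sharpness values over all EOI pixels then gives the distribution used for the final quantification.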
Vitikainen, Anne-Mari; Mäkelä, Elina; Lioumis, Pantelis; Jousmäki, Veikko; Mäkelä, Jyrki P
2015-09-30
The use of navigated repetitive transcranial magnetic stimulation (rTMS) in mapping speech-related brain areas has recently been shown to be useful in the preoperative workflow of epilepsy and tumor patients. However, substantial inter- and intraobserver variability and non-optimal replicability of the rTMS results have been reported, and a need for additional development of the methodology is recognized. In TMS motor cortex mappings the evoked responses can be quantitatively monitored by electromyographic recordings; however, no such easily available setup exists for speech mappings. We present an accelerometer-based setup for detection of vocalization-related larynx vibrations, combined with an automatic routine for voice onset detection, for rTMS speech mapping employing naming tasks. The results produced by the automatic routine were compared with manually reviewed video recordings. The new method was applied in routine navigated rTMS speech mapping for 12 consecutive patients during preoperative workup for epilepsy or tumor surgery. The automatic routine correctly detected 96% of the voice onsets, resulting in 96% sensitivity and 71% specificity. The majority (63%) of the misdetections were related to visible throat movements, extra voices before the response, or delayed naming of the previous stimuli. The no-response errors were correctly detected in 88% of events. The proposed setup for automatic detection of voice onsets provides quantitative additional data for analysis of the rTMS-induced speech response modifications. The objectively defined speech response latencies increase the repeatability, reliability and stratification of the rTMS results. Copyright © 2015 Elsevier B.V. All rights reserved.
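A minimal form of such an automatic voice-onset routine might look like this; the band limits, envelope estimator and k-times-baseline threshold are illustrative assumptions, not the published parameters.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def detect_voice_onset(acc, fs, band=(50.0, 400.0), k=5.0, baseline_s=0.2):
        # Band-pass the accelerometer trace around vocalization-related larynx
        # vibration frequencies, take the analytic-signal envelope, and flag
        # the first sample exceeding k times the pre-stimulus baseline level.
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        env = np.abs(hilbert(filtfilt(b, a, acc)))
        n0 = int(baseline_s * fs)
        thresh = k * np.median(env[:n0])
        above = np.nonzero(env > thresh)[0]
        return above[0] / fs if above.size else None  # onset latency in s

    fs = 2000.0
    t = np.arange(0, 1.5, 1 / fs)
    sig = 0.02 * np.random.default_rng(0).normal(size=t.size)
    sig[t > 0.9] += 0.5 * np.sin(2 * np.pi * 150 * t[t > 0.9])  # vibration burst
    print("voice onset at ~%.3f s" % detect_voice_onset(sig, fs))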
Wang, Lei; Pedersen, Peder C; Strong, Diane M; Tulu, Bengisu; Agu, Emmanuel; Ignotz, Ron; He, Qian
2015-08-07
For individuals with type 2 diabetes, foot ulcers represent a significant health issue. The aim of this study is to design and evaluate a wound assessment system to help wound clinics assess patients with foot ulcers in a way that complements their current visual examination and manual measurements of the ulcers. The physical components of the system consist of an image capture box, a smartphone for wound image capture and a laptop for analyzing the wound image. The wound image assessment algorithms calculate the overall wound area, color-segmented wound areas, and a healing score, to provide a quantitative assessment of the wound healing status both for a single wound image and for comparisons of subsequent images to an initial wound image. The system was evaluated by assessing foot ulcers for 12 patients in the Wound Clinic at University of Massachusetts Medical School. As a performance measure, the Matthews correlation coefficient (MCC) value for the wound area determination algorithm tested on 32 foot ulcer images was 0.68. The clinical validity of our healing score algorithm relative to the experienced clinicians was measured by Krippendorff's alpha coefficient (KAC) and ranged from 0.42 to 0.81. Our system provides a promising real-time method for wound assessment based on image analysis. Clinical comparisons indicate that the optimized mean-shift-based algorithm is well suited for wound area determination. Clinical evaluation of our healing score algorithm shows its potential to provide clinicians with a quantitative method for evaluating wound healing status. © 2015 Diabetes Technology Society.
Characterization of Intraventricular and Intracerebral Hematomas in Non-Contrast CT
Nowinski, Wieslaw L; Gomolka, Ryszard S; Qian, Guoyu; Gupta, Varsha; Ullman, Natalie L; Hanley, Daniel F
2014-01-01
Characterization of hematomas is essential in scan reading, manual delineation, and designing automatic segmentation algorithms. Our purpose is to characterize the distribution of intraventricular (IVH) and intracerebral hematomas (ICH) in NCCT scans, study their relationship to gray matter (GM), and introduce a new tool for quantitative hematoma delineation. We used 289 serial retrospective scans of 51 patients. Hematomas were manually delineated in a two-stage process. Hematoma contours generated in the first stage were quantified and enhanced in the second stage. Delineation was based on new quantitative rules and hematoma profiling, and assisted by a dedicated tool superimposing quantitative information on scans with 3D hematoma display. The tool provides: density maps (40-85 HU), contrast maps (8/15 HU), mean horizontal/vertical contrasts for hematoma contours, and hematoma contours below a specified mean contrast (8 HU). White matter (WM) and GM were segmented automatically. IVH/ICH on serial NCCT is characterized by a 59.0 HU mean, 60.0 HU median, 11.6 HU standard deviation, 23.9 HU mean contrast, -0.99 HU/day slope, and -0.24 skewness (changing over time from negative to positive). Its 0.1st-99.9th percentile range corresponds to a 25-88 HU range. WM and GM are highly correlated (R^2 = 0.88; p < 10^-10) whereas the GM-GS correlation is weak (R^2 = 0.14; p < 10^-10). The intersection point of the mean GM-hematoma density distributions is at 55.6±5.8 HU, with corresponding GM/hematoma percentiles of 88th/40th. Objective characterization of IVH/ICH and stating the rules quantitatively will aid raters in delineating hematomas more robustly and facilitate designing algorithms for automatic hematoma segmentation. Our two-stage process is general and potentially applicable to delineating other pathologies on various modalities more robustly and quantitatively. PMID:24976197
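The density and contrast maps listed above can be approximated with a few lines of NumPy. The HU window comes from the abstract, while the contour sampling scheme below (HU just inside minus just outside, along provided unit normals) is an illustrative reading of the "mean contrast" rule, not the authors' exact definition.

    import numpy as np

    HU_WINDOW = (40, 85)   # hematoma-typical density range reported above

    def density_map(ct, window=HU_WINDOW):
        # Binary map of voxels inside the hematoma HU window.
        return (ct >= window[0]) & (ct <= window[1])

    def contour_contrast(ct, contour_rc, normals, step=2):
        # Mean contrast across a delineated contour: HU just inside minus HU
        # just outside, sampled along the (unit) outward normal of each point.
        vals = []
        for (r, c), (nr, nc) in zip(contour_rc, normals):
            ri, ci = int(round(r - step * nr)), int(round(c - step * nc))
            ro, co = int(round(r + step * nr)), int(round(c + step * nc))
            if 0 <= min(ri, ro) and max(ri, ro) < ct.shape[0] \
               and 0 <= min(ci, co) and max(ci, co) < ct.shape[1]:
                vals.append(float(ct[ri, ci]) - float(ct[ro, co]))
        return float(np.mean(vals)) if vals else 0.0

Contours whose mean contrast falls below the 8 HU threshold would then be flagged for review, as the tool described above does.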
Meyer, C R; Boes, J L; Kim, B; Bland, P H; Zasadny, K R; Kison, P V; Koral, K; Frey, K A; Wahl, R L
1997-04-01
This paper applies and evaluates an automatic mutual information-based registration algorithm across a broad spectrum of multimodal volume data sets. The algorithm requires little or no pre-processing and minimal user input, and easily implements either affine (i.e. linear) or thin-plate spline (TPS) warped registrations. We have evaluated the algorithm in phantom studies as well as in selected cases where few other algorithms could perform as well, if at all, to demonstrate the value of this new method. Pairs of multimodal gray-scale volume data sets were registered by iteratively changing registration parameters to maximize mutual information. Quantitative registration errors were assessed in registrations of a thorax phantom using PET/CT and in the National Library of Medicine's Visible Male using MRI T2-/T1-weighted acquisitions. Registrations of diverse clinical data sets were demonstrated, including rotate-translate mapping of PET/MRI brain scans with significant missing data, full affine mapping of thoracic PET/CT and rotate-translate mapping of abdominal SPECT/CT. A five-point thin-plate spline (TPS) warped registration of thoracic PET/CT is also demonstrated. The registration algorithm converged in times ranging between 3.5 and 31 min for affine clinical registrations and 57 min for TPS warping. Mean error vector lengths for rotate-translate registrations were measured to be subvoxel in phantoms. More importantly, the rotate-translate algorithm performs well even with missing data. The demonstrated clinical fusions are qualitatively excellent at all levels. We conclude that such automatic, rapid, robust algorithms significantly increase the likelihood that multimodality registrations will be routinely used to aid clinical diagnoses and post-therapeutic assessment in the near future.
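The quantity being maximized can be computed directly from a joint gray-level histogram; a minimal sketch, assuming the two volumes are already resampled onto a common grid:

    import numpy as np

    def mutual_information(a, b, bins=64):
        # Mutual information (in nats) from the joint gray-level histogram of
        # two overlapping volumes: MI = sum pxy * log(pxy / (px * py)).
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

Registration then amounts to searching the transform parameters (rotate-translate, full affine, or TPS control points) that maximize this value for the warped moving volume against the fixed one.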
NASA Astrophysics Data System (ADS)
Yoon, S.; Won, M.; Jang, K.; Lim, J.
2016-12-01
With a recent increase in cases of forest fires in North Korea spreading southward through the De-Militarized Zone (DMZ), ensuring a proper response to such events has been a challenge. In order to respond to and manage these forest fires appropriately, forest fire predictability must be improved by integrating mountain weather information observed at the most suitable sites. This study is a proactive case in which a spatial analysis and an on-site assessment method were developed for selecting optimum sites for mountain weather observation in national forests. For the spatial analysis, areas rated class 1 and 2 for forest fire danger over the past 10 years, accessibility within 100 m, Automatic Weather Station (AWS) redundancy within 2.5 km, and mountain terrain higher than 200 m were analyzed. A final overlay analysis was performed to select the candidates for the field assessment. The sites selected through spatial analysis were quantitatively evaluated based on the optimal meteorological environment, forest and hiking trail accessibility, AWS redundancy, and the supply of wireless communication and solar-powered electricity. Sites with a total score of 70 or higher were accepted as adequate. At the final selected sites, an automatic mountain meteorology observation station (AMOS) was established, and integration of mountain and Korea Meteorological Administration (KMA) weather data improved forest fire predictability in South Korea by 10%. Given these results, we expect that establishing automatic mountain meteorology observation stations at optimal sites in inaccessible areas and integrating mountain weather data will improve the predictability of forest fires.
Quantitative assessment of multiple sclerosis lesion load using CAD and expert input
NASA Astrophysics Data System (ADS)
Gertych, Arkadiusz; Wong, Alexis; Sangnil, Alan; Liu, Brent J.
2008-03-01
Multiple sclerosis (MS) is a frequently encountered neurological disease with a progressive but variable course affecting the central nervous system. Outline-based lesion quantification in the assessment of lesion load (LL) performed on magnetic resonance (MR) images is clinically useful and provides information about development and change reflecting overall disease burden. Methods of LL assessment that rely on human input are tedious, have higher intra- and inter-observer variability and are more time-consuming than computerized automatic (CAD) techniques. At present it seems that methods based on human lesion identification preceded by non-interactive outlining by CAD are the best LL quantification strategies. We have developed a CAD that automatically quantifies MS lesions, displays a 3-D lesion map and appends radiological findings to the original images according to the current DICOM standard. The CAD is also capable of displaying and tracking changes and making comparisons between a patient's separate MRI studies to determine disease progression. The findings are exported to a separate imaging tool for review and final approval by an expert. Capturing and standardized archiving of manual contours is also implemented. Similarity coefficients calculated from LL quantities in the collected exams show a good correlation of CAD-derived results vs. those incorporated from the expert's reading. Combining the CAD approach with expert interaction may impact the diagnostic work-up of MS patients because of improved reproducibility in LL assessment and reduced reading time for single or comparative MR exams. Inclusion of CAD-generated outlines as DICOM-compliant overlays into the image data can serve as a better reference in MS progression tracking.
NASA Astrophysics Data System (ADS)
Giuliani, M.; Pianosi, F.; Castelletti, A.
2015-11-01
Advances in environmental monitoring systems are making a wide range of data available at increasingly higher temporal and spatial resolution. This creates an opportunity to enhance real-time understanding of water system conditions and to improve prediction of their future evolution, ultimately increasing our ability to make better decisions. Yet many water systems are still operated using very simple information systems, typically based on simple statistical analysis and the operator's experience. In this work, we propose a framework to automatically select the most valuable information to inform water system operations, supported by quantitative metrics to operationally and economically assess the value of this information. The Hoa Binh reservoir in Vietnam is used to demonstrate the proposed framework in a multiobjective context, accounting for hydropower production and flood control. First, we quantify the expected value of perfect information, meaning the potential space for improvement under the assumption of exact knowledge of future system conditions. Second, we automatically select the most valuable information that could actually be used to improve the Hoa Binh operations. Finally, we assess the economic value of sample information on the basis of the resulting policy performance. Results show that our framework successfully selects information to enhance the performance of the operating policies with respect to both competing objectives, attaining a 40% improvement close to the target trade-off selected as a potentially good compromise between hydropower production and flood control.
Joshi, Vinayak; Agurto, Carla; VanNess, Richard; Nemeth, Sheila; Soliz, Peter; Barriga, Simon
2014-01-01
One of the most important signs of systemic disease that presents on the retina is vascular abnormality, as in hypertensive retinopathy. Manual analysis of fundus images by human readers is qualitative and lacks accuracy, consistency and repeatability. Present semi-automatic methods for vascular evaluation are reported to increase accuracy and reduce reader variability, but require extensive reader interaction, thus limiting software-aided efficiency. Automation therefore holds a twofold promise: first, to decrease variability while increasing accuracy, and second, to increase efficiency. In this paper we propose fully automated software as a second-reader system for comprehensive assessment of retinal vasculature, which aids readers in the quantitative characterization of vessel abnormalities in fundus images. This system provides the reader with objective measures of vascular morphology such as tortuosity and branching angles, and highlights areas with abnormalities such as artery-venous nicking, copper and silver wiring, and retinal emboli, in order for the reader to make a final screening decision. To test the efficacy of our system, we evaluated the change in performance of a newly certified retinal reader when grading a set of 40 color fundus images with and without the assistance of the software. The results demonstrated an improvement in the reader's performance with the software assistance, in terms of accuracy of detection of vessel abnormalities, determination of retinopathy, and reading time. This system enables the reader to make computer-assisted vasculature assessments with high accuracy and consistency, at a reduced reading time.
Lisowska, A; Knapp, M; Tycinska, A; Sawicki, R; Kralisz, P; Lisowski, P; Sobkowicz, B; Musial, W I
2014-02-01
Contrast-enhanced ultrasound imaging of the carotid arteries (CECU) permits direct, real-time visualization of neovascularization in atherosclerotic plaques and is a confirmed predictor of unstable atheromatous lesions. The aim of the study was the assessment of a new, automatically measured index of intensity for quantitative estimation of contrast flow through the carotid plaque (until now assessed only visually). Forty-four patients (mean age 70.4±11.4) with ultrasound-diagnosed significant stenosis of the internal carotid artery (ICA), after cerebrovascular or cardiovascular events and qualified for carotid artery stenting (CAS), were examined. Carotid ultrasound examinations with the contrast agent SonoVue were performed. Contrast flow through the atherosclerotic plaque was found visually in 22 patients (50%). In 17 patients (38.6%) massive, calcified atherosclerotic plaques were present. Patients with preserved contrast flow through the plaque more frequently had a history of cerebral stroke (P=0.04). Massive calcifications of atherosclerotic plaques correlated with a previous MI (P=0.03) and the degree of advancement of coronary artery disease (P=0.04), but not with a previous cerebral stroke. Contrast flow through the atherosclerotic plaque positively correlated with the values of the index of intensity (r=0.69, P<0.00001). In patients with preserved contrast flow the mean value of the index of intensity was 22.24±3.55 dB, compared with 12.37±7.67 dB in patients without preserved contrast flow. No significant relation between the degree of calcification and the value of the index of intensity was found. The assessment of the index of intensity is a novel, simple and automatic method to estimate the degree of contrast flow through the carotid plaque. The values of the index of intensity correlate with contrast flow through the atherosclerotic plaque, but not with its calcification.
Automatically Scoring Short Essays for Content. CRESST Report 836
ERIC Educational Resources Information Center
Kerr, Deirdre; Mousavi, Hamid; Iseli, Markus R.
2013-01-01
The Common Core assessments emphasize short essay constructed response items over multiple choice items because they are more precise measures of understanding. However, such items are too costly and time consuming to be used in national assessments unless a way is found to score them automatically. Current automatic essay scoring techniques are…
Automatic Coding of Short Text Responses via Clustering in Educational Assessment
ERIC Educational Resources Information Center
Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank
2016-01-01
Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, S; Wu, Y; Chang, X
Purpose: A novel computer software system, namely APDV (Automatic Pre-Delivery Verification), has been developed for verifying patient treatment plan parameters immediately prior to treatment delivery in order to automatically detect and prevent catastrophic errors. Methods: APDV is designed to continuously monitor new DICOM plan files on the TMS computer at the treatment console. When new plans to be delivered are detected, APDV checks the consistency of plan parameters and high-level plan statistics using underlying rules and statistical properties based on the given treatment site, technique and modality. These rules were quantitatively derived by retrospectively analyzing all the EBRT treatment plans of the past 8 years at the authors' institution. Therapists and physicists will be notified with a warning message displayed on the TMS computer if any critical errors are detected, and check results, confirmation, and dismissal actions will be saved into a database for further review. Results: APDV was implemented as a stand-alone program using C# to ensure the required real-time performance. Mean values and standard deviations were quantitatively derived for various plan parameters including MLC usage, MU/cGy ratio, beam SSD, beam weighting, and beam gantry angles (only for lateral targets) per treatment site, technique and modality. 2D rules combining MU/cGy ratio and averaged SSD values were also derived using joint probabilities of confidence error ellipses. The statistics of these major treatment plan parameters quantitatively evaluate the consistency of any treatment plan, which facilitates automatic APDV checking procedures. Conclusion: APDV could be useful in detecting and preventing catastrophic errors immediately before treatment delivery. Future plans include automatic patient identity and patient setup checks after daily patient images are acquired by the machine and become available on the TMS computer. This project is supported by the Agency for Healthcare Research and Quality (AHRQ) under award 1R01HS0222888. The senior author received research grants from ViewRay Inc. and Varian Medical System.
Software Cost Measuring and Reporting. One of the Software Acquisition Engineering Guidebook Series.
1979-01-02
...through the peripherals. However, his interaction is usually minimal since, by definition, the automatic test ... performs its intended functions properly. ... Software estimating is still heavily dependent on experienced judgement. However, quantitative methods ... apply to systems of totally different content. ... The work can be distributed to specialists who are most familiar with it.
An Online Risk Monitor System (ORMS) to Increase Safety and Security Levels in Industry
NASA Astrophysics Data System (ADS)
Zubair, M.; Rahman, Khalil Ur; Hassan, Mehmood Ul
2013-12-01
The main idea of this research is to develop an Online Risk Monitor System (ORMS) based on Living Probabilistic Safety Assessment (LPSA). The article highlights the essential features and functions of ORMS. The basic models and modules, such as the Reliability Data Update Model (RDUM), running time update, redundant system unavailability update, Engineered Safety Features (ESF) unavailability update and general system update, are described in this study. ORMS not only provides quantitative analysis but also highlights qualitative aspects of risk measures. ORMS is capable of automatically updating the online risk models and reliability parameters of equipment, and can support the decision-making process of operators and managers in nuclear power plants.
Hagopian, Louis P.; Rooker, Griffin W.; Zarcone, Jennifer R.; Bonner, Andrew C.; Arevalo, Alexander R.
2017-01-01
Hagopian, Rooker, and Zarcone (2015) evaluated a model for subtyping automatically reinforced self-injurious behavior (SIB) based on its sensitivity to changes in functional analysis conditions and the presence of self-restraint. The current study tested the generality of the model by applying it to all datasets of automatically reinforced SIB published from 1982 to 2015. We identified 49 datasets that included sufficient data to permit subtyping. Similar to the original study, Subtype-1 SIB was generally amenable to treatment using reinforcement alone, whereas Subtype-2 SIB was not. Conclusions could not be drawn about Subtype-3 SIB due to the small number of datasets. Nevertheless, the findings support the generality of the model and suggest that sensitivity of SIB to disruption by alternative reinforcement is an important dimension of automatically reinforced SIB. Findings also suggest that automatically reinforced SIB should no longer be considered a single category and that additional research is needed to better understand and treat Subtype-2 SIB. PMID:28032344
NASA Astrophysics Data System (ADS)
Lazariev, A.; Allouche, A.-R.; Aubert-Frécon, M.; Fauvelle, F.; Piotto, M.; Elbayed, K.; Namer, I.-J.; van Ormondt, D.; Graveron-Demilly, D.
2011-11-01
High-resolution magic angle spinning (HRMAS) nuclear magnetic resonance (NMR) is playing an increasingly important role in diagnosis. This technique enables setting up metabolite profiles of ex vivo pathological and healthy tissue. Disease monitoring and pharmaceutical follow-up require automatic quantitation of HRMAS 1H signals. However, for several metabolites, the chemical shift values of proton groups may differ slightly according to the micro-environment in the tissue or cells, in particular its pH. This hampers accurate estimation of metabolite concentrations, mainly when using quantitation algorithms based on a metabolite basis set: the metabolite fingerprints are no longer correct. In this work, we propose an accurate method coupling quantum mechanical simulations and quantitation algorithms to handle basis-set changes. The proposed algorithm automatically corrects mismatches between the signals of the simulated basis set and the signal under analysis by maximizing the normalized cross-correlation between the mentioned signals. Optimized chemical shift values of the metabolites are obtained. This method, QM-QUEST, provides more robust fitting while limiting user involvement and respects the correct fingerprints of metabolites. Its efficiency is demonstrated by accurately quantitating 33 signals from tissue samples of human brains with oligodendroglioma, obtained at 11.7 tesla. The corresponding chemical shift changes of several metabolites within the series are also analyzed.
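The mismatch-correction step amounts to a one-dimensional search; a minimal sketch, assuming real-valued magnitude spectra sampled on the same ppm axis and a rigid shift per metabolite signal (QM-QUEST additionally re-simulates the basis set quantum mechanically, which is not reproduced here):

    import numpy as np

    def align_metabolite(basis_sig, observed, max_shift=50):
        # Shift a simulated metabolite signal (in points) to maximize its
        # normalized cross-correlation with the signal under analysis.
        # np.roll wraps around, which is acceptable for small shifts of
        # signals with near-zero tails.
        best_shift, best_score = 0, -np.inf
        for s in range(-max_shift, max_shift + 1):
            shifted = np.roll(basis_sig, s)
            score = np.dot(shifted, observed) / (
                np.linalg.norm(shifted) * np.linalg.norm(observed))
            if score > best_score:
                best_shift, best_score = s, score
        return np.roll(basis_sig, best_shift), best_shift

The optimal shift, converted back to ppm, gives the corrected chemical shift value for that metabolite before the final quantitation fit.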
Mimicry and automatic imitation are not correlated
van Den Bossche, Sofie; Cracco, Emiel; Bardi, Lara; Rigoni, Davide; Brass, Marcel
2017-01-01
It is widely known that individuals have a tendency to imitate each other. However, different psychological disciplines assess imitation in different manners. While social psychologists assess mimicry by means of action observation, cognitive psychologists assess automatic imitation with reaction time based measures on a trial-by-trial basis. Although these methods differ in crucial methodological aspects, both phenomena are assumed to rely on similar underlying mechanisms. This raises the fundamental question whether mimicry and automatic imitation are actually correlated. In the present research we assessed both phenomena and did not find a meaningful correlation. Moreover, personality traits such as empathy, autism traits, and traits related to self- versus other-focus did not correlate with mimicry or automatic imitation either. Theoretical implications are discussed. PMID:28877197
NASA Technical Reports Server (NTRS)
Wilckens, V.
1972-01-01
Present information display concepts for pilot landing guidance are outlined, considering manual control as well as the substitution of the pilot by fully competent automatics. Display improvements are achieved by compressing the distributed indicators into an accumulative display, thus reducing information scanning. Complete integration of quantitative indications, outer-loop information, and real-world display in a pictorial information channel geometry constitutes an interface with the human ability to differentiate and integrate for optimal manual control of the aircraft.
Shima, Yoichiro; Suwa, Akina; Gomi, Yuichiro; Nogawa, Hiroki; Nagata, Hiroshi; Tanaka, Hiroshi
2007-01-01
Real-time video pictures can be transmitted inexpensively via a broadband connection using the DVTS (digital video transport system). However, the degradation of video pictures transmitted by DVTS has not been sufficiently evaluated. We examined the application of DVTS to remote consultation by using images of laparoscopic and endoscopic surgeries. A subjective assessment of the transmitted video pictures was carried out by eight doctors using the double stimulus continuous quality scale (DSCQS) method. Three of the four video recordings were assessed as being transmitted with no degradation in quality. None of the doctors noticed any degradation in the images due to encryption by the VPN (virtual private network) system. We also used an automatic picture quality assessment system to make an objective assessment of the same images. The objective DSCQS values were similar to the subjective ones. We conclude that although the quality of video pictures transmitted by the DVTS was slightly reduced, they were useful for clinical purposes. Encryption with a VPN did not degrade image quality.
Automatic colorimetric calibration of human wounds
2010-01-01
Background Recently, digital photography in medicine has come to be considered an acceptable tool in many clinical domains, e.g. wound care. Although ever higher resolutions are available, reproducibility is still poor and visual comparison of images remains difficult. This is even more the case for measurements performed on such images (colour, area, etc.). This problem is often neglected and images are freely compared and exchanged without further thought. Methods The first experiment checked whether camera settings or lighting conditions could negatively affect the quality of colorimetric calibration. Digital images plus a calibration chart were exposed to a variety of conditions. Precision and accuracy of colours after calibration were quantitatively assessed with a probability distribution for perceptual colour differences (dE_ab). The second experiment was designed to assess the impact of the automatic calibration procedure (i.e. chart detection) on real-world measurements. 40 different images of real wounds were acquired and a region of interest was selected in each image. 3 rotated versions of each image were automatically calibrated and colour differences were calculated. Results First experiment: colour differences between the measurements and real spectrophotometric measurements reveal median dE_ab values of 6.40 for the proper patches of calibrated normal images and 17.75 for uncalibrated images, demonstrating an important improvement in accuracy after calibration. The reproducibility, visualized by the probability distribution of the dE_ab errors between 2 measurements of the patches of the images, has a median of 3.43 dE_ab for all calibrated images and 23.26 dE_ab for all uncalibrated images. If we restrict ourselves to the proper patches of normal calibrated images the median is only 2.58 dE_ab. Wilcoxon rank-sum testing between uncalibrated and calibrated normal images with proper squares yielded p-values equal to 0 (p < 0.05), demonstrating a highly significant improvement in reproducibility. In the second experiment, the reproducibility of the chart detection during automatic calibration is presented using a probability distribution of dE_ab errors between 2 measurements of the same ROI. Conclusion The investigators proposed an automatic colour calibration algorithm that ensures reproducible colour content of digital images. Evidence was provided that images taken with commercially available digital cameras can be calibrated independently of any camera settings and illumination features. PMID:20298541
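The reported dE_ab figures are CIE76 colour differences, i.e. Euclidean distances in CIELAB space; a minimal sketch with hypothetical patch values:

    import numpy as np

    def delta_e_ab(lab1, lab2):
        # CIE76 colour difference: Euclidean distance in CIELAB space.
        return np.linalg.norm(
            np.asarray(lab1, float) - np.asarray(lab2, float), axis=-1)

    # Example: compare a calibrated patch measurement against the
    # spectrophotometric reference value of the same chart patch.
    measured = [52.1, 20.3, -11.0]   # L*, a*, b* after calibration
    reference = [50.0, 18.0, -10.0]  # spectrophotometer ground truth
    print("dE_ab = %.2f" % delta_e_ab(measured, reference))

A dE_ab around 2-3, as reported for the calibrated proper patches, is commonly taken to be near the threshold of what an observer can perceive.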
Automatic detection and quantitative analysis of cells in the mouse primary motor cortex
NASA Astrophysics Data System (ADS)
Meng, Yunlong; He, Yong; Wu, Jingpeng; Chen, Shangbin; Li, Anan; Gong, Hui
2014-09-01
Neuronal cells play a very important role in metabolic regulation and mechanism control, so cell number is a fundamental determinant of brain function. By combining suitable cell-labeling approaches with recently proposed three-dimensional optical imaging techniques, whole mouse brain coronal sections can be acquired with 1-μm voxel resolution. We have developed a completely automatic pipeline to perform cell centroid detection, and provide three-dimensional quantitative information on cells in the primary motor cortex of the C57BL/6 mouse. It involves four principal steps: i) preprocessing; ii) image binarization; iii) cell centroid extraction and contour segmentation; iv) laminar density estimation. Investigations of the presented method reveal promising detection accuracy in terms of recall and precision, with an average recall rate of 92.1% and an average precision rate of 86.2%. We also analyze the laminar density distribution of cells from the pial surface to the corpus callosum from the output vectorizations of detected cell centroids in mouse primary motor cortex, and find significant cellular density variations across layers. This automatic cell centroid detection approach will be beneficial for fast cell counting and accurate density estimation, as time-consuming and error-prone manual identification is avoided.
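Steps i)-iv) map onto standard scipy.ndimage operations. The sketch below is a simplified stand-in for the authors' pipeline, with the threshold rule, size cut and 50-μm depth bins as illustrative assumptions (depths are in pixels, which equal micrometres at the stated 1-μm voxel resolution).

    import numpy as np
    from scipy import ndimage

    def detect_centroids(img, sigma=2.0, min_size=20):
        # i) preprocessing: Gaussian smoothing to suppress noise
        smooth = ndimage.gaussian_filter(img.astype(float), sigma)
        # ii) binarization: simple global threshold (Otsu etc. in practice)
        binary = smooth > smooth.mean() + smooth.std()
        # iii) centroid extraction: connected components above a size cut
        labels, n = ndimage.label(binary)
        sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
        keep = np.nonzero(sizes >= min_size)[0] + 1
        return ndimage.center_of_mass(binary, labels, keep)

    def laminar_density(centroids, depth_axis=0, bin_um=50, n_bins=20):
        # iv) laminar density: histogram of centroid depths measured from
        # the pial surface toward the corpus callosum.
        depths = np.array([c[depth_axis] for c in centroids])
        hist, _ = np.histogram(depths, bins=n_bins, range=(0, n_bins * bin_um))
        return hist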
NASA Astrophysics Data System (ADS)
Yu, You; Han, Yanchao; Xu, Miao; Zhang, Lingling; Dong, Shaojun
2016-04-01
Inverted illumination compensation is important in energy-saving projects, artificial photosynthesis and some forms of agriculture, such as hydroponics. However, only a few illumination adjustments based on self-powered biodetectors that quantitatively detect the intensity of visible light have been reported. We constructed an automatic illumination compensation device based on a photoelectrochemical biofuel cell (PBFC) driven by visible light. The PBFC consisted of a glucose dehydrogenase modified bioanode and a p-type semiconductor cuprous oxide photocathode. The PBFC had a high power output of 161.4 μW cm-2 and an open circuit potential that responded rapidly to visible light. It adjusted the amount of illumination inversely irrespective of how the external illumination was changed. This rational design of utilizing PBFCs provides new insights into automatic light adjustable devices and may be of benefit to intelligent applications.
Zayas, Vivian; Shoda, Yuichi
2005-08-01
Three studies tested the expectation that automatic reactions elicited by the mental representation of one's current romantic partner, mother, and self relate to adult romantic attachment. Adult romantic attachment was assessed using multiple measures, and individual differences in automatic reactions were assessed by the Implicit Association Test (IAT). Studies 1 and 2 showed that automatic reactions elicited by thoughts of current romantic partner, but not by thoughts of self, were related to adult romantic attachment assessed at a specific (i.e., within one's current romantic relationship) and general level (i.e., across all romantic relationships). The pattern of results was stronger among individuals identified as attachment-schematic. Studies 2 and 3 showed that automatic reactions elicited by thoughts of one's mother were related to adult romantic attachment assessed at a general level. In all three studies, results did not differ depending on how adult romantic attachment was conceptualized (four styles vs. two dimensions).
Lv, Peijie; Liu, Jie; Zhang, Rui; Jia, Yan
2015-01-01
Objective To assess the lesion conspicuity and image quality in CT evaluation of small (≤ 3 cm) hepatocellular carcinomas (HCCs) using automatic tube voltage selection (ATVS) and automatic tube current modulation (ATCM) with or without iterative reconstruction. Materials and Methods One hundred and five patients with 123 HCC lesions were included. Fifty-seven patients were scanned using both ATVS and ATCM and images were reconstructed using either filtered back-projection (FBP) (group A1) or sinogram-affirmed iterative reconstruction (SAFIRE) (group A2). Forty-eight patients were imaged using only ATCM, with a fixed tube potential of 120 kVp and FBP reconstruction (group B). Quantitative parameters (image noise in Hounsfield unit and contrast-to-noise ratio of the aorta, the liver, and the hepatic tumors) and qualitative visual parameters (image noise, overall image quality, and lesion conspicuity as graded on a 5-point scale) were compared among the groups. Results Group A2 scanned with the automatically chosen 80 kVp and 100 kVp tube voltages ranked the best in lesion conspicuity and subjective and objective image quality (p values ranging from < 0.001 to 0.004) among the three groups, except for overall image quality between group A2 and group B (p = 0.022). Group A1 showed higher image noise (p = 0.005) but similar lesion conspicuity and overall image quality as compared with group B. The radiation dose in group A was 19% lower than that in group B (p = 0.022). Conclusion CT scanning with combined use of ATVS and ATCM and image reconstruction with SAFIRE algorithm provides higher lesion conspicuity and better image quality for evaluating small hepatic HCCs with radiation dose reduction. PMID:25995682
Yang Li; Wei Liang; Yinlong Zhang; Haibo An; Jindong Tan
2016-08-01
Automatic and accurate lumbar vertebrae detection is an essential step of image-guided minimally invasive spine surgery (IG-MISS). However, traditional methods still require human intervention due to the similarity of vertebrae, abnormal pathological conditions and uncertain imaging angles. In this paper, we present a novel convolutional neural network (CNN) model to automatically detect lumbar vertebrae in C-arm X-ray images. Training data are augmented by DRR, and automatic segmentation of the ROI reduces the computational complexity. Furthermore, a feature fusion deep learning (FFDL) model is introduced to combine two types of features of lumbar vertebrae X-ray images, using a Sobel kernel and a Gabor kernel to obtain the contour and texture of the lumbar vertebrae, respectively. Comprehensive qualitative and quantitative experiments demonstrate that our proposed model performs more accurately in abnormal cases with pathologies and surgical implants in multi-angle views.
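As a companion to the description above, here is a minimal sketch of the two-channel feature preparation the FFDL model is said to fuse: a Sobel-based contour map and a Gabor-based texture map. The kernel sizes and Gabor parameters are illustrative assumptions, and the network itself is not reproduced.

```python
import cv2
import numpy as np

def contour_texture_channels(gray):
    """Two feature channels as described for FFDL: Sobel gradient
    magnitude (contour) and a Gabor filter response (texture).
    All filter parameters are illustrative, not the paper's."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    contour = cv2.magnitude(gx, gy)
    gabor = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=0.0,
                               lambd=10.0, gamma=0.5)
    texture = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, gabor)
    return np.stack([contour, texture], axis=-1)  # H x W x 2

img = np.random.randint(0, 256, (128, 128), np.uint8)  # stand-in X-ray
print(contour_texture_channels(img).shape)  # (128, 128, 2)
```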
Iyer, Janani; Wang, Qingyu; Le, Thanh; Pizzo, Lucilla; Grönke, Sebastian; Ambegaokar, Surendra S.; Imai, Yuzuru; Srivastava, Ashutosh; Troisí, Beatriz Llamusí; Mardon, Graeme; Artero, Ruben; Jackson, George R.; Isaacs, Adrian M.; Partridge, Linda; Lu, Bingwei; Kumar, Justin P.; Girirajan, Santhosh
2016-01-01
About two-thirds of the vital genes in the Drosophila genome are involved in eye development, making the fly eye an excellent genetic system to study cellular function and development, neurodevelopment/degeneration, and complex diseases such as cancer and diabetes. We developed a novel computational method, implemented as Flynotyper software (http://flynotyper.sourceforge.net), to quantitatively assess the morphological defects in the Drosophila eye resulting from genetic alterations affecting basic cellular and developmental processes. Flynotyper utilizes a series of image processing operations to automatically detect the fly eye and the individual ommatidium, and calculates a phenotypic score as a measure of the disorderliness of ommatidial arrangement in the fly eye. As a proof of principle, we tested our method by analyzing the defects due to eye-specific knockdown of Drosophila orthologs of 12 neurodevelopmental genes to accurately document differential sensitivities of these genes to dosage alteration. We also evaluated eye images from six independent studies assessing the effect of overexpression of repeats, candidates from peptide library screens, and modifiers of neurotoxicity and developmental processes on eye morphology, and show strong concordance with the original assessment. We further demonstrate the utility of this method by analyzing 16 modifiers of sine oculis obtained from two genome-wide deficiency screens of Drosophila and accurately quantifying the effect of its enhancers and suppressors during eye development. Our method will complement existing assays for eye phenotypes, and increase the accuracy of studies that use fly eyes for functional evaluation of genes and genetic interactions. PMID:26994292
Automatic estimation of extent of resection and residual tumor volume of patients with glioblastoma.
Meier, Raphael; Porz, Nicole; Knecht, Urspeter; Loosli, Tina; Schucht, Philippe; Beck, Jürgen; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio
2017-10-01
OBJECTIVE In the treatment of glioblastoma, residual tumor burden is the only prognostic factor that can be actively influenced by therapy. Therefore, an accurate, reproducible, and objective measurement of residual tumor burden is necessary. This study aimed to evaluate the use of a fully automatic segmentation method, brain tumor image analysis (BraTumIA), for estimating the extent of resection (EOR) and residual tumor volume (RTV) of contrast-enhancing tumor after surgery. METHODS The imaging data of 19 patients who underwent primary resection of histologically confirmed supratentorial glioblastoma were retrospectively reviewed. Contrast-enhancing tumors apparent on structural preoperative and immediate postoperative MR imaging in this patient cohort were segmented by 4 different raters and by the automatic BraTumIA segmentation software. The manual and automatic results were quantitatively compared. RESULTS First, the interrater variabilities in the estimates of EOR and RTV were assessed for all human raters. Interrater agreement in terms of the coefficient of concordance (W) was higher for RTV (W = 0.812; p < 0.001) than for EOR (W = 0.775; p < 0.001). Second, the volumetric estimates of BraTumIA for all 19 patients were compared with the estimates of the human raters, which showed that for both EOR (W = 0.713; p < 0.001) and RTV (W = 0.693; p < 0.001) the estimates of BraTumIA were generally located close to or between the estimates of the human raters. No statistically significant differences were detected between the manual and automatic estimates. BraTumIA showed a tendency to overestimate contrast-enhancing tumors, leading to moderate agreement with expert raters with respect to the literature-based, survival-relevant threshold values for EOR. CONCLUSIONS BraTumIA can generate volumetric estimates of EOR and RTV, in a fully automatic fashion, which are comparable to the estimates of human experts. However, automated analysis showed a tendency to overestimate the volume of a contrast-enhancing tumor, whereas manual analysis is prone to subjectivity, thereby causing considerable interrater variability.
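EOR and RTV as used above follow the standard volumetric definitions; the sketch below computes both from binary tumor masks. The voxel size is an assumed parameter, and this is not the BraTumIA code.

```python
import numpy as np

def eor_and_rtv(pre_mask, post_mask, voxel_volume_mm3=1.0):
    """Extent of resection and residual tumor volume from binary masks
    of contrast-enhancing tumor on pre- and postoperative MRI:
    EOR = (preop volume - postop volume) / preop volume,
    RTV = postop volume (reported here in cm^3)."""
    pre_vol = pre_mask.sum() * voxel_volume_mm3
    post_vol = post_mask.sum() * voxel_volume_mm3
    return (pre_vol - post_vol) / pre_vol, post_vol / 1000.0

pre = np.zeros((64, 64, 64), bool); pre[20:40, 20:40, 20:40] = True
post = np.zeros_like(pre); post[20:25, 20:25, 20:25] = True
print(eor_and_rtv(pre, post))  # high EOR, small residual volume
```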
Yong, Yan Ling; Tan, Li Kuo; McLaughlin, Robert A; Chee, Kok Han; Liew, Yih Miin
2017-12-01
Intravascular optical coherence tomography (OCT) is an optical imaging modality commonly used in the assessment of coronary artery diseases during percutaneous coronary intervention. Manual segmentation to assess luminal stenosis from OCT pullback scans is challenging and time consuming. We propose a linear-regression convolutional neural network to automatically perform vessel lumen segmentation, parameterized in terms of radial distances from the catheter centroid in polar space. Benchmarked against gold-standard manual segmentation, our proposed algorithm achieves average locational accuracy of the vessel wall of 22 microns, and 0.985 and 0.970 in Dice coefficient and Jaccard similarity index, respectively. The average absolute error of luminal area estimation is 1.38%. The processing rate is 40.6 ms per image, suggesting the potential to be incorporated into a clinical workflow and to provide quantitative assessment of vessel lumen in an intraoperative time frame. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
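The Dice coefficient and Jaccard index quoted above are the standard overlap measures between the automatic and manual lumen masks; a minimal sketch:

```python
import numpy as np

def dice_jaccard(a, b):
    """Dice = 2|A∩B| / (|A| + |B|); Jaccard = |A∩B| / |A∪B|,
    for two binary segmentation masks of the vessel lumen."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum()), inter / np.logical_or(a, b).sum()

auto = np.zeros((256, 256), bool); auto[100:160, 100:160] = True
manual = np.zeros_like(auto);      manual[105:165, 100:160] = True
print(dice_jaccard(auto, manual))
```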
Artificial intelligence in radiology.
Hosny, Ahmed; Parmar, Chintan; Quackenbush, John; Schwartz, Lawrence H; Aerts, Hugo J W L
2018-05-17
Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.
Automated volumetric evaluation of stereoscopic disc photography
Xu, Juan; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A; Kagemann, Larry; Craig, Jamie E; Mackey, David A; Hewitt, Alex W; Schuman, Joel S
2010-01-01
PURPOSE: To develop a fully automated algorithm (AP) to perform a volumetric measure of the optic disc using conventional stereoscopic optic nerve head (ONH) photographs, and to compare algorithm-produced parameters with manual photogrammetry (MP), scanning laser ophthalmoscope (SLO) and optical coherence tomography (OCT) measurements. METHODS: One hundred twenty-two stereoscopic optic disc photographs (61 subjects) were analyzed. Disc area, rim area, cup area, cup/disc area ratio, vertical cup/disc ratio, rim volume and cup volume were automatically computed by the algorithm. Latent variable measurement error models were used to assess measurement reproducibility for the four techniques. RESULTS: AP had better reproducibility for disc area and cup volume and worse reproducibility for cup/disc area ratio and vertical cup/disc ratio, when the measurements were compared to the MP, SLO and OCT methods. CONCLUSION: AP provides a useful technique for an objective quantitative assessment of 3D ONH structures. PMID:20588996
Panetta, Daniele; Pelosi, Gualtiero; Viglione, Federica; Kusmic, Claudia; Terreni, Marianna; Belcari, Nicola; Guerra, Alberto Del; Athanasiou, Lambros; Exarchos, Themistoklis; Fotiadis, Dimitrios I; Filipovic, Nenad; Trivella, Maria Giovanna; Salvadori, Piero A; Parodi, Oberdan
2015-01-01
Micro-CT is an established imaging technique for high-resolution non-destructive assessment of vascular samples, which is gaining growing interest for investigations of atherosclerotic arteries both in humans and in animal models. However, micro-CT image metrics suitable for comprehensive evaluation and quantification of features of interest in the field of experimental atherosclerosis (ATS) are still lacking. A novel approach to micro-CT image processing for profiling of coronary ATS is described, providing comprehensive visualization and quantification of contrast agent-free 3D high-resolution reconstructions of full-length artery walls. Accelerated coronary ATS was induced by a high-fat, cholesterol-enriched diet in swine and the left coronary artery (LCA) harvested en bloc for micro-CT scanning and histologic processing. A cylindrical coordinate system has been defined on the image space after curved multiplanar reformation of the coronary vessel for the comprehensive visualization of the main vessel features such as wall thickening and calcium content. A novel semi-automatic segmentation procedure based on 2D histograms has been implemented and the quantitative results validated by histology. The potential of attenuation-based micro-CT at low kV to reliably separate arterial wall layers from adjacent tissue, as well as to identify wall and plaque contours and major tissue components, has been validated by histology. Morphometric indexes from histological data corresponding to several micro-CT slices were derived (double-observer evaluation at different coronary ATS stages) and highly significant correlations (R2 > 0.90) were evidenced. Semi-automatic morphometry was validated by double-observer manual morphometry of micro-CT slices and highly significant correlations were found (R2 > 0.92). The micro-CT methodology described represents a handy and reliable tool for quantitative, high-resolution, contrast agent-free, full-length coronary wall profiling, able to assist atherosclerotic vessel morphometry in a preclinical experimental model of coronary ATS and providing a link between in vivo imaging and histology.
Im, K; Guimaraes, A; Kim, Y; Cottrill, E; Gagoski, B; Rollins, C; Ortinau, C; Yang, E; Grant, P E
2017-07-01
Aberrant gyral folding is a key feature in the diagnosis of many cerebral malformations. However, in fetal life, it is particularly challenging to confidently diagnose aberrant folding because of the rapid spatiotemporal changes of gyral development. Currently, there is no resource to measure how an individual fetal brain compares with normal spatiotemporal variations. In this study, we assessed the potential for automatic analysis of early sulcal patterns to detect individual fetal brains with cerebral abnormalities. Triplane MR images were aligned to create a motion-corrected volume for each individual fetal brain, and cortical plate surfaces were extracted. Sulcal basins were automatically identified on the cortical plate surface and compared with a combined set generated from 9 normal fetal brain templates. Sulcal pattern similarities to the templates were quantified by using multivariate geometric features and intersulcal relationships for 14 normal fetal brains and 5 fetal brains that were proved to be abnormal on postnatal MR imaging. Results were compared with the gyrification index. Significantly reduced sulcal pattern similarities to normal templates were found in all abnormal individual fetuses compared with normal fetuses (mean similarity [normal, abnormal], left: 0.818, 0.752; P < .001; right: 0.810, 0.753; P < .01). Altered location and depth patterns of sulcal basins were the primary distinguishing features. The gyrification index was not significantly different between the normal and abnormal groups. Automated analysis of interrelated patterning of early primary sulci could outperform the traditional gyrification index and has the potential to quantitatively detect individual fetuses with emerging abnormal sulcal patterns. © 2017 by American Journal of Neuroradiology.
Analysis of automated quantification of motor activity in REM sleep behaviour disorder.
Frandsen, Rune; Nikolic, Miki; Zoetmulder, Marielle; Kempfner, Lykke; Jennum, Poul
2015-10-01
Rapid eye movement (REM) sleep behaviour disorder (RBD) is characterized by dream enactment and REM sleep without atonia. Atonia is evaluated on the basis of visual criteria, but there is a need for more objective, quantitative measurements. We aimed to define and optimize a method for establishing the baseline and all other parameters for automatically quantifying submental motor activity during REM sleep. We analysed the electromyographic activity of the submental muscle in polysomnographs of 29 patients with idiopathic RBD (iRBD), 29 controls and 43 Parkinson's disease (PD) patients. Six adjustable parameters for motor activity were defined. Motor activity was detected and quantified automatically. The optimal parameters for separating RBD patients from controls were investigated by identifying the greatest area under the receiver operating characteristic curve from a total of 648 possible combinations. The optimal parameters were validated on PD patients. Automatic baseline estimation improved characterization of atonia during REM sleep, as it eliminates inter-/intra-observer variability and can be standardized across diagnostic centres. We found an optimized method for quantifying motor activity during REM sleep. The method was stable and can be used to differentiate RBD patients from controls and to quantify motor activity during REM sleep in patients with neurodegeneration. No control had more than 30% of REM sleep with increased motor activity; patients with known RBD had activity levels as low as 4.5%. We developed and applied a sensitive, quantitative, automatic algorithm to evaluate loss of atonia in RBD patients. © 2015 European Sleep Research Society.
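The selection of the optimal parameter combination described above (the best of 648 by area under the ROC curve) can be sketched as an exhaustive search. The parameters, candidate values, and scoring function below are placeholders, not the authors':

```python
import itertools
import numpy as np
from sklearn.metrics import roc_auc_score

def best_parameters(param_grid, score_fn, labels):
    """Exhaustive search over parameter combinations, keeping the one
    whose per-subject motor-activity score best separates iRBD (1)
    from controls (0) by ROC AUC. `score_fn` stands in for the EMG
    quantification and must return one score per subject."""
    best_auc, best_combo = -1.0, None
    for combo in itertools.product(*param_grid.values()):
        scores = score_fn(dict(zip(param_grid.keys(), combo)))
        auc = roc_auc_score(labels, scores)
        if auc > best_auc:
            best_auc, best_combo = auc, combo
    return best_combo, best_auc

# Toy demonstration with two hypothetical parameters
rng = np.random.default_rng(1)
labels = np.array([0] * 29 + [1] * 29)  # 29 controls, 29 iRBD patients
grid = {"amplitude_factor": [1.5, 2.0, 3.0], "min_duration_s": [0.1, 0.3, 0.5]}
fake_score = lambda p: rng.normal(labels * p["amplitude_factor"], 1.0)
print(best_parameters(grid, fake_score, labels))
```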
Hautvast, Gilion L T F; Salton, Carol J; Chuang, Michael L; Breeuwer, Marcel; O'Donnell, Christopher J; Manning, Warren J
2012-05-01
Quantitative analysis of short-axis functional cardiac magnetic resonance images can be performed using automatic contour detection methods. The resulting myocardial contours must be reviewed and possibly corrected, which can be time-consuming, particularly when performed across all cardiac phases. We quantified the impact of manual contour corrections on both analysis time and quantitative measurements obtained from left ventricular short-axis cine images acquired from 1555 participants of the Framingham Heart Study Offspring cohort using computer-aided contour detection methods. The total analysis time for a single case was 7.6 ± 1.7 min for an average of 221 ± 36 myocardial contours per participant. This included 4.8 ± 1.6 min for manual contour correction of 2% of all automatically detected endocardial contours and 8% of all automatically detected epicardial contours. However, the impact of these corrections on global left ventricular parameters was limited, introducing differences of 0.4 ± 4.1 mL for end-diastolic volume, -0.3 ± 2.9 mL for end-systolic volume, 0.7 ± 3.1 mL for stroke volume, and 0.3 ± 1.8% for ejection fraction. We conclude that left ventricular functional parameters can be obtained under 5 min from short-axis functional cardiac magnetic resonance images using automatic contour detection methods. Manual correction more than doubles analysis time, with minimal impact on left ventricular volumes and ejection fraction. Copyright © 2011 Wiley Periodicals, Inc.
Alves, Antoine; Attik, Nina; Bayon, Yves; Royet, Elodie; Wirth, Carine; Bourges, Xavier; Piat, Alexis; Dolmazon, Gaëlle; Clermont, Gaëlle; Boutrand, Jean-Pierre; Grosgogeat, Brigitte; Gritsch, Kerstin
2018-03-14
The paradigm shift brought about by the expansion of tissue engineering and regenerative medicine away from the use of biomaterials currently questions the value of histopathologic methods in the evaluation of biological changes. To date, the available tools of evaluation are not fully consistent and satisfactory for these advanced therapies. We have developed a new, simple and inexpensive quantitative digital approach that provides key metrics for structural and compositional characterization of the regenerated tissues. For example, the metrics provide the tissue ingrowth rate (TIR), which integrates two separate indicators, the cell ingrowth rate (CIR) and the total collagen content (TCC), as featured in the equation TIR% = CIR% + TCC%. Moreover, a subset of quantitative indicators describing the directional organization of the collagen (relating structure and mechanical function of tissues), the ratio of collagen I to collagen III (remodeling quality) and the optical anisotropy property of the collagen (maturity indicator) was automatically assessed as well. Using an image analyzer, all metrics were extracted from only two serial sections stained with either Feulgen & Rossenbeck (cell specific) or Picrosirius Red F3BA (collagen specific). To validate this new procedure, three-dimensional (3D) scaffolds were intraperitoneally implanted in healthy and in diabetic rats. It was hypothesized that, quantitatively, the healing tissue would be significantly delayed and of poorer quality in diabetic rats in comparison to healthy rats. In addition, a chemically modified 3D scaffold was similarly implanted in a third group of healthy rats with the assumption that modulation of the ingrown tissue would be quantitatively detectable in comparison to the 3D scaffold-healthy group. After 21 days of implantation, both hypotheses were verified by use of this novel computerized approach. When the two methods were run in parallel, the quantitative results revealed fine details and differences not detected by the semi-quantitative assessment, demonstrating the importance of quantitative analysis in the performance evaluation of soft tissue healing. This automated and supervised method reduced operator dependency and proved to be simple, sensitive, cost-effective and time-effective. It supports objective therapeutic comparisons and helps to elucidate regeneration and the dynamics of a functional tissue.
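The central metric above is the additive decomposition TIR% = CIR% + TCC%. A minimal sketch of that computation from per-pixel masks follows; the masks would come from the two stained serial sections, and the segmentation itself (thresholds, stain separation) is assumed, not the published settings.

```python
import numpy as np

def ingrowth_metrics(cell_mask, collagen_mask, roi_mask):
    """Tissue ingrowth rate decomposed as TIR% = CIR% + TCC%, all as
    percentages of the region of interest. The cell mask would come
    from the Feulgen & Rossenbeck section and the collagen mask from
    the Picrosirius Red section; the two are assumed disjoint so the
    decomposition is additive."""
    roi = roi_mask.sum()
    cir = 100.0 * np.logical_and(cell_mask, roi_mask).sum() / roi
    tcc = 100.0 * np.logical_and(collagen_mask, roi_mask).sum() / roi
    return {"CIR%": cir, "TCC%": tcc, "TIR%": cir + tcc}

roi = np.ones((512, 512), bool)
cells = np.random.rand(512, 512) < 0.12       # toy 12% cell coverage
collagen = ~cells & (np.random.rand(512, 512) < 0.30)
print(ingrowth_metrics(cells, collagen, roi))
```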
NASA Astrophysics Data System (ADS)
Irshad, Mehreen; Muhammad, Nazeer; Sharif, Muhammad; Yasmeen, Mussarat
2018-04-01
Conventionally, cardiac MR image analysis is done manually. Automatic analysis can replace the monotonous task of examining massive amounts of data to assess the global and regional function of the cardiac left ventricle (LV). This task is performed using MR images to calculate analytic cardiac parameters such as end-systolic volume, end-diastolic volume, ejection fraction, and myocardial mass. These analytic parameters depend upon accurate delineation of the epicardial, endocardial, papillary muscle, and trabeculation contours. In this paper, we propose an automatic segmentation method using the sum of absolute differences technique to localize the left ventricle. Blind morphological operations are proposed to automatically segment and detect the LV contours of the epicardium and endocardium. We evaluate the proposed work on the benchmark Sunnybrook dataset. Contours of the epicardium and endocardium are compared quantitatively to determine contour accuracy, and high matching values are observed. The similarity (overlap) between the automatic examination and the ground truth analysis provided by an expert is high, with an index value of 91.30%. The proposed method for automatic segmentation gives better performance relative to existing techniques in terms of accuracy.
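The localization step above names the sum of absolute differences (SAD) as the matching criterion. A minimal sketch of SAD template matching follows; how the LV template itself is obtained is not specified in the abstract, so the template here is a placeholder.

```python
import numpy as np

def sad_localize(image, template):
    """Slide a template over the image and return the top-left corner
    of the window minimizing the sum of absolute differences (SAD)."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            sad = np.abs(image[y:y + th, x:x + tw] - template).sum()
            if sad < best:
                best, best_pos = sad, (y, x)
    return best_pos

img = np.random.rand(64, 64)
tpl = img[20:36, 24:40].copy()  # plant a known patch as the "LV"
print(sad_localize(img, tpl))   # -> (20, 24)
```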
Reevaluation of pollen quantitation by an automatic pollen counter.
Muradil, Mutarifu; Okamoto, Yoshitaka; Yonekura, Syuji; Chazono, Hideaki; Hisamitsu, Minako; Horiguchi, Shigetoshi; Hanazawa, Toyoyuki; Takahashi, Yukie; Yokota, Kunihiko; Okumura, Satoshi
2010-01-01
Accurate and detailed pollen monitoring is useful for selection of medication and for allergen avoidance in patients with allergic rhinitis. Burkard and Durham pollen samplers are commonly used, but are labor and time intensive. In contrast, automatic pollen counters allow simple real-time pollen counting; however, these instruments have difficulty in distinguishing pollen from small nonpollen airborne particles. Misidentification and underestimation rates for an automatic pollen counter were examined to improve the accuracy of the pollen count. The characteristics of the automatic pollen counter were determined in a chamber study with exposure to cedar pollens or soil grains. The cedar pollen counts were monitored in 2006 and 2007, and compared with those from a Durham sampler. The pollen counts from the automatic counter showed a good correlation (r > 0.7) with those from the Durham sampler when pollen dispersal was high, but a poor correlation (r < 0.5) when pollen dispersal was low. The new correction method, which took into account the misidentification and underestimation, improved this correlation to r > 0.7 during the pollen season. The accuracy of automatic pollen counting can be improved using a correction to include rates of underestimation and misidentification in a particular geographical area.
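The abstract does not give the published correction formula, so the sketch below shows one plausible form of a correction that uses site-specific misidentification and underestimation rates; treat both the formula and the rates as assumptions.

```python
def corrected_pollen_count(raw_count, misid_rate, underest_rate):
    """One plausible correction (an assumption, not the published
    formula): discard the fraction of automatic counts attributed to
    non-pollen particles, then scale up for pollen missed by the
    counter."""
    true_positives = raw_count * (1.0 - misid_rate)
    return true_positives / (1.0 - underest_rate)

# e.g. 30% of raw counts are non-pollen and 20% of pollen is missed
print(corrected_pollen_count(500, misid_rate=0.30, underest_rate=0.20))
```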
Automated Assessment of Child Vocalization Development Using LENA
ERIC Educational Resources Information Center
Richards, Jeffrey A.; Xu, Dongxin; Gilkerson, Jill; Yapanel, Umit; Gray, Sharmistha; Paul, Terrance
2017-01-01
Purpose: To produce a novel, efficient measure of children's expressive vocal development on the basis of automatic vocalization assessment (AVA), child vocalizations were automatically identified and extracted from audio recordings using Language Environment Analysis (LENA) System technology. Method: Assessment was based on full-day audio…
Wang, Qinghua; Ri, Shien; Tsuda, Hiroshi; Kodera, Masako; Suguro, Kyoichi; Miyashita, Naoto
2017-09-19
Quantitative detection of defects in atomic structures is of great significance to evaluating product quality and exploring quality improvement process. In this study, a Fourier transform filtered sampling Moire technique was proposed to visualize and detect defects in atomic arrays in a large field of view. Defect distributions, defect numbers and defect densities could be visually and quantitatively determined from a single atomic structure image at low cost. The effectiveness of the proposed technique was verified from numerical simulations. As an application, the dislocation distributions in a GaN/AlGaN atomic structure in two directions were magnified and displayed in Moire phase maps, and defect locations and densities were detected automatically. The proposed technique is able to provide valuable references to material scientists and engineers by checking the effect of various treatments for defect reduction. © 2017 IOP Publishing Ltd.
NASA Astrophysics Data System (ADS)
Xu, Z.; Zhu, L.; Sojka, J. J.; Kokoszka, P.; Jach, A.
2006-12-01
A wavelet-based index of storm activities (WISA) has recently been developed (Jach et al., 2006) to complement the traditional Dst index. The new index can be computed automatically using the wavelet-based statistical procedure, without human intervention in the selection of quiet days or the removal of secular variations. In addition, the WISA is flexible with respect to data stretch and has a higher temporal resolution (one minute), which can provide a better description of the dynamical variations of magnetic storms. In this work, we perform a systematic assessment of the WISA index. First, we statistically compare the WISA to the Dst for various quiet and disturbed periods and analyze the differences in their spectral features. Then we quantitatively assess the flexibility of the WISA on data stretch and study the effects of a varying number of stations on the index. In addition, how well the WISA can handle missing data is also quantitatively assessed. The assessment results show that the hourly-averaged WISA index can describe storm activities as well as the Dst index, but its full automation, high flexibility on data stretch, ease of using data from a varying number of stations, high temporal resolution, and high tolerance of missing data from individual stations can be very valuable and essential for real-time monitoring of the dynamical variations of magnetic storm activities and space weather applications, thus significantly complementing the existing Dst index. Jach, A., P. Kokoszka, J. Sojka, and L. Zhu, Wavelet-based index of magnetic storm activity, J. Geophys. Res., in press, 2006.
ERIC Educational Resources Information Center
Kerr, Deirdre; Mousavi, Hamid; Iseli, Markus R.
2013-01-01
The Common Core assessments emphasize short essay constructed-response items over multiple-choice items because they are more precise measures of understanding. However, such items are too costly and time consuming to be used in national assessments unless a way to score them automatically can be found. Current automatic essay-scoring techniques…
Elayavilli, Ravikumar Komandur; Liu, Hongfang
2016-01-01
Computational modeling of biological cascades is of great interest to quantitative biologists. Biomedical text has been a rich source of quantitative information. Gathering quantitative parameters and values from biomedical text is one significant challenge in the early steps of computational modeling, as it involves huge manual effort. While automatically extracting such quantitative information from biomedical text may offer some relief, the lack of an ontological representation for a subdomain impedes the normalization of textual extractions to a standard representation. This may render textual extractions less meaningful to domain experts. In this work, we propose a rule-based approach to automatically extract relations involving quantitative data from biomedical text describing ion channel electrophysiology. We further translated the quantitative assertions extracted through text mining to a formal representation that may help in constructing an ontology for ion channel events, using a rule-based approach. We have developed the Ion Channel ElectroPhysiology Ontology (ICEPO) by integrating the information represented in closely related ontologies, such as the Cell Physiology Ontology (CPO) and the Cardiac Electro Physiology Ontology (CPEO), and the knowledge provided by domain experts. The rule-based system achieved an overall F-measure of 68.93% in extracting quantitative data assertions on an independently annotated blind data set. We further made an initial attempt at formalizing the quantitative data assertions extracted from the biomedical text into a formal representation that offers potential to facilitate the integration of text mining into an ontological workflow, a novel aspect of this study. This work is a case study where we created a platform that provides formal interaction between ontology development and text mining. We have achieved partial success in extracting quantitative assertions from the biomedical text and formalizing them in an ontological framework. The ICEPO ontology is available for download at http://openbionlp.org/mutd/supplementarydata/ICEPO/ICEPO.owl.
Automatic, Multiple Assessment Options in Undergraduate Meteorology Education
ERIC Educational Resources Information Center
Kahl, Jonathan D. W.
2017-01-01
Since 2008, automatic, multiple assessment options have been utilised in selected undergraduate meteorology courses at the University of Wisconsin--Milwaukee. Motivated by a desire to reduce stress among students, the assessment methodology includes examination-heavy and homework-heavy alternatives, differing by an adjustable 15% of the overall…
Learning-based automatic detection of severe coronary stenoses in CT angiographies
NASA Astrophysics Data System (ADS)
Melki, Imen; Cardon, Cyril; Gogin, Nicolas; Talbot, Hugues; Najman, Laurent
2014-03-01
3D cardiac computed tomography angiography (CCTA) is becoming a standard routine for non-invasive heart disease diagnosis. Thanks to its high negative predictive value, CCTA is increasingly used to decide whether or not the patient should be considered for invasive angiography. However, an accurate assessment of cardiac lesions using this modality is still a time consuming task and needs a high degree of clinical expertise. Thus, providing automatic tools to assist clinicians during the diagnosis task is highly desirable. In this work, we propose a fully automatic approach for accurate severe cardiac stenosis detection. Our algorithm uses Random Forest classification to detect stenotic areas. First, the classifier is trained on 18 cardiac CT exams with a CTA reference standard. Then, the classification result is used to detect severe stenoses (with a narrowing degree higher than 50%) in a database of 30 cardiac CT exams. Features that best capture the different stenosis configurations are extracted along the vessel centerlines at different scales. To ensure robustness against vessel direction and scale changes, we extract features inside cylindrical patterns with variable directions and radii. Thus, we make sure that the ROIs contain only the vessel walls. The algorithm is evaluated using the Rotterdam Coronary Artery Stenoses Detection and Quantification Evaluation Framework. The evaluation is performed using reference standard quantifications obtained from quantitative coronary angiography (QCA) and consensus reading of CTA. The obtained results show that we can reliably detect severe stenoses with a sensitivity of 64%.
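A minimal sketch of the classification stage with scikit-learn follows; the feature vectors stand in for the cylindrical centerline features described above, and all shapes, values, and labels are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row stands in for features extracted inside cylindrical ROIs
# around one centerline point (several radii and orientations);
# label 1 marks a severe (>50%) stenosis. All values are synthetic.
rng = np.random.default_rng(42)
X_train = rng.normal(size=(1000, 24))
y_train = (X_train[:, 0] + rng.normal(size=1000)) > 1.0

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

X_new = rng.normal(size=(5, 24))
print(clf.predict_proba(X_new)[:, 1])  # per-point stenosis probability
```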
Automatic three-dimensional quantitative analysis for evaluation of facial movement.
Hontanilla, B; Aubá, C
2008-01-01
The aim of this study is to present a new 3D capture system for facial movements called FACIAL CLIMA. It is an automatic optical motion system that involves placing special reflecting dots on the subject's face and video recording, with three infrared-light cameras, the subject performing several facial movements such as smiling, mouth puckering, eye closure and forehead elevation. Images from the cameras are automatically processed with a software program that generates customised information such as 3D data on velocities and areas. The study has been performed in 20 healthy volunteers. The accuracy of the measurement process and the intrarater and interrater reliabilities have been evaluated. Comparison of a known distance and angle with those obtained by FACIAL CLIMA shows that this system is accurate to within 0.13 mm and 0.41 degrees. In conclusion, the accuracy of the FACIAL CLIMA system for the evaluation of facial movements is demonstrated, as well as its high intrarater and interrater reliability. It has advantages with respect to other systems that have been developed for the evaluation of facial movements, such as short calibration time, short measuring time and ease of use, and it provides not only distances but also velocities and areas. Thus the FACIAL CLIMA system could be considered an adequate tool to assess the outcome of facial paralysis reanimation surgery. In this way, patients with facial paralysis could be compared between surgical centres, so that the effectiveness of facial reanimation operations could be evaluated.
Graham, John D; Chang, Joice
2015-02-01
The use of table saws in the United States is associated with approximately 28,000 emergency department (ED) visits and 2,000 cases of finger amputation per year. This article provides a quantitative estimate of the economic benefits of automatic protection systems that could be designed into new table saw products. Benefits are defined as reduced health-care costs, enhanced production at work, and diminished pain and suffering. The present value of the benefits of automatic protection over the life of the table saw are interpreted as the switch-point cost value, the maximum investment in automatic protection that can be justified by benefit-cost comparison. Using two alternative methods for monetizing pain and suffering, the study finds switch-point cost values of $753 and $561 per saw. These point estimates are sensitive to the values of inputs, especially the average cost of injury. The various switch-point cost values are substantially higher than rough estimates of the incremental cost of automatic protection systems. Uncertainties and future research needs are discussed. © 2014 Society for Risk Analysis.
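The switch-point cost described above is the present value of the annual benefit stream over the saw's service life. A minimal sketch of that computation follows; the discount rate, lifetime, and annual benefit are illustrative values, not the article's inputs.

```python
def switch_point_cost(annual_benefit, years, discount_rate):
    """Present value of a constant annual benefit stream: the maximum
    per-saw investment in automatic protection justified by
    benefit-cost comparison."""
    return sum(annual_benefit / (1.0 + discount_rate) ** t
               for t in range(1, years + 1))

# Illustrative inputs only (not the article's)
print(round(switch_point_cost(annual_benefit=75.0, years=15,
                              discount_rate=0.03), 2))
```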
Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L
2013-03-13
With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and a principal-curvature-based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps or gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.
NASA Technical Reports Server (NTRS)
Bizzell, R. M.; Feiveson, A. H.; Hall, F. G.; Bauer, M. E.; Davis, B. J.; Malila, W. A.; Rice, D. P.
1975-01-01
The CITARS was an experiment designed to quantitatively evaluate crop identification performance for corn and soybeans in various environments using a well-defined set of automatic data processing (ADP) techniques. Each technique was applied to data acquired to recognize and estimate proportions of corn and soybeans. The CITARS documentation summarizes, interprets, and discusses the crop identification performances obtained using (1) different ADP procedures; (2) a linear versus a quadratic classifier; (3) prior probability information derived from historic data; (4) local versus nonlocal recognition training statistics and the associated use of preprocessing; (5) multitemporal data; (6) classification bias and mixed pixels in proportion estimation; and (7) data with different site characteristics, including crop, soil, atmospheric effects, and stages of crop maturity.
David, Simon; Visvikis, Dimitris; Quellec, Gwénolé; Le Rest, Catherine Cheze; Fernandez, Philippe; Allard, Michèle; Roux, Christian; Hatt, Mathieu
2012-09-01
In clinical oncology, positron emission tomography (PET) imaging can be used to assess therapeutic response by quantifying the evolution of semi-quantitative values such as standardized uptake value, early during treatment or after treatment. Current guidelines do not include metabolically active tumor volume (MATV) measurements and derived parameters such as total lesion glycolysis (TLG) to characterize the response to the treatment. To achieve automatic MATV variation estimation during treatment, we propose an approach based on the change detection principle using the recent paradoxical theory, which models imprecision, uncertainty, and conflict between sources. It was applied here simultaneously to pre- and post-treatment PET scans. The proposed method was applied to both simulated and clinical datasets, and its performance was compared to adaptive thresholding applied separately on pre- and post-treatment PET scans. On simulated datasets, the adaptive threshold was associated with significantly higher classification errors than the developed approach. On clinical datasets, the proposed method led to results more consistent with the known partial responder status of these patients. The method requires accurate rigid registration of both scans which can be obtained only in specific body regions and does not explicitly model uptake heterogeneity. In further investigations, the change detection of intra-MATV tracer uptake heterogeneity will be developed by incorporating textural features into the proposed approach.
Ohno, Yoshiharu; Koyama, Hisanobu; Nogami, Munenobu; Takenaka, Daisuke; Onishi, Yumiko; Matsumoto, Keiko; Matsumoto, Sumiaki; Maniwa, Yoshimasa; Yoshimura, Masahiro; Nishimura, Yoshihiro; Sugimura, Kazuro
2011-01-01
The purpose of this study was to compare the predictive capabilities for postoperative lung function in non-small cell lung cancer (NSCLC) patients of state-of-the-art radiological methods, including perfusion MRI, quantitative CT and SPECT/CT, with those of an anatomical method (i.e. qualitative CT) and traditional nuclear medicine methods such as planar imaging and SPECT. Perfusion MRI, CT, nuclear medicine studies and measurements of %FEV(1) before and after lung resection were performed for 229 NSCLC patients (125 men and 104 women). For perfusion MRI, postoperative %FEV(1) (po%FEV(1)) was predicted from semi-quantitatively assessed blood volumes within the total and resected lungs; for quantitative CT, from the functional lung volumes within the total and resected lungs; for qualitative CT, from the number of segments of the total and resected lungs; and for nuclear medicine studies, from uptakes within the total and resected lungs. All SPECTs were automatically co-registered with CTs for preparation of SPECT/CTs. Predicted po%FEV(1)s were then correlated with actual po%FEV(1)s, i.e. %FEV(1)s measured after the operation. The limits of agreement were also evaluated. All predicted po%FEV(1)s showed good correlation with actual po%FEV(1)s (0.83≤r≤0.88, p<0.0001). Perfusion MRI, quantitative CT and SPECT/CT demonstrated better correlation than the other methods. The limits of agreement of perfusion MRI (4.4±14.2%), quantitative CT (4.7±14.2%) and SPECT/CT (5.1±14.7%) were less than those of qualitative CT (6.0±17.4%), planar imaging (5.8±18.2%), and SPECT (5.5±16.8%). State-of-the-art radiological methods can predict postoperative lung function in NSCLC patients more accurately than traditional methods. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
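All of the compared methods share the usual proportional prediction model: postoperative %FEV1 is the preoperative value scaled by the fraction of functional contribution that remains after resection. A minimal sketch follows; the exact per-method implementation is not given in the abstract.

```python
def predicted_po_fev1(pre_fev1_pct, total_function, resected_function):
    """Predicted postoperative %FEV1 under the usual proportional
    model: preoperative %FEV1 scaled by the remaining fraction of
    'function', where function may be MR blood volume, CT functional
    lung volume, segment counts, or scintigraphic uptake."""
    remaining = (total_function - resected_function) / total_function
    return pre_fev1_pct * remaining

# e.g. preoperative %FEV1 of 85% and resection of 20% of total uptake
print(predicted_po_fev1(85.0, total_function=100.0, resected_function=20.0))
```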
An interactive program on digitizing historical seismograms
NASA Astrophysics Data System (ADS)
Xu, Yihe; Xu, Tao
2014-02-01
Retrieving information from analog seismograms is of great importance since they are considered the only sources that provide quantitative information on historical earthquakes. We present an algorithm that casts automatic digitization of seismograms as an inversion problem, implemented as an interactive program with a Matlab® GUI. The program integrates automatic digitization with manual digitization; users can easily switch between the two modalities and carry out different combinations of them for optimal results. Several examples of applying the interactive program are given to illustrate the merits of the method.
Barbosa, Jocelyn; Lee, Kyubum; Lee, Sunwon; Lodhi, Bilal; Cho, Jae-Gu; Seo, Woo-Keun; Kang, Jaewoo
2016-03-12
Facial palsy or paralysis (FP) is a symptom in which voluntary muscle movement is lost on one side of the human face, which can be devastating for patients. Traditional assessment methods are solely dependent on the clinician's judgment and are therefore time consuming and subjective in nature. Hence, a quantitative assessment system becomes invaluable for physicians beginning the rehabilitation process, and producing a reliable and robust method is challenging and still underway. We introduce a novel approach for the quantitative assessment of facial paralysis that tackles the classification problem for FP type and degree of severity. Specifically, a novel method of quantitative assessment is presented: an algorithm that extracts the human iris and detects facial landmarks, and a hybrid approach combining rule-based and machine learning algorithms to analyze and prognosticate facial paralysis using the captured images. A method combining the optimized Daugman's algorithm and the Localized Active Contour (LAC) model is proposed to efficiently extract the iris and facial landmarks or key points. To improve the performance of LAC, appropriate parameters of the initial evolving curve for facial feature segmentation are automatically selected. The symmetry score is measured by the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e. rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, for FP type classification, and for facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach, and experiments demonstrate its efficiency. Facial movement feature extraction based on iris segmentation and LAC-based key point detection, along with a hybrid classifier, provides a more efficient way of addressing the classification problem of facial palsy type and degree of severity. Combining iris segmentation and a key point-based method has several merits that are essential for our real application. Aside from the facial key points, iris segmentation provides a significant contribution, as it describes the changes in iris exposure while performing certain facial expressions. It reveals the significant difference between the healthy side and the severe palsy side when raising the eyebrows with both eyes directed upward, and can model the typical changes in the iris region.
Adding Automatic Evaluation to Interactive Virtual Labs
ERIC Educational Resources Information Center
Farias, Gonzalo; Muñoz de la Peña, David; Gómez-Estern, Fabio; De la Torre, Luis; Sánchez, Carlos; Dormido, Sebastián
2016-01-01
Automatic evaluation is a challenging field that has been addressed by the academic community in order to reduce the assessment workload. In this work we present a new element for the authoring tool Easy Java Simulations (EJS). This element, which is named automatic evaluation element (AEE), provides automatic evaluation to virtual and remote…
CART V: recent advancements in computer-aided camouflage assessment
NASA Astrophysics Data System (ADS)
Müller, Thomas; Müller, Markus
2011-05-01
In order to facilitate systematic, computer-aided improvements of camouflage and concealment assessment methods, the software system CART (Camouflage Assessment in Real-Time) was built for the camouflage assessment of objects in multispectral image sequences (see contributions to SPIE 2007-2010 [1], [2], [3], [4]). It comprises a semi-automatic marking of target objects (ground truth generation), including their propagation over the image sequence, and evaluation via user-defined feature extractors, as well as methods to assess an object's movement conspicuity. In this fifth part of an annual series at the SPIE conference in Orlando, this paper presents the enhancements made over the past year and addresses the camouflage assessment of static and moving objects in multispectral image data that can show noise or image artefacts. The presented methods explore the correlations between image processing and camouflage assessment. A novel algorithm based on template matching is presented to assess the structural inconspicuity of an object objectively and quantitatively. The results can easily be combined with an MTI (moving target indication) based movement conspicuity assessment function in order to explore the influence of object movement on the camouflage effect in different environments. As the results show, the presented methods provide a significant benefit in the field of camouflage assessment.
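The structural-inconspicuity idea above lends itself to a small sketch: correlate the target patch with the rest of the scene and call the target inconspicuous when similar structure recurs in the background. This is an illustrative stand-in under that assumption, not CART's published metric.

```python
import cv2
import numpy as np

def structural_conspicuity(image, target_bbox):
    """Illustrative stand-in for a template-matching conspicuity
    measure (not the published CART metric): a target whose structure
    recurs elsewhere in the scene scores as inconspicuous."""
    x, y, w, h = target_bbox
    patch = image[y:y + h, x:x + w]
    response = cv2.matchTemplate(image, patch, cv2.TM_CCOEFF_NORMED)
    ry0, ry1 = max(0, y - h), min(response.shape[0], y + h)
    rx0, rx1 = max(0, x - w), min(response.shape[1], x + w)
    response[ry0:ry1, rx0:rx1] = -1.0  # suppress the trivial self-match
    return 1.0 - response.max()        # high value = conspicuous

scene = np.random.rand(200, 200).astype(np.float32)
print(structural_conspicuity(scene, (50, 60, 32, 32)))
```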
3D registration of surfaces for change detection in medical images
NASA Astrophysics Data System (ADS)
Fisher, Elizabeth; van der Stelt, Paul F.; Dunn, Stanley M.
1997-04-01
Spatial registration of data sets is essential for quantifying changes that take place over time in cases where the position of a patient with respect to the sensor has been altered. Changes within the region of interest can be problematic for automatic methods of registration. This research addresses the problem of automatic 3D registration of surfaces derived from serial, single-modality images for the purpose of quantifying changes over time. The registration algorithm utilizes motion-invariant, curvature-based geometric properties to derive an approximation to an initial rigid transformation to align two image sets. Following the initial registration, changed portions of the surface are detected and excluded before refining the transformation parameters. The performance of the algorithm was tested using simulation experiments. To quantitatively assess the registration, random noise at various levels, known rigid motion transformations, and analytically-defined volume changes were applied to the initial surface data acquired from models of teeth. These simulation experiments demonstrated that the calculated transformation parameters were accurate to within 1.2 percent of the total applied rotation and 2.9 percent of the total applied translation, even at the highest applied noise levels and simulated wear values.
Automated Vocal Analysis of Children with Hearing Loss and Their Typical and Atypical Peers
VanDam, Mark; Oller, D. Kimbrough; Ambrose, Sophie E.; Gray, Sharmistha; Richards, Jeffrey A.; Xu, Dongxin; Gilkerson, Jill; Silbert, Noah H.; Moeller, Mary Pat
2014-01-01
Objectives This study investigated automatic assessment of vocal development in children with hearing loss as compared with children who are typically developing, have language delays, and autism spectrum disorder. Statistical models are examined for performance in a classification model and to predict age within the four groups of children. Design The vocal analysis system analyzed over 1900 whole-day, naturalistic acoustic recordings from 273 toddlers and preschoolers comprising children who were typically developing, hard of hearing, language delayed, or autistic. Results Samples from children who were hard-of-hearing patterned more similarly to those of typically-developing children than to the language-delayed or autistic samples. The statistical models were able to classify children from the four groups examined and estimate developmental age based on automated vocal analysis. Conclusions This work shows a broad similarity between children with hearing loss and typically developing children, although children with hearing loss show some delay in their production of speech. Automatic acoustic analysis can now be used to quantitatively compare vocal development in children with and without speech-related disorders. The work may serve to better distinguish among various developmental disorders and ultimately contribute to improved intervention. PMID:25587667
A convolutional neural network for intracranial hemorrhage detection in non-contrast CT
NASA Astrophysics Data System (ADS)
Patel, Ajay; Manniesing, Rashindra
2018-02-01
The assessment of the presence of intracranial hemorrhage is a crucial step in the work-up of patients requiring emergency care. Fast and accurate detection of intracranial hemorrhage can aid treating physicians by not only expediting and guiding diagnosis, but also supporting choices for secondary imaging, treatment and intervention. However, the automatic detection of intracranial hemorrhage is complicated by the variation in appearance on non-contrast CT images as a result of differences in etiology and location. We propose a method using a convolutional neural network (CNN) for the automatic detection of intracranial hemorrhage. The method is trained on a dataset comprised of cerebral CT studies for which the presence of hemorrhage has been labeled for each axial slice. A separate test dataset of 20 images is used for quantitative evaluation and shows a sensitivity of 0.87, specificity of 0.97 and accuracy of 0.95. The average processing time for a single three-dimensional (3D) CT volume was 2.7 seconds. The proposed method is capable of fast and automated detection of intracranial hemorrhages in non-contrast CT without being limited to a specific subtype of pathology.
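The sensitivity, specificity, and accuracy reported above follow directly from the per-image confusion counts; a minimal sketch (the counts below are illustrative, not the paper's):

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard detection metrics: sensitivity = TP / (TP + FN),
    specificity = TN / (TN + FP), accuracy = (TP + TN) / total."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts only; the study reports 0.87 / 0.97 / 0.95
print(detection_metrics(tp=13, fp=1, tn=32, fn=2))
```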
Scoring nuclear pleomorphism using a visual BoF modulated by a graph structure
NASA Astrophysics Data System (ADS)
Moncayo-Martínez, Ricardo; Romo-Bucheli, David; Arias, Viviana; Romero, Eduardo
2017-11-01
Nuclear pleomorphism has been recognized as a key histological criterion in breast cancer grading systems (such as the Bloom-Richardson and Nottingham grading systems). However, nuclear pleomorphism assessment is subjective and presents high inter-reader variability. Automatic algorithms might facilitate quantitative estimation of nuclear variations in shape and size. Nevertheless, the automatic segmentation of the nuclei is difficult and still an open research problem. This paper presents a method using a bag of multi-scale visual features, modulated by a graph structure, to grade nuclei in breast cancer microscopical fields. This strategy constructs hematoxylin-eosin image patches, each containing a nucleus that is represented by a set of visual words in the BoF. The contribution of each visual word is computed by examining the visual words in an associated graph built when projecting the multi-dimensional BoF to a bi-dimensional plane where local relationships are conserved. The methodology was evaluated using 14 breast cancer cases from the Cancer Genome Atlas database. From these cases, a set of 134 microscopical fields was extracted, and under a leave-one-out validation scheme, an average F-score of 0.68 was obtained.
The automatic component of habit in health behavior: habit as cue-contingent automaticity.
Orbell, Sheina; Verplanken, Bas
2010-07-01
Habit might be usefully characterized as a form of automaticity that involves the association of a cue and a response. Three studies examined habitual automaticity in regard to different aspects of the cue-response relationship characteristic of unhealthy and healthy habits. In each study, habitual automaticity was assessed by the Self-Report Habit Index (SRHI). In Study 1 SRHI scores correlated with attentional bias to smoking cues in a Stroop task. Study 2 examined the ability of a habit cue to elicit an unwanted habit response. In a prospective field study, habitual automaticity in relation to smoking when drinking alcohol in a licensed public house (pub) predicted the likelihood of cigarette-related action slips 2 months later after smoking in pubs had become illegal. In Study 3 experimental group participants formed an implementation intention to floss in response to a specified situational cue. Habitual automaticity of dental flossing was rapidly enhanced compared to controls. The studies provided three different demonstrations of the importance of cues in the automatic operation of habits. Habitual automaticity assessed by the SRHI captured aspects of a habit that go beyond mere frequency or consistency of the behavior. PsycINFO Database Record (c) 2010 APA, all rights reserved.
Das Neves Borges, Patricia; Vincent, Tonia L; Marenzana, Massimo
2017-01-01
The degradation of articular cartilage, which characterises osteoarthritis (OA), is usually paired with excessive bone remodelling, including subchondral bone sclerosis, cysts, and osteophyte formation. Experimental models of OA are widely used to investigate pathogenesis, yet few validated methodologies for assessing periarticular bone morphology exist and quantitative measurements are limited by manual segmentation of micro-CT scans. The aim of this work was to chart the temporal changes in periarticular bone in murine OA by novel, automated micro-CT methods. OA was induced by destabilisation of the medial meniscus (DMM) in 10-week old male mice and disease assessed cross-sectionally from 1- to 20-weeks post-surgery. A novel approach was developed to automatically segment subchondral bone compartments into plate and trabecular bone in micro-CT scans of tibial epiphyses. Osteophyte volume, as assessed by shape differences using 3D image registration, and by measuring total epiphyseal volume was performed. Significant linear and volumetric structural modifications in subchondral bone compartments and osteophytes were measured from 4-weeks post-surgery and showed progressive changes at all time points; by 20 weeks, medial subchondral bone plate thickness increased by 160±19.5 μm and the medial osteophyte grew by 0.124±0.028 μm3. Excellent agreement was found when automated measurements were compared with manual assessments. Our automated methods for assessing bone changes in murine periarticular bone are rapid, quantitative, and highly accurate, and promise to be a useful tool in future preclinical studies of OA progression and treatment. The current approaches were developed specifically for cross-sectional micro-CT studies but could be applied to longitudinal studies.
Computerized image analysis for quantitative neuronal phenotyping in zebrafish.
Liu, Tianming; Lu, Jianfeng; Wang, Ye; Campbell, William A; Huang, Ling; Zhu, Jinmin; Xia, Weiming; Wong, Stephen T C
2006-06-15
An integrated microscope image analysis pipeline is developed for automatic analysis and quantification of phenotypes in zebrafish with altered expression of Alzheimer's disease (AD)-linked genes. We hypothesize that a slight impairment of neuronal integrity in a large number of zebrafish carrying the mutant genotype can be detected through the computerized image analysis method. Key functionalities of our zebrafish image processing pipeline include quantification of neuron loss in zebrafish embryos due to knockdown of AD-linked genes, automatic detection of defective somites, and quantitative measurement of gene expression levels in zebrafish with altered expression of AD-linked genes or treatment with a chemical compound. These quantitative measurements enable the archival of analyzed results and relevant meta-data. The structured database is organized for statistical analysis and data modeling to better understand neuronal integrity and phenotypic changes of zebrafish under different perturbations. Our results show that the computerized analysis is comparable to manual counting with equivalent accuracy and improved efficacy and consistency. Development of such an automated data analysis pipeline represents a significant step forward to achieve accurate and reproducible quantification of neuronal phenotypes in large scale or high-throughput zebrafish imaging studies.
Find Pairs: The Module for Protein Quantification of the PeakQuant Software Suite
Eisenacher, Martin; Kohl, Michael; Wiese, Sebastian; Hebeler, Romano; Meyer, Helmut E.
2012-01-01
Abstract Accurate quantification of proteins is one of the major tasks in current proteomics research. To address this issue, a wide range of stable isotope labeling techniques have been developed, allowing one to quantitatively study thousands of proteins by means of mass spectrometry. In this article, the FindPairs module of the PeakQuant software suite is detailed. It facilitates the automatic determination of protein abundance ratios based on the automated analysis of stable isotope-coded mass spectrometric data. Furthermore, it implements statistical methods to determine outliers due to biological as well as technical variance of proteome data obtained in replicate experiments. This provides an important means to evaluate the significance in obtained protein expression data. For demonstrating the high applicability of FindPairs, we focused on the quantitative analysis of proteome data acquired in 14N/15N labeling experiments. We further provide a comprehensive overview of the features of the FindPairs software, and compare these with existing quantification packages. The software presented here supports a wide range of proteomics applications, allowing one to quantitatively assess data derived from different stable isotope labeling approaches, such as 14N/15N labeling, SILAC, and iTRAQ. The software is publicly available at http://www.medizinisches-proteom-center.de/software and free for academic use. PMID:22909347
Semi-Automatic Segmentation Software for Quantitative Clinical Brain Glioblastoma Evaluation
Zhu, Y; Young, G; Xue, Z; Huang, R; You, H; Setayesh, K; Hatabu, H; Cao, F; Wong, S.T.
2012-01-01
Rationale and Objectives Quantitative measurement provides essential information about disease progression and treatment response in patients with glioblastoma multiforme (GBM). The goal of this paper is to present and validate a software pipeline for semi-automatic GBM segmentation, called AFINITI (Assisted Follow-up in NeuroImaging of Therapeutic Intervention), using clinical data from GBM patients. Materials and Methods Our software adopts the current state-of-the-art tumor segmentation algorithms and combines them into one clinically usable pipeline. Both the advantages of the traditional voxel-based and the deformable shape-based segmentation are embedded into the software pipeline. The former provides an automatic tumor segmentation scheme based on T1- and T2-weighted MR brain data, and the latter refines the segmentation results with minimal manual input. Results Twenty-six clinical MR brain images of GBM patients were processed and compared with manual results. The results can be visualized using the embedded graphical user interface (GUI). Conclusion Validation results using clinical GBM data showed high correlation between the AFINITI results and manual annotation. Compared to voxel-wise segmentation, AFINITI yielded more accurate results in segmenting the enhanced GBM from multimodality MRI data. The proposed pipeline could be used as additional information to interpret MR brain images in neuroradiology. PMID:22591720
Qualitative and quantitative interpretation of SEM image using digital image processing.
Saladra, Dawid; Kopernik, Magdalena
2016-10-01
The aim of this study is to improve the qualitative and quantitative analysis of scanning electron microscope micrographs through the development of a computer program that enables automatic crack analysis of scanning electron microscopy (SEM) micrographs. Micromechanical tests of pneumatic ventricular assist devices result in a large number of micrographs; therefore, the analysis must be automatic. Tests for athrombogenic titanium nitride/gold coatings deposited on polymeric substrates (Bionate II) are performed. These tests include microshear, microtension and fatigue analysis. Anisotropic surface defects observed in the SEM micrographs require support for qualitative and quantitative interpretation. Improvement of the qualitative analysis of scanning electron microscope images was achieved by a set of computational tools that includes binarization, simplified expanding, expanding, simple image statistic thresholding, the Laplacian 1 and Laplacian 2 filters, Otsu thresholding and reverse binarization. Several modifications of the known image processing techniques and combinations of the selected image processing techniques were applied. The introduced quantitative analysis of digital scanning electron microscope images enables computation of stereological parameters such as area, crack angle, crack length, and total crack length per unit area. This study also compares the functionality of the developed computer program for digital image processing with existing applications. The described pre- and postprocessing may be helpful in scanning electron microscopy and transmission electron microscopy surface investigations. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
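As a companion to the crack-quantification idea above, here is a minimal sketch (ours, not the authors' program) that binarizes a synthetic micrograph with Otsu's threshold via scikit-image and estimates crack area fraction and total crack length per unit area from a skeletonized mask; the pixel size and the dark-on-bright crack assumption are illustrative.

```python
# Minimal sketch (not the authors' program): Otsu binarization of an SEM-like
# image followed by skeleton-based estimation of total crack length per unit
# area. Assumes cracks appear dark on a brighter coating.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def crack_metrics(image: np.ndarray, um_per_px: float) -> dict:
    """Return crack area fraction and total crack length per unit area."""
    t = threshold_otsu(image)
    cracks = image < t                       # dark pixels -> crack mask
    skeleton = skeletonize(cracks)           # 1-px-wide crack centrelines
    # Pixel counting approximates length; diagonal steps are undercounted
    # (a stereology-grade tool would weight diagonal steps by sqrt(2)).
    total_len_um = skeleton.sum() * um_per_px
    field_area_um2 = image.size * um_per_px ** 2
    return {
        "crack_area_fraction": float(cracks.mean()),
        "length_per_area_um_per_um2": total_len_um / field_area_um2,
    }

# Synthetic demo: a bright field with one dark diagonal crack.
img = np.full((256, 256), 200, dtype=np.uint8)
rr = np.arange(256)
img[rr, rr] = 30
print(crack_metrics(img, um_per_px=0.1))
```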
Feasibility of automatic evaluation of clinical rules in general practice.
Opondo, Dedan; Visscher, Stefan; Eslami, Saied; Medlock, Stephanie; Verheij, Robert; Korevaar, Joke C; Abu-Hanna, Ameen
2017-04-01
To assess the extent to which clinical rules (CRs) can be implemented for automatic evaluation of quality of care in general practice. We assessed 81 clinical rules (CRs), adapted from a subset of the Assessing Care of Vulnerable Elders (ACOVE) clinical rules, against the Dutch College of General Practitioners (NHG) data model. Each CR was analyzed using the Logical Elements Rule Method (LERM), a stepwise method of assessing and formalizing clinical rules for decision support. Clinical rules that satisfied the criteria outlined in the LERM method were judged to be implementable for automatic evaluation in general practice. Thirty-three out of 81 (40.7%) Dutch-translated ACOVE clinical rules can be automatically evaluated in electronic medical record systems. Seven out of 7 CRs (100%) in the domain of diabetes can be automatically evaluated, 9/17 (52.9%) in medication use, 5/10 (50%) in depression care, 3/6 (50%) in nutrition care, 6/13 (46.1%) in dementia care, 1/6 (16.6%) in end-of-life care, 2/13 (15.3%) in continuity of care, and 0/9 (0%) in fall-related care. Lack of documentation of care activities between primary and secondary health facilities and ambiguous formulation of clinical rules were the main reasons for the inability to automate the clinical rules. Approximately two-fifths of the primary care Dutch ACOVE-based clinical rules can be automatically evaluated. Clear definition of clinical rules, improved GP database design and electronic linkage of primary and secondary healthcare facilities can improve the prospects of automatic assessment of quality of care. These findings are relevant especially because the Netherlands has very high automation of primary care. Copyright © 2017 Elsevier B.V. All rights reserved.
Socioeconomic Impact Assessment of the Los Angeles Automatic Vehicle Monitoring (AVM) Demonstration
DOT National Transportation Integrated Search
1982-09-01
This report presents a socioeconomic impact assessment of the Automatic Vehicle Monitoring (AVM) Demonstration in Los Angeles. An AVM system uses location, communication, and data processing subsystems to monitor the locations of appropriately equipp...
Assessment of Automatic Fare Collection Equipment at Three European Transit Properties
DOT National Transportation Integrated Search
1982-12-01
This report is an assessment of automatic fare collection (AFC) equipment performance conducted at three European properties in accordance with procedures defined in the Property Evaluation Plan (PEP) developed by Input Output Computer Services, Inc....
SU-C-BRA-06: Automatic Brain Tumor Segmentation for Stereotactic Radiosurgery Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y; Stojadinovic, S; Jiang, S
Purpose: Stereotactic radiosurgery (SRS), which delivers a potent dose of highly conformal radiation to the target in a single fraction, requires accurate tumor delineation for treatment planning. We present an automatic segmentation strategy that synergizes intensity histogram thresholding, super-voxel clustering, and level-set based contour evolving methods to efficiently and accurately delineate SRS brain tumors on contrast-enhanced T1-weighted (T1c) Magnetic Resonance Images (MRI). Methods: The developed auto-segmentation strategy consists of three major steps. Firstly, tumor sites are localized through 2D slice intensity histogram scanning. Then, super voxels are obtained through clustering the corresponding voxels in 3D with reference to similarity metrics composited from spatial distance and intensity difference. The combination of the above two steps generates the initial contour surface. Finally, a localized region active contour model is utilized to evolve the surface to achieve the accurate delineation of the tumors. The developed method was evaluated on numerical phantom data, synthetic BRATS (Multimodal Brain Tumor Image Segmentation challenge) data, and clinical patients' data. The auto-segmentation results were quantitatively evaluated by comparing to ground truths with both volume and surface similarity metrics. Results: The DICE coefficient (DC) was used as a quantitative metric to evaluate the auto-segmentation in the numerical phantom with 8 tumors. DCs are 0.999±0.001 without noise, 0.969±0.065 with Rician noise and 0.976±0.038 with Gaussian noise. DC, NMI (Normalized Mutual Information), SSIM (Structural Similarity) and Hausdorff distance (HD) were calculated as the metrics for the BRATS and patients' data. Assessment of BRATS data across 25 tumor segmentations yielded DC 0.886±0.078, NMI 0.817±0.108, SSIM 0.997±0.002, and HD 6.483±4.079 mm. Evaluation of 8 patients with a total of 14 tumor sites yielded DC 0.872±0.070, NMI 0.824±0.078, SSIM 0.999±0.001, and HD 5.926±6.141 mm. Conclusion: The developed automatic segmentation strategy, which yields accurate brain tumor delineation in the evaluation cases, is promising for application in SRS treatment planning.
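The volume and surface similarity metrics reported above are easy to reproduce. The sketch below (an illustration, not the authors' code) computes the Dice coefficient and a symmetric Hausdorff distance between two binary 3D masks; the voxel spacing is an assumed parameter.

```python
# Illustrative only: Dice coefficient and symmetric Hausdorff distance
# between two boolean 3D masks defined on the same voxel grid.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DC = 2|A∩B| / (|A|+|B|); 1.0 means perfect overlap."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff_mm(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance between voxel point sets, in mm."""
    pa = np.argwhere(a) * np.asarray(spacing)
    pb = np.argwhere(b) * np.asarray(spacing)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Toy check: two overlapping spheres in a 64^3 volume, 1 mm isotropic voxels.
z, y, x = np.ogrid[:64, :64, :64]
s1 = (z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 <= 10 ** 2
s2 = (z - 32) ** 2 + (y - 32) ** 2 + (x - 34) ** 2 <= 10 ** 2
print(f"DC = {dice(s1, s2):.3f}, HD = {hausdorff_mm(s1, s2):.1f} mm")
```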
Jung, Chanho; Kim, Changick
2014-08-01
Automatic segmentation of cell nuclei clusters is a key building block in systems for quantitative analysis of microscopy cell images. For that reason, it has received great attention over the last decade, and diverse automatic approaches to segment clustered nuclei with varying levels of performance under different test conditions have been proposed in the literature. To the best of our knowledge, however, so far there has been no comparative study of these methods. This study is a first attempt to fill this research gap. More precisely, the purpose of this study is to present an objective performance comparison of existing state-of-the-art segmentation methods. In particular, the impact of their accuracy on the classification of thyroid follicular lesions is also investigated "quantitatively" under the same experimental conditions, to evaluate the applicability of the methods. Thirteen different segmentation approaches are compared in terms of not only errors in nuclei segmentation and delineation, but also their impact on the performance of a system to classify thyroid follicular lesions, using different metrics (e.g., diagnostic accuracy, sensitivity, specificity, etc.). Extensive experiments have been conducted on a total of 204 digitized thyroid biopsy specimens. Our study demonstrates that significant diagnostic errors can be avoided using more advanced segmentation approaches. We believe that this comprehensive comparative study serves as a reference point and guide for developers and practitioners in choosing an appropriate automatic segmentation technique for building automated systems to classify follicular thyroid lesions. © 2014 International Society for Advancement of Cytometry.
FlowerMorphology: fully automatic flower morphometry software.
Rozov, Sergey M; Deineko, Elena V; Deyneko, Igor V
2018-05-01
The software FlowerMorphology is designed for automatic morphometry of actinomorphic flowers. The novel complex parameters of flowers calculated by FlowerMorphology allowed us to quantitatively characterize a polyploid series of tobacco. Morphological differences between plants representing closely related lineages or mutants are mostly quantitative. Very often, there are only very fine variations in plant morphology. Therefore, accurate and high-throughput methods are needed for their quantification. In addition, new characteristics are necessary for reliable detection of subtle changes in morphology. FlowerMorphology is an all-in-one software package to automatically image and analyze five-petal actinomorphic flowers of dicotyledonous plants. Sixteen directly measured parameters and ten calculated complex parameters of a flower allow us to characterize variations with high accuracy. The program was developed for the needs of automatic characterization of Nicotiana tabacum flowers, but it is applicable to many other plants with five-petal actinomorphic flowers and can be adapted to flowers of other merosity. A genetically similar polyploid series of N. tabacum plants was used to investigate differences in flower morphology. For the first time, we could quantify the dependence between ploidy and the size and form of the tobacco flowers. We found that the radius of inner petal incisions shows a persistent positive correlation with the chromosome number. In contrast, a commonly used parameter, the radius of the outer corolla, does not discriminate between 2n and 4n plants. Other parameters show that polyploidy leads to significant aberrations in flower symmetry and are also positively correlated with chromosome number. Executables of FlowerMorphology, source code, documentation, and examples are available at the program website: https://github.com/Deyneko/FlowerMorphology .
Quantitative features in the computed tomography of healthy lungs.
Fromson, B H; Denison, D M
1988-01-01
This study set out to determine whether quantitative features of lung computed tomography scans could be identified that would lead to a tightly defined normal range for use in assessing patients. Fourteen normal subjects with apparently healthy lungs were studied. A technique was developed for rapid and automatic extraction of lung field data from the computed tomography scans. The Hounsfield unit histograms were constructed and, when normalised for predicted lung volumes, shown to be consistent in shape for all the subjects. A three-dimensional presentation of the data in the form of a "net plot" was devised, and from this a logarithmic relationship between the area of each lung slice and its mean density was derived (r = 0.9, n = 545, p less than 0.0001). The residual density, calculated as the difference between measured density and density predicted from the relationship with area, was shown to be normally distributed with a mean of 0 and a standard deviation of 25 Hounsfield units (chi-squared test: p less than 0.05). A presentation combining this residual density with the net plot is described. PMID:3353883
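The residual-density construction lends itself to a short worked example. The following sketch (ours; the synthetic coefficients and data are assumptions, not the paper's values) fits the logarithmic area-density relationship and computes residual densities whose spread mimics the reported 25 HU standard deviation.

```python
# Hedged sketch of the residual-density idea: regress each slice's mean
# density (HU) on the logarithm of its cross-sectional area, then express
# each slice as the residual from that fit. Synthetic data stand in for
# the paper's 545 slices; all coefficients are invented.
import numpy as np

rng = np.random.default_rng(0)
area_cm2 = rng.uniform(20.0, 200.0, size=545)          # lung-field slice areas
mean_hu = -600.0 + 40.0 * np.log(area_cm2) + rng.normal(0.0, 25.0, 545)

slope, intercept = np.polyfit(np.log(area_cm2), mean_hu, deg=1)
predicted = slope * np.log(area_cm2) + intercept
residual = mean_hu - predicted                         # "residual density"

print(f"fit: HU = {slope:.1f}*ln(area) + {intercept:.1f}")
print(f"residual density: mean {residual.mean():.1f} HU, "
      f"SD {residual.std(ddof=1):.1f} HU")             # ~0 and ~25 HU here
```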
Quantitative analysis of chromosome condensation in fission yeast.
Petrova, Boryana; Dehler, Sascha; Kruitwagen, Tom; Hériché, Jean-Karim; Miura, Kota; Haering, Christian H
2013-03-01
Chromosomes undergo extensive conformational rearrangements in preparation for their segregation during cell divisions. Insights into the molecular mechanisms behind this still poorly understood condensation process require the development of new approaches to quantitatively assess chromosome formation in vivo. In this study, we present a live-cell microscopy-based chromosome condensation assay in the fission yeast Schizosaccharomyces pombe. By automatically tracking the three-dimensional distance changes between fluorescently marked chromosome loci at high temporal and spatial resolution, we analyze chromosome condensation during mitosis and meiosis and deduce defined parameters to describe condensation dynamics. We demonstrate that this method can determine the contributions of condensin, topoisomerase II, and Aurora kinase to mitotic chromosome condensation. We furthermore show that the assay can identify proteins required for mitotic chromosome formation de novo by isolating mutants in condensin, DNA polymerase ε, and F-box DNA helicase I that are specifically defective in pro-/metaphase condensation. Thus, the chromosome condensation assay provides a direct and sensitive system for the discovery and characterization of components of the chromosome condensation machinery in a genetically tractable eukaryote.
Quantitative Analysis of Chromosome Condensation in Fission Yeast
Petrova, Boryana; Dehler, Sascha; Kruitwagen, Tom; Hériché, Jean-Karim; Miura, Kota
2013-01-01
Chromosomes undergo extensive conformational rearrangements in preparation for their segregation during cell divisions. Insights into the molecular mechanisms behind this still poorly understood condensation process require the development of new approaches to quantitatively assess chromosome formation in vivo. In this study, we present a live-cell microscopy-based chromosome condensation assay in the fission yeast Schizosaccharomyces pombe. By automatically tracking the three-dimensional distance changes between fluorescently marked chromosome loci at high temporal and spatial resolution, we analyze chromosome condensation during mitosis and meiosis and deduce defined parameters to describe condensation dynamics. We demonstrate that this method can determine the contributions of condensin, topoisomerase II, and Aurora kinase to mitotic chromosome condensation. We furthermore show that the assay can identify proteins required for mitotic chromosome formation de novo by isolating mutants in condensin, DNA polymerase ε, and F-box DNA helicase I that are specifically defective in pro-/metaphase condensation. Thus, the chromosome condensation assay provides a direct and sensitive system for the discovery and characterization of components of the chromosome condensation machinery in a genetically tractable eukaryote. PMID:23263988
Automatic metro map layout using multicriteria optimization.
Stott, Jonathan; Rodgers, Peter; Martínez-Ovando, Juan Carlos; Walker, Stephen G
2011-01-01
This paper describes an automatic mechanism for drawing metro maps. We apply multicriteria optimization to find effective placement of stations with a good line layout and to label the map unambiguously. A number of metrics are defined, which are used in a weighted sum to find a fitness value for a layout of the map. A hill climbing optimizer is used to reduce the fitness value and find improved map layouts. To avoid local minima, we apply clustering techniques to the map: the hill climber moves both stations and clusters when finding improved layouts. We show the method applied to a number of metro maps, and describe an empirical study that provides some quantitative evidence that automatically-drawn metro maps can help users to find routes more efficiently than either published maps or undistorted maps. Moreover, we have found that, in these cases, study subjects indicate a preference for automatically-drawn maps over the alternatives. © 2011 IEEE. Published by the IEEE Computer Society.
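The weighted-sum fitness plus hill climbing at the core of such layout engines fits in a few lines. Below is a toy sketch (our illustration; the two metrics and their weights are invented, not the paper's) that nudges 2D station positions to equalize edge lengths while penalizing stations that crowd together.

```python
# Toy weighted-sum, hill-climbing layout: stations are 2D points; two
# criteria (edge-length uniformity and a minimum-separation penalty) are
# combined into one fitness value that the climber reduces move by move.
import numpy as np

rng = np.random.default_rng(1)
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]       # a small line of 5 stations
pos = rng.uniform(0, 10, size=(5, 2))
W_UNIFORM, W_SEP = 1.0, 5.0                    # criterion weights (invented)

def fitness(p):
    lengths = [np.linalg.norm(p[a] - p[b]) for a, b in edges]
    uniform = np.var(lengths)                  # prefer equal edge lengths
    d = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    sep = np.maximum(0.0, 1.0 - d).sum()       # penalize stations < 1 apart
    return W_UNIFORM * uniform + W_SEP * sep

best = fitness(pos)
for _ in range(20000):                         # hill climbing: keep improving moves
    i = rng.integers(len(pos))
    old = pos[i].copy()
    pos[i] += rng.normal(0, 0.3, size=2)
    f = fitness(pos)
    if f < best:
        best = f
    else:
        pos[i] = old                           # reject worsening move
print(f"final fitness: {best:.4f}")
```

The paper's clustering refinement would additionally move whole groups of stations at once to escape local minima.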
Nowinski, Wieslaw L; Thirunavuukarasuu, Arumugam; Ananthasubramaniam, Anand; Chua, Beng Choon; Qian, Guoyu; Nowinska, Natalia G; Marchenko, Yevgen; Volkau, Ihar
2009-10-01
Preparation of tests and assessment of students by the instructor are time-consuming. We address these two tasks in neuroanatomy education by employing a digital media application with a three-dimensional (3D), interactive, fully segmented, and labeled brain atlas. The anatomical and vascular models in the atlas are linked to Terminologia Anatomica. Because the cerebral models are fully segmented and labeled, our approach enables automatic and random atlas-derived generation of questions to test the location and naming of cerebral structures. This is done in four steps: test individualization by the instructor, test taking by the students at their convenience, automatic student assessment by the application, and communication of the individual assessment to the instructor. A computer-based application with an interactive 3D atlas and a preliminary mobile-based application were developed to realize this approach. The application works in two test modes: instructor and student. In the instructor mode, the instructor customizes the test by setting the scope of testing and the student performance criteria, which takes a few seconds. In the student mode, the student is tested and automatically assessed. Self-testing is also feasible at any time and pace. Our approach is automatic both with respect to test generation and student assessment. It is also objective, rapid, and customizable. We believe that this approach is novel from computer-based, mobile-based, and atlas-assisted standpoints.
Masked Priming Effects in Aphasia: Evidence for Altered Automatic Spreading Activation
Silkes, JoAnn P.; Rogers, Margaret A.
2015-01-01
Purpose Previous research has suggested that impairments of automatic spreading activation may underlie some aphasic language deficits. This study further investigated the status of automatic spreading activation in individuals with aphasia as compared with typical adults. Method Participants were 21 individuals with aphasia (12 fluent, 9 non-fluent) and 31 typical adults. Reaction time data were collected on a lexical decision task with masked repetition primes, assessed at 11 different interstimulus intervals (ISIs). Masked primes were used to assess automatic spreading activation without the confound of conscious processing. The various ISIs were used to assess the time to onset, and duration, of priming effects. Results The control group showed maximal priming in the 200 ms ISI condition, with significant priming at a range of ISIs surrounding that peak. Participants with both fluent and non-fluent aphasia showed maximal priming effects in the 250 ms ISI condition, and primed across a smaller range of ISIs than the control group. Conclusions Results suggest that individuals with aphasia have slowed automatic spreading activation, and impaired maintenance of activation over time, regardless of fluency classification. These findings have implications for understanding aphasic language impairment and for the development of aphasia treatments designed to directly address automatic language processes. PMID:22411281
Masked priming effects in aphasia: evidence of altered automatic spreading activation.
Silkes, JoAnn P; Rogers, Margaret A
2012-12-01
Previous research has suggested that impairments of automatic spreading activation may underlie some aphasic language deficits. The current study further investigated the status of automatic spreading activation in individuals with aphasia as compared with typical adults. Participants were 21 individuals with aphasia (12 fluent, 9 nonfluent) and 31 typical adults. Reaction time data were collected on a lexical decision task with masked repetition primes, assessed at 11 different interstimulus intervals (ISIs). Masked primes were used to assess automatic spreading activation without the confound of conscious processing. The various ISIs were used to assess the time to onset and duration of priming effects. The control group showed maximal priming in the 200-ms ISI condition, with significant priming at a range of ISIs surrounding that peak. Participants with both fluent and nonfluent aphasia showed maximal priming effects in the 250-ms ISI condition and primed across a smaller range of ISIs than did the control group. Results suggest that individuals with aphasia have slowed automatic spreading activation and impaired maintenance of activation over time, regardless of fluency classification. These findings have implications for understanding aphasic language impairment and for development of aphasia treatments designed to directly address automatic language processes.
NASA Astrophysics Data System (ADS)
Wiemker, Rafael; Sevenster, Merlijn; MacMahon, Heber; Li, Feng; Dalal, Sandeep; Tahmasebi, Amir; Klinder, Tobias
2017-03-01
The imaging biomarkers EmphysemaPresence and NoduleSpiculation are crucial inputs for most models aiming to predict the risk of indeterminate pulmonary nodules detected at CT screening. To increase reproducibility and to accelerate the screening workflow, it is desirable to assess these biomarkers automatically. Validation on NLST images indicates that standard histogram measures are not sufficient to assess EmphysemaPresence in screenees. However, automatic scoring of bulla-resembling low-attenuation areas can achieve agreement with experts with close to 80% sensitivity and specificity. NoduleSpiculation can be automatically assessed with similar accuracy. We find a dedicated spiculi-tracing score to slightly outperform generic combinations of texture features with classifiers.
NASA Technical Reports Server (NTRS)
1983-01-01
This report summarizes the results of a study conducted by Engineering and Economics Research (EER), Inc. under NASA Contract Number NAS5-27513. The study involved the development of preliminary concepts for automatic and semiautomatic quality assurance (QA) techniques for ground image processing. A distinction is made between quality assessment and the more comprehensive quality assurance which includes decision making and system feedback control in response to quality assessment.
Laparoscopic training using a quantitative assessment and instructional system.
Yamaguchi, T; Nakamura, R
2018-04-28
Laparoscopic surgery requires complex surgical skills; hence, surgeons require regular training to improve their surgical techniques. The quantitative assessment of a surgeon's skills and the provision of feedback are important processes for conducting effective training. The aim of this study was to develop an inexpensive training system that provides automatic technique evaluation and feedback. We detected the instrument using image processing of commercial web camera images and calculated the motion analysis parameters (MAPs) of the instrument to quantify performance features. Based on these results, we developed a method of evaluating the surgeon's skill level. The feedback system was developed using MAPs-based radar charts and scores for determining the skill level. These methods were evaluated using videos of 38 surgeons performing a suturing task. There were significant differences in MAPs among surgeons; therefore, MAPs can be effectively used to quantify a surgeon's performance features. The results of skill evaluation and feedback differed greatly between skilled and unskilled surgeons, and it was possible to indicate points of improvement for the procedure performed in this study. Furthermore, the results obtained for certain novice surgeons were similar to those obtained for skilled surgeons. This system can be used to assess the skill level of surgeons, independent of years of experience, and to provide an effective understanding of an individual's current surgical skill level. We conclude that our system is useful as an inexpensive laparoscopic training system that might aid in skill improvement.
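The abstract does not enumerate its MAPs, so the following sketch is only a hedged illustration: three motion features often derived from a tracked instrument-tip trajectory (path length, mean speed, and a jerk-based smoothness proxy), computed from positions sampled at a fixed camera frame rate.

```python
# Illustrative motion-analysis parameters from a tracked 2D tool-tip path;
# these feature definitions are ours, not necessarily the authors' MAPs.
import numpy as np

def motion_params(xy: np.ndarray, fps: float) -> dict:
    dt = 1.0 / fps
    step = np.linalg.norm(np.diff(xy, axis=0), axis=1)   # per-frame displacement
    vel = step / dt                                      # speed per frame
    jerk = np.diff(vel, n=2) / dt ** 2                   # 2nd diff of speed:
    return {                                             # a crude jerk proxy
        "path_length": step.sum(),
        "mean_speed": vel.mean(),
        "rms_jerk": np.sqrt(np.mean(jerk ** 2)),         # smoothness score
        "duration_s": (len(xy) - 1) * dt,
    }

# Demo: a noisy circular tool path sampled at 30 frames per second.
t = np.linspace(0, 2 * np.pi, 300)
xy = np.c_[np.cos(t), np.sin(t)] + np.random.default_rng(2).normal(0, 0.01, (300, 2))
print(motion_params(xy, fps=30.0))
```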
Pionnier, Raphaël; Découfour, Nicolas; Barbier, Franck; Popineau, Christophe; Simoneau-Buessinger, Emilie
2016-03-01
The purpose of this study was to quantitatively and qualitatively assess dynamic balance with accuracy in individuals with chronic ankle instability (CAI). To this aim, a motion capture system was used while participants performed the Star Excursion Balance Test (SEBT). Reached distances for the 8 points of the star were automatically computed, thereby excluding any dependence on the experimenter. In addition, new relevant variables were also computed, such as the absolute time needed to reach each distance, lower limb ranges of motion during unipodal stance, as well as the absolute error of pointing. The velocity of the center of pressure and the range of variation of ground reaction forces were also assessed during the unipodal phase of the SEBT using force plates. The CAI group exhibited smaller reached distances and a greater absolute error of pointing than the control group (p<0.05). Moreover, the ranges of motion of the lower limb joints, the velocity of the center of pressure and the range of variation of the ground reaction forces were all significantly smaller in the CAI group (p<0.05). These reduced quantitative and qualitative performances highlighted a lower dynamic postural control. The limited body movements and accelerations during the unipodal stance in the CAI group could indicate a protective strategy. The present findings could help clinicians to better understand the motor strategies used by CAI patients during dynamic balance and may guide the rehabilitation process. Copyright © 2016 Elsevier B.V. All rights reserved.
Sihong Chen; Jing Qin; Xing Ji; Baiying Lei; Tianfu Wang; Dong Ni; Jie-Zhi Cheng
2017-03-01
The gap between computational and semantic features is one of the major factors that keeps computer-aided diagnosis (CAD) performance from clinical usage. To bridge this gap, we exploit three multi-task learning (MTL) schemes to leverage heterogeneous computational features derived from deep learning models, namely a stacked denoising autoencoder (SDAE) and a convolutional neural network (CNN), as well as hand-crafted Haar-like and HoG features, for the description of 9 semantic features of lung nodules in CT images. We posit that relations may exist among the semantic features of "spiculation", "texture", "margin", etc., that can be explored with MTL. The Lung Image Database Consortium (LIDC) data is adopted in this study for its rich annotation resources. The LIDC nodules were quantitatively scored w.r.t. 9 semantic features by 12 radiologists from several institutes in the U.S.A. By treating each semantic feature as an individual task, the MTL schemes select and map the heterogeneous computational features toward the radiologists' ratings, with cross-validation evaluation schemes on 2400 nodules randomly selected from the LIDC dataset. The experimental results suggest that the predicted semantic scores from the three MTL schemes are closer to the radiologists' ratings than the scores from single-task LASSO and elastic net regression methods. The proposed semantic attribute scoring scheme may provide richer quantitative assessments of nodules for better support of diagnostic decisions and management. Meanwhile, the capability of automatically associating medical image contents with clinical semantic terms by our method may also assist the development of medical search engines.
Paavolainen, Lassi; Acar, Erman; Tuna, Uygar; Peltonen, Sari; Moriya, Toshio; Soonsawad, Pan; Marjomäki, Varpu; Cheng, R Holland; Ruotsalainen, Ulla
2014-01-01
Electron tomography (ET) of biological samples is used to study the organization and the structure of the whole cell and subcellular complexes in great detail. However, projections cannot be acquired over the full tilt angle range with biological samples in electron microscopy. ET image reconstruction can be considered an ill-posed problem because of this missing information. This results in artifacts, seen as the loss of three-dimensional (3D) resolution in the reconstructed images. The goal of this study was to achieve isotropic resolution with a statistical reconstruction method, sequential maximum a posteriori expectation maximization (sMAP-EM), using no prior morphological knowledge about the specimen. The missing wedge effects on sMAP-EM were examined with a synthetic cell phantom to assess the effects of noise. An experimental dataset of a multivesicular body was evaluated with a number of gold particles. An ellipsoid-fitting-based method was developed to obtain the quantitative measures of elongation and contrast in an automated, objective, and reliable way. The method statistically evaluates the sub-volumes containing gold particles randomly located in various parts of the whole volume, thus giving information about the robustness of the volume reconstruction. The quantitative results were also compared with reconstructions made with the widely used weighted backprojection and simultaneous iterative reconstruction technique methods. The results showed that the proposed sMAP-EM method significantly suppresses the effects of the missing information, producing isotropic resolution. Furthermore, this method improves the contrast ratio, enhancing the applicability of further automatic and semi-automatic analysis. These improvements in ET reconstruction by sMAP-EM enable analysis of subcellular structures with higher three-dimensional resolution and contrast than conventional methods.
Tracking the hyoid bone in videofluoroscopic swallowing studies
NASA Astrophysics Data System (ADS)
Kellen, Patrick M.; Becker, Darci; Reinhardt, Joseph M.; van Daele, Douglas
2008-03-01
Difficulty swallowing, or dysphagia, has become a growing problem. Swallowing complications can lead to malnutrition, dehydration, respiratory infection, and even death. The current gold standard for analyzing and diagnosing dysphagia is the videofluoroscopic barium swallow study. In these studies, a fluoroscope is used to image the patient ingesting barium solutions of different volumes and viscosities. The hyoid bone anchors many key muscles involved in swallowing and plays a key role in the process. Abnormal hyoid bone motion during a swallow can indicate swallowing dysfunction. Currently in clinical settings, hyoid bone motion is assessed qualitatively, which can be subject to intra-rater and inter-rater bias. This paper presents a semi-automatic method for tracking the hyoid bone that makes quantitative analysis feasible. The user defines a template of the hyoid on one frame, and this template is tracked across subsequent frames. The matching phase is optimized by predicting the position of the template based on kinematics. An expert speech pathologist marked the position of the hyoid on each frame of ten studies to serve as the gold standard. Results from performing Bland-Altman analysis at a 95% confidence interval showed a bias of 0.0+/-0.08 pixels in x and -0.08+/-0.09 pixels in y between the manually-defined gold standard and the proposed method. The average Pearson's correlation between the gold standard and the proposed method was 0.987 in x and 0.980 in y. This paper also presents a method for automatically establishing a patient-centric coordinate system for the interpretation of hyoid motion. This coordinate system corrects for upper body patient motion during the study and identifies superior-inferior and anterior-posterior motion components. These tools make the use of quantitative hyoid motion analysis feasible in clinical and research settings.
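The Bland-Altman agreement statistics used here are simple to compute. A minimal sketch (ours; the positions are made up) returning the bias and 95% limits of agreement between expert-marked and tracker-reported coordinates:

```python
# Bland-Altman agreement between manual and automatic measurements:
# bias (mean difference) and 95% limits of agreement, on synthetic 1D data.
import numpy as np

def bland_altman(manual: np.ndarray, auto: np.ndarray):
    diff = auto - manual
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)            # half-width of 95% limits
    return bias, bias - loa, bias + loa

rng = np.random.default_rng(3)
manual = rng.uniform(100, 200, 50)           # expert-marked x positions (px)
auto = manual + rng.normal(0.0, 0.5, 50)     # tracker output with small noise
bias, lo, hi = bland_altman(manual, auto)
print(f"bias {bias:+.2f} px, limits of agreement [{lo:+.2f}, {hi:+.2f}] px")
```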
An unsupervised approach for measuring myocardial perfusion in MR image sequences
NASA Astrophysics Data System (ADS)
Discher, Antoine; Rougon, Nicolas; Preteux, Francoise
2005-08-01
Quantitatively assessing myocardial perfusion is a key issue for the diagnosis, therapeutic planning and patient follow-up of cardio-vascular diseases. To this end, perfusion MRI (p-MRI) has emerged as a valuable clinical investigation tool thanks to its ability of dynamically imaging the first pass of a contrast bolus in the framework of stress/rest exams. However, reliable techniques for automatically computing regional first pass curves from 2D short-axis cardiac p-MRI sequences remain to be elaborated. We address this problem and develop an unsupervised four-step approach comprising: (i) a coarse spatio-temporal segmentation step, allowing to automatically detect a region of interest for the heart over the whole sequence, and to select a reference frame with maximal myocardium contrast; (ii) a model-based variational segmentation step of the reference frame, yielding a bi-ventricular partition of the heart into left ventricle, right ventricle and myocardium components; (iii) a respiratory/cardiac motion artifacts compensation step using a novel region-driven intensity-based non rigid registration technique, allowing to elastically propagate the reference bi-ventricular segmentation over the whole sequence; (iv) a measurement step, delivering first-pass curves over each region of a segmental model of the myocardium. The performance of this approach is assessed over a database of 15 normal and pathological subjects, and compared with perfusion measurements delivered by a MRI manufacturer software package based on manual delineations by a medical expert.
Validation of automatic segmentation of ribs for NTCP modeling.
Stam, Barbara; Peulen, Heike; Rossi, Maddalena M G; Belderbos, José S A; Sonke, Jan-Jakob
2016-03-01
Determination of a dose-effect relation for rib fractures in a large patient group has been limited by the time-consuming manual delineation of ribs. Automatic segmentation could facilitate such an analysis. We determine the accuracy of automatic rib segmentation in the context of normal tissue complication probability (NTCP) modeling. Forty-one patients with stage I/II non-small cell lung cancer treated with SBRT to 54 Gy in 3 fractions were selected. Using the 4DCT-derived mid-ventilation planning CT, all ribs were manually contoured and automatically segmented. Accuracy of segmentation was assessed using volumetric, shape and dosimetric measures. Manual and automatic dosimetric parameters Dx and EUD were tested for equivalence using the Two One-Sided T-test (TOST), and assessed for agreement using Bland-Altman analysis. NTCP models based on manual and automatic segmentation were compared. Automatic segmentation was comparable with the manual delineation in the radial direction, but larger near the costal cartilage and vertebrae. Manual and automatic Dx and EUD were significantly equivalent. The Bland-Altman analysis showed good agreement. The two NTCP models were very similar. Automatic rib segmentation was significantly equivalent to manual delineation and can be used for NTCP modeling in a large patient group. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
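The TOST equivalence test named above can be sketched directly. In the paired version below (our illustration; the equivalence margin delta is an assumption, since the abstract does not state the paper's margin), equivalence is declared when both one-sided tests reject at the chosen alpha.

```python
# Paired Two One-Sided T-test (TOST): is |mean(x - y)| < delta?
import numpy as np
from scipy import stats

def paired_tost(x: np.ndarray, y: np.ndarray, delta: float) -> float:
    d = x - y
    n = len(d)
    se = d.std(ddof=1) / np.sqrt(n)
    t_lower = (d.mean() + delta) / se        # H0: mean <= -delta
    t_upper = (d.mean() - delta) / se        # H0: mean >= +delta
    p_lower = stats.t.sf(t_lower, df=n - 1)
    p_upper = stats.t.cdf(t_upper, df=n - 1)
    return max(p_lower, p_upper)             # both must fall below alpha

rng = np.random.default_rng(4)
manual = rng.normal(30.0, 5.0, 41)           # e.g. manual EUD per patient (Gy)
auto = manual + rng.normal(0.0, 0.4, 41)     # automatic counterpart
p = paired_tost(manual, auto, delta=1.0)     # 1 Gy margin (assumed)
print(f"TOST p = {p:.4f}  ->  equivalent at alpha=0.05: {p < 0.05}")
```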
ERIC Educational Resources Information Center
Crossley, Scott A.; Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S.
2016-01-01
This study investigates a novel approach to automatically assessing essay quality that combines natural language processing approaches that assess text features with approaches that assess individual differences in writers such as demographic information, standardized test scores, and survey results. The results demonstrate that combining text…
ECG artifact cancellation in surface EMG signals by fractional order calculus application.
Miljković, Nadica; Popović, Nenad; Djordjević, Olivera; Konstantinović, Ljubica; Šekara, Tomislav B
2017-03-01
New aspects of automatic electrocardiography (ECG) artifact removal from surface electromyography signals by application of fractional order calculus in combination with linear and nonlinear moving-window filters are explored. Surface electromyography recordings of skeletal trunk muscles are commonly contaminated with spike-shaped artifacts. This artifact originates from the electrical activity of the heart, recorded as the electrocardiogram, which is commonly present in surface electromyography signals recorded in proximity to the heart. For appropriate assessment of neuromuscular changes by means of surface electromyography, application of a proper filtering technique for the electrocardiography artifact is crucial. A novel method for automatic artifact cancellation in surface electromyography signals by applying fractional order calculus and a nonlinear median filter is introduced. The proposed method is compared with the linear moving average filter, with and without prior application of fractional order calculus. 3D graphs for assessment of the window lengths of the filters, crest factors, root mean square differences, and fractional calculus orders (called WFC and WRC graphs) are introduced. For an appropriate quantitative filtering evaluation, a synthetic electrocardiography signal and an analogous semi-synthetic dataset were generated. Examples of noise removal in 10 able-bodied subjects and in one patient with muscle dystrophy are presented for qualitative analysis. The crest factors, correlation coefficients, and root mean square differences of the recorded and semi-synthetic electromyography datasets showed that the most successful method was the median filter in combination with fractional order calculus of order 0.9. Statistically more significant (p < 0.001) ECG peak reduction was obtained by the median filter compared to the moving average filter in cases where the amplitude of muscle contraction was low compared to the ECG spikes. The presented results suggest that the novel method combining a median filter and fractional order calculus can be used for automatic filtering of electrocardiography artifacts in the surface electromyography signal envelopes recorded from trunk muscles. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
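To make the two ingredients concrete, here is a rough sketch (ours, not the authors' implementation) of a Grünwald-Letnikov fractional-order derivative of order 0.9 followed by a moving-median filter, applied to a toy spike-contaminated envelope; the sampling rate, window lengths, and signal model are all assumptions.

```python
# Grünwald-Letnikov fractional derivative + median filter on a toy signal
# contaminated by QRS-like spikes (illustrative, not the paper's pipeline).
import numpy as np
from scipy.signal import medfilt

def gl_fractional_diff(x: np.ndarray, alpha: float, n_terms: int = 64):
    """Truncated GL derivative: y[n] = sum_k w_k x[n-k], step h = 1 sample."""
    w = np.empty(n_terms)
    w[0] = 1.0
    for k in range(1, n_terms):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)   # GL weight recursion
    return np.convolve(x, w)[: len(x)]                # causal FIR filtering

fs = 1000                                             # Hz (assumed)
t = np.arange(0, 5.0, 1.0 / fs)
rng = np.random.default_rng(5)
emg = 0.1 * rng.normal(size=t.size)                   # toy sEMG baseline
spikes = np.zeros_like(t)
spikes[::fs] = 2.0                                    # 1 Hz QRS-like artifact
x = emg + np.convolve(spikes, np.hanning(41), mode="same")

envelope = np.abs(x)
frac = np.abs(gl_fractional_diff(envelope, alpha=0.9))  # order-0.9 pre-filter
cleaned = medfilt(frac, kernel_size=101)              # window longer than the
print(f"envelope peak {envelope.max():.2f} -> "       # spike rejects it
      f"filtered peak {cleaned.max():.2f}")
```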
Ito, Sana; Morita, Masaki
2016-01-01
Quantitative analysis of nitrilotriacetate (NTA) in detergents by titration with Cu2+ solution using a copper ion-selective electrode was achieved. This method tolerates a wide range of pH values and detergent ingredients. In addition to NTA, other chelating agents having relatively lower stability constants toward Cu2+ were also quantified with sufficient accuracy by this analytical method for model detergent formulations. The titration process was automated using commercially available automatic titration systems.
Automated Quantitative Nuclear Cardiology Methods
Motwani, Manish; Berman, Daniel S.; Germano, Guido; Slomka, Piotr J.
2016-01-01
Quantitative analysis of SPECT and PET has become a major part of nuclear cardiology practice. Current software tools can automatically segment the left ventricle, quantify function, establish myocardial perfusion maps and estimate global and local measures of stress/rest perfusion – all with minimal user input. State-of-the-art automated techniques have been shown to offer high diagnostic accuracy for detecting coronary artery disease, as well as predict prognostic outcomes. This chapter briefly reviews these techniques, highlights several challenges and discusses the latest developments. PMID:26590779
Arku, Raphael E; Birch, Aaron; Shupler, Matthew; Yusuf, Salim; Hystad, Perry; Brauer, Michael
2018-05-01
Household air pollution (HAP) from combustion of solid fuels is an important contributor to disease burden in low- and middle-income countries (LICs and MICs). However, current HAP disease burden estimates are based on integrated exposure response curves that are not currently informed by quantitative HAP studies in LICs and MICs. While there is adequate evidence supporting causal relationships between HAP and respiratory disease, large cohort studies specifically examining relationships between quantitative measures of HAP exposure and cardiovascular disease are lacking. We aim to improve upon exposure proxies based on fuel type, and to reduce exposure misclassification by quantitatively measuring exposure across varying cooking fuel types and conditions in diverse geographies and socioeconomic settings. We leverage technology advancements to estimate household and personal PM 2.5 (particles below 2.5 μm in aerodynamic diameter) exposure within the large (N~250,000) multi-country (N~26) Prospective Urban and Rural Epidemiological (PURE) cohort study. Here, we detail the study protocol and the innovative methodologies being used to characterize HAP exposures, and their application in epidemiologic analyses. This study characterizes HAP PM 2.5 exposures for participants in rural communities in ten PURE countries with >10% solid fuel use at baseline (Bangladesh, Brazil, Chile, China, Colombia, India, Pakistan, South Africa, Tanzania, and Zimbabwe). PM 2.5 monitoring includes 48-h cooking area measurements in 4500 households and simultaneous personal monitoring of male and female pairs from 20% of the selected households. Repeat measurements occur in 20% of households to assess the impacts of seasonality. Monitoring began in 2017 and will continue through 2019. The Ultrasonic Personal Aerosol Sampler (UPAS), a novel, robust, and inexpensive filter-based monitor that is programmable through a dedicated mobile phone application, is used for sampling. Pilot study field evaluation of cooking area measurements indicated high correlation between the UPAS and reference Harvard Impactors (r = 0.91; 95% CI: 0.84, 0.95; slope = 0.95). To facilitate tracking and to minimize contamination and analytical error, the samplers utilize barcoded filters and filter cartridges that are weighed pre- and post-sampling using a fully automated weighing system. Pump flow and pressure measurements, temperature and RH, GPS coordinates and semi-quantitative continuous particle mass concentrations based on filter differential pressure are uploaded to a central server automatically whenever the mobile phone is connected to the internet, with sampled data automatically screened for quality control parameters. A short survey is administered during the 48-h monitoring period. Post-weighed filters are further analyzed to estimate black carbon concentrations through a semi-automated, rapid, cost-effective image analysis approach. The measured PM 2.5 data will then be combined with PURE survey information on household characteristics and behaviours collected at baseline and during follow-up to develop quantitative HAP models of PM 2.5 exposures for all rural PURE participants (~50,000) and across different cooking fuel types within the 10 index countries. Both the measured (in the subset) and the modelled exposures will be used in separate longitudinal epidemiologic analyses to assess associations with cardiopulmonary mortality and disease incidence.
The collected data and resulting characterization of cooking area and personal PM 2.5 exposures in multiple rural communities from 10 countries will better inform exposure assessment as well as future epidemiologic analyses assessing the relationships between quantitative estimates of chronic HAP exposure and adult mortality and incident cardiovascular and respiratory disease. This will provide refined and more accurate exposure estimates in global CVD-related exposure-response analyses. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Xu, Zhonghua; Zhu, Lie; Sojka, Jan; Kokoszka, Piotr; Jach, Agnieszka
2008-08-01
A wavelet-based index of storm activity (WISA) has recently been developed [Jach, A., Kokoszka, P., Sojka, L., Zhu, L., 2006. Wavelet-based index of magnetic storm activity. Journal of Geophysical Research 111, A09215, doi:10.1029/2006JA011635] to complement the traditional Dst index. The new index can be computed automatically using the wavelet-based statistical procedure, without human intervention in the selection of quiet days and the removal of secular variations. In addition, the WISA is flexible with respect to the data stretch and has a higher temporal resolution (1 min), which can provide a better description of the dynamical variations of magnetic storms. In this work, we perform a systematic assessment of the WISA index. First, we statistically compare the WISA to the Dst for various quiet and disturbed periods and analyze the differences in their spectral features. Then we quantitatively assess the flexibility of the WISA with respect to the data stretch and study the effects of a varying number of stations on the index. In addition, the ability of the WISA to handle missing data is also quantitatively assessed. The assessment results show that the hourly averaged WISA index can describe storm activities as well as the Dst index, but its full automation, high flexibility on data stretch, ease of using data from a varying number of stations, high temporal resolution, and high tolerance to missing data from individual stations can be very valuable and essential for real-time monitoring of the dynamical variations of magnetic storm activities and for space weather applications, thus significantly complementing the existing Dst index.
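The WISA algorithm itself is specified in the cited paper; purely as a flavour of the wavelet idea, the sketch below (assuming the PyWavelets package; the wavelet choice, decomposition level, and signal model are illustrative, not the WISA procedure) removes the slow approximation band from a synthetic H-component record and keeps the storm-time detail.

```python
# Rough wavelet illustration (not WISA itself): decompose a 1-min magnetic
# record, zero the coarsest approximation (slow secular/quiet variation),
# and reconstruct a storm-disturbance proxy.
import numpy as np
import pywt  # PyWavelets, assumed installed

rng = np.random.default_rng(6)
n = 4096                                    # ~2.8 days of 1-min samples
t = np.arange(n)
secular = 0.01 * t                          # slow drift to be removed
storm = -80.0 * np.exp(-((t - 2000) / 200.0) ** 2)   # storm-like depression
h = secular + storm + rng.normal(0, 2.0, n)

coeffs = pywt.wavedec(h, "db4", level=8)
coeffs[0] = np.zeros_like(coeffs[0])        # drop the approximation band
index = pywt.waverec(coeffs, "db4")[:n]     # wavelet-based disturbance proxy
print(f"minimum of proxy: {index.min():.1f} nT near sample {index.argmin()}")
```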
Long, Zaiyang; Tradup, Donald J; Stekel, Scott F; Gorny, Krzysztof R; Hangiandreou, Nicholas J
2018-03-01
We evaluated a commercially available software package that uses B-mode images to semi-automatically measure quantitative metrics of ultrasound image quality, such as contrast response, depth of penetration (DOP), and spatial resolution (lateral, axial, and elevational). Since measurement of elevational resolution is not a part of the software package, we achieved it by acquiring phantom images with transducers tilted at 45 degrees relative to the phantom. Each measurement was assessed in terms of measurement stability, sensitivity, repeatability, and semi-automated measurement success rate. All assessments were performed on a GE Logiq E9 ultrasound system with linear (9L or 11L), curved (C1-5), and sector (S1-5) transducers, using a CIRS model 040GSE phantom. In stability tests, the measurements of contrast, DOP, and spatial resolution remained within a ±10% variation threshold in 90%, 100%, and 69% of cases, respectively. In sensitivity tests, contrast, DOP, and spatial resolution measurements followed the expected behavior in 100%, 100%, and 72% of cases, respectively. In repeatability testing, intra- and inter-individual coefficients of variations were equal to or less than 3.2%, 1.3%, and 4.4% for contrast, DOP, and spatial resolution (lateral and axial), respectively. The coefficients of variation corresponding to the elevational resolution test were all within 9.5%. Overall, in our assessment, the evaluated package performed well for objective and quantitative assessment of the above-mentioned image qualities under well-controlled acquisition conditions. We are finding it to be useful for various clinical ultrasound applications including performance comparison between scanners from different vendors. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Wang, Bei; Wang, Xingyu; Ikeda, Akio; Nagamine, Takashi; Shibasaki, Hiroshi; Nakamura, Masatoshi
2014-01-01
EEG (electroencephalography) interpretation is important for the diagnosis of neurological disorders. Proper adjustment of the montage can highlight the EEG rhythm of interest and avoid false interpretation. The aim of this study was to develop an automatic reference selection method to identify a suitable reference. The results may contribute to the accurate inspection of the distribution of EEG rhythms for quantitative EEG interpretation. The method includes two pre-judgements and one iterative detection module. The diffuse case is initially identified by pre-judgement 1 when intermittent rhythmic waveforms occur over large areas along the scalp. The earlobe reference or averaged reference is adopted for the diffuse case, the choice depending on the effect of the earlobe reference as determined by pre-judgement 2. An iterative detection algorithm is developed for the localised case, when the signal is distributed in a small area of the brain. The suitable averaged reference is finally determined based on the detected focal and distributed electrodes. The presented technique was applied to the pathological EEG recordings of nine patients. One example of the diffuse case is introduced by illustrating the results of the pre-judgements. The diffusely intermittent rhythmic slow wave is identified. The effect of an active earlobe reference is analysed. Two examples of the localised case are presented, indicating the results of the iterative detection module. The focal and distributed electrodes are detected automatically by the iterative algorithm. The identification of diffuse and localised activity was satisfactory compared with visual inspection. The EEG rhythm of interest can be highlighted using a suitably selected reference. The implementation of an automatic reference selection method is helpful for detecting the distribution of an EEG rhythm, which can improve the accuracy of EEG interpretation during both visual inspection and automatic interpretation. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
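Of the montage choices discussed, the averaged reference is the simplest to state: every channel is re-expressed against the mean of all scalp channels. A two-line sketch on a toy channels-by-samples array:

```python
# Common average reference: subtract the instantaneous mean over channels.
import numpy as np

eeg = np.random.default_rng(7).normal(size=(19, 2560))   # 19-ch, 10 s @ 256 Hz
avg_ref = eeg - eeg.mean(axis=0, keepdims=True)          # re-referenced data
print(np.abs(avg_ref.mean(axis=0)).max())                # ~0 across channels
```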
Paproki, A; Engstrom, C; Chandra, S S; Neubert, A; Fripp, J; Crozier, S
2014-09-01
To validate an automatic scheme for the segmentation and quantitative analysis of the medial meniscus (MM) and lateral meniscus (LM) in magnetic resonance (MR) images of the knee. We analysed sagittal water-excited double-echo steady-state MR images of the knee from a subset of the Osteoarthritis Initiative (OAI) cohort. The MM and LM were automatically segmented in the MR images based on a deformable model approach. Quantitative parameters including volume, subluxation and tibial coverage were automatically calculated for comparison (Wilcoxon tests) between knees with variable radiographic osteoarthritis (rOA), medial and lateral joint space narrowing (mJSN, lJSN) and pain. Automatic segmentations and estimated parameters were evaluated for accuracy using manual delineations of the menisci in 88 pathological knee MR examinations at the baseline and 12-month time points. The median (95% confidence interval (CI)) Dice similarity index (DSI = 2 × |Auto ∩ Manual| / (|Auto| + |Manual|) × 100) between manual and automated segmentations for the MM and LM volumes was 78.3% (75.0-78.7) and 83.9% (82.1-83.9) at baseline, and 75.3% (72.8-76.9) and 83.0% (81.6-83.5) at 12 months. Pearson coefficients between automatic and manual segmentation parameters ranged from r = 0.70 to r = 0.92. The MM in rOA/mJSN knees had significantly greater subluxation and smaller tibial coverage than in no-rOA/no-mJSN knees. The LM in rOA knees had significantly greater volumes and tibial coverage than in no-rOA knees. Our automated method successfully segmented the menisci in normal and osteoarthritic knee MR images and detected meaningful morphological differences with respect to rOA and joint space narrowing (JSN). Our approach will facilitate analyses of the menisci in prospective MR cohorts such as the OAI for investigations into the pathophysiological changes occurring in early osteoarthritis (OA) development. Copyright © 2014 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
Automatic inference of multicellular regulatory networks using informative priors.
Sun, Xiaoyun; Hong, Pengyu
2009-01-01
To fully understand the mechanisms governing animal development, computational models and algorithms are needed to enable quantitative studies of the underlying regulatory networks. We developed a mathematical model based on dynamic Bayesian networks to model multicellular regulatory networks that govern cell differentiation processes. A machine-learning method was developed to automatically infer such a model from heterogeneous data. We show that the model inference procedure can be greatly improved by incorporating interaction data across species. The proposed approach was applied to C. elegans vulval induction to reconstruct a model capable of simulating C. elegans vulval induction under 73 different genetic conditions.
A Clustering-Based Approach to Enriching Code Foraging Environment.
Niu, Nan; Jin, Xiaoyu; Niu, Zhendong; Cheng, Jing-Ru C; Li, Ling; Kataev, Mikhail Yu
2016-09-01
Developers often spend valuable time navigating and seeking relevant code during software maintenance. Currently, there is a lack of theoretical foundations to guide tool design and evaluation so as to best shape the code base for developers. This paper contributes a unified code navigation theory in light of optimal food-foraging principles. We further develop a novel framework for automatically assessing foraging mechanisms in the context of program investigation. We use the framework to examine to what extent the clustering of software entities affects code foraging. Our quantitative analysis of long-lived open-source projects suggests that clustering enriches the software environment and improves foraging efficiency. Our qualitative inquiry reveals concrete insights into real developers' behavior. Our research opens the avenue toward building a new set of ecologically valid code navigation tools.
Quantification of osteolytic bone lesions in a preclinical rat trial
NASA Astrophysics Data System (ADS)
Fränzle, Andrea; Bretschi, Maren; Bäuerle, Tobias; Giske, Kristina; Hillengass, Jens; Bendl, Rolf
2013-10-01
In breast cancer, most patients who die have developed bone metastases as the disease progresses. Bone metastases in breast cancer are mainly bone-destructive (osteolytic). To understand pathogenesis and to analyse response to different treatments, animal models, in our case rats, are examined. Assessing treatment response to bone-remodelling therapies requires exact segmentations of osteolytic lesions. Manual segmentations are not only time-consuming but also lack reproducibility, so computerized segmentation tools are essential. In this paper we present an approach for the computerized quantification of osteolytic lesion volumes using a comparison to a healthy reference model. The presented qualitative and quantitative evaluation of the reconstructed bone volumes shows that the automatically segmented lesion volumes complete the missing bone in a reasonable way.
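As a rough illustration of quantification against a healthy reference model, a lesion volume can be read off as the reference-bone voxels missing from the patient segmentation, assuming the two masks are already co-registered; this simplified sketch is not the authors' pipeline.

```python
import numpy as np

def lesion_volume(patient_bone, reference_bone, voxel_volume_mm3):
    """Lesion = bone voxels present in the healthy reference but absent
    from the patient segmentation (masks must be co-registered)."""
    missing = np.logical_and(reference_bone, np.logical_not(patient_bone))
    return missing.sum() * voxel_volume_mm3

reference = np.zeros((50, 50, 50), dtype=bool)
reference[10:40, 10:40, 10:40] = True
patient = reference.copy()
patient[20:30, 20:30, 20:30] = False          # simulated osteolytic defect
print("lesion volume:", lesion_volume(patient, reference, 0.027), "mm^3")
```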
Gentry, Sarah; Forouhi, Nita; Notley, Caitlin
2018-03-28
Smoking prevalence remains high in some vulnerable groups, including those who misuse substances, have a mental illness, are homeless or are involved with the criminal justice system. E-cigarette use is increasing and may support smoking cessation/reduction. We conducted a systematic review of quantitative and qualitative data on the effectiveness of e-cigarettes for smoking cessation/reduction among vulnerable groups. Databases searched were MEDLINE, EMBASE, PsychINFO, CINAHL, ASSIA, ProQuest Dissertations and Theses and Open Grey, followed by narrative synthesis of quantitative data and thematic synthesis of qualitative data. 2628 records and 46 full texts were screened; 9 studies were identified for inclusion. Due to the low quality of evidence, it is uncertain whether e-cigarettes are effective for smoking cessation in vulnerable populations. A moderate-quality study suggested e-cigarettes were as effective as nicotine replacement therapy. Four studies suggested significant smoking reduction; however, three were uncontrolled and had sample sizes below 30. A prospective cohort study found no differences between e-cigarette users and non-users. No significant adverse events and minimal side effects were identified. Qualitative thematic synthesis revealed barriers and facilitators associated with each component of the COM-B (capability, opportunity, motivation, behaviour) model, including practical barriers; perceptions of effectiveness for cessation/reduction; design features contributing to automatic and reflective motivation; smoking bans facilitating practical opportunity; and social connectedness increasing social opportunity. Further research is needed to identify the most appropriate device types for practicality and safety, the level of support required in e-cigarette interventions, and how e-cigarettes compare with current best-practice smoking cessation support among vulnerable groups. Smoking prevalence among people with mental illness, substance misuse, homelessness or criminal justice system involvement remains high. E-cigarettes could support cessation. This systematic review found limited quantitative evidence assessing effectiveness. No serious adverse events were identified. Qualitative thematic synthesis revealed barriers and facilitators mapping to each component of the COM-B model, including practical barriers; perceived effectiveness; design features contributing to automatic and reflective motivation; smoking bans facilitating practical opportunity; and social connectedness increasing social opportunity. Further research should consider appropriate devices for practicality and safety, concurrent support, and comparison with best-practice smoking cessation support.
Automatic pelvis segmentation from x-ray images of a mouse model
NASA Astrophysics Data System (ADS)
Al Okashi, Omar M.; Du, Hongbo; Al-Assam, Hisham
2017-05-01
The automatic detection and quantification of skeletal structures have a variety of applications in biological research. Accurate segmentation of the pelvis from X-ray images of mice in a high-throughput project such as the Mouse Genomes Project not only saves time and cost but also helps achieve an unbiased quantitative analysis within the phenotyping pipeline. This paper proposes an automatic solution for pelvis segmentation based on structural and orientation properties of the pelvis in X-ray images. The solution consists of three stages: pre-processing the image to extract the pelvis area, preparing an initial pelvis mask, and final pelvis segmentation. Experimental results on a set of 100 X-ray images showed consistent performance of the algorithm. The automated solution overcomes the weaknesses of a manual annotation procedure, where intra- and inter-observer variations cannot be avoided.
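A toy version of such a three-stage pipeline (pre-processing, initial mask, refined mask) can be sketched with standard image-processing primitives; the threshold and morphological parameters below are illustrative assumptions, not the published ones.

```python
import numpy as np
from scipy import ndimage

def segment_pelvis(xray):
    # Stage 1: pre-process -- normalise intensities to [0, 1]
    img = (xray - xray.min()) / (xray.max() - xray.min() + 1e-12)
    # Stage 2: initial mask -- bright skeletal structures above a global threshold
    initial = ndimage.binary_opening(img > 0.6, iterations=2)
    # Stage 3: refine -- keep the largest connected component and fill holes
    labels, n = ndimage.label(initial)
    if n == 0:
        return initial
    sizes = ndimage.sum(initial, labels, range(1, n + 1))
    return ndimage.binary_fill_holes(labels == (np.argmax(sizes) + 1))

demo = np.zeros((128, 128))
demo[40:90, 30:100] = 1.0                                  # bright "pelvis" region
demo += 0.1 * np.random.default_rng(1).standard_normal(demo.shape)
print("segmented pixels:", int(segment_pelvis(demo).sum()))
```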
Examination of a cognitive model of stress, burnout, and intention to resign for Japanese nurses.
Ohue, Takashi; Moriyama, Michiko; Nakaya, Takashi
2011-06-01
A reduction in burnout is required to decrease the voluntary turnover of nurses. This study was carried out with the aim of establishing a cognitive model of stress, burnout, and intention to resign for nurses. A questionnaire survey was administered to 336 nurses (27 male and 309 female) who had worked for ≤5 years at a hospital with multiple departments. The survey included an evaluation of burnout (Maslach Burnout Inventory), stress (Nursing Job Stressor Scale), automatic thoughts (Automatic Thoughts Questionnaire-Revised), and irrational beliefs (Japanese Irrational Belief Test), in addition to the intention to resign. The stressors that affected burnout in the nurses included conflict with other nursing staff, nursing role conflict, qualitative workload, quantitative workload, and conflict with patients. The irrational beliefs that were related to burnout included dependence, problem avoidance, and helplessness. To examine the automatic thoughts affecting burnout, groups with low and high negative automatic thoughts and low and high positive automatic thoughts were established. A two-way ANOVA showed a significant interaction of these factors with emotional exhaustion, but no significant interaction with depersonalization or personal sense of accomplishment, for which only the main effects were significant. The final model showed a process of "stressor → irrational beliefs → negative automatic thoughts/positive automatic thoughts → burnout". In addition, a relationship between burnout and an intention to resign was shown. These results suggest that stress and burnout in nurses might be prevented, and the number of nurses who leave their position decreased, by changing irrational beliefs to rational beliefs, decreasing negative automatic thoughts, and facilitating positive automatic thoughts. © 2010 The Authors. Japan Journal of Nursing Science © 2010 Japan Academy of Nursing Science.
Automatic mental associations predict future choices of undecided decision-makers.
Galdi, Silvia; Arcuri, Luciano; Gawronski, Bertram
2008-08-22
Common wisdom holds that choice decisions are based on conscious deliberations of the available information about choice options. On the basis of recent insights about unconscious influences on information processing, we tested whether automatic mental associations of undecided individuals bias future choices in a manner such that these choices reflect the evaluations implied by earlier automatic associations. With the use of a computer-based, speeded categorization task to assess automatic mental associations (i.e., associations that are activated unintentionally, difficult to control, and not necessarily endorsed at a conscious level) and self-report measures to assess consciously endorsed beliefs and choice preferences, automatic associations of undecided participants predicted changes in consciously reported beliefs and future choices over a period of 1 week. Conversely, for decided participants, consciously reported beliefs predicted changes in automatic associations and future choices over the same period. These results indicate that decision-makers sometimes have already made up their mind at an unconscious level, even when they consciously indicate that they are still undecided.
Affective Evaluations of Exercising: The Role of Automatic-Reflective Evaluation Discrepancy.
Brand, Ralf; Antoniewicz, Franziska
2016-12-01
Sometimes our automatic evaluations do not correspond well with those we can reflect on and articulate. We present a novel approach to the assessment of automatic and reflective affective evaluations of exercising. Based on the assumptions of the associative-propositional processes in evaluation model, we measured participants' automatic evaluations of exercise, then shared this information with them, asked them to reflect on it, and had them rate any discrepancy between their reflective evaluation and the assessment of their automatic evaluation. We found that the mismatch between self-reported ideal exercise frequency and actual exercise frequency over the previous 14 weeks could be regressed on the discrepancy between a relatively negative automatic and a more positive reflective evaluation. This study illustrates the potential of a dual-process approach to the measurement of evaluative responses and suggests that mistrusting one's negative spontaneous reaction to exercise and asserting a very positive reflective evaluation instead leads to the adoption of inflated exercise goals.
ERIC Educational Resources Information Center
Clayton, Francina J.; Hulme, Charles
2018-01-01
The automatic letter-sound integration hypothesis proposes that the decoding difficulties seen in dyslexia arise from a specific deficit in establishing automatic letter-sound associations. We report the findings of 2 studies in which we used a priming task to assess automatic letter-sound integration. In Study 1, children between 5 and 7 years of…
Automatic atlas-based three-label cartilage segmentation from MR knee images
Shan, Liang; Zach, Christopher; Charles, Cecil; Niethammer, Marc
2016-01-01
Osteoarthritis (OA) is the most common form of joint disease and often characterized by cartilage changes. Accurate quantitative methods are needed to rapidly screen large image databases to assess changes in cartilage morphology. We therefore propose a new automatic atlas-based cartilage segmentation method for future automatic OA studies. Atlas-based segmentation methods have been demonstrated to be robust and accurate in brain imaging and therefore also hold high promise to allow for reliable and high-quality segmentations of cartilage. Nevertheless, atlas-based methods have not been well explored for cartilage segmentation. A particular challenge is the thinness of cartilage, its relatively small volume in comparison to surrounding tissue and the difficulty to locate cartilage interfaces – for example the interface between femoral and tibial cartilage. This paper focuses on the segmentation of femoral and tibial cartilage, proposing a multi-atlas segmentation strategy with non-local patch-based label fusion which can robustly identify candidate regions of cartilage. This method is combined with a novel three-label segmentation method which guarantees the spatial separation of femoral and tibial cartilage, and ensures spatial regularity while preserving the thin cartilage shape through anisotropic regularization. Our segmentation energy is convex and therefore guarantees globally optimal solutions. We perform an extensive validation of the proposed method on 706 images of the Pfizer Longitudinal Study. Our validation includes comparisons of different atlas segmentation strategies, different local classifiers, and different types of regularizers. To compare to other cartilage segmentation approaches we validate based on the 50 images of the SKI10 dataset. PMID:25128683
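The paper's non-local patch-based label fusion is more elaborate, but the core multi-atlas idea, fusing several co-registered atlas labelings by a weighted vote, can be sketched in a few lines (a simplification, not the published method):

```python
import numpy as np

def majority_vote_fusion(atlas_labels, weights=None):
    """atlas_labels: co-registered integer label volumes of identical shape.
    Returns the per-voxel label with the highest (weighted) vote."""
    stack = np.stack(atlas_labels)                     # (n_atlases, ...)
    weights = np.ones(len(atlas_labels)) if weights is None else np.asarray(weights)
    n_labels = int(stack.max()) + 1
    votes = np.zeros((n_labels,) + stack.shape[1:])
    for lab in range(n_labels):
        votes[lab] = np.tensordot(weights, (stack == lab).astype(float), axes=1)
    return votes.argmax(axis=0)

a1 = np.array([[0, 1], [2, 2]])
a2 = np.array([[0, 1], [1, 2]])
a3 = np.array([[0, 0], [2, 2]])
print(majority_vote_fusion([a1, a2, a3]))   # -> [[0 1] [2 2]]
```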
ERIC Educational Resources Information Center
Pujayanto, Pujayanto; Budiharti, Rini; Adhitama, Egy; Nuraini, Niken Rizky Amalia; Putri, Hanung Vernanda
2018-01-01
This research proposes the development of a web-based assessment system to identify students' misconception. The system, named WAS (web-based assessment system), can identify students' misconception profile on linear kinematics automatically after the student has finished the test. The test instrument was developed and validated. Items were…
Automatic retinal interest evaluation system (ARIES).
Yin, Fengshou; Wong, Damon Wing Kee; Yow, Ai Ping; Lee, Beng Hai; Quan, Ying; Zhang, Zhuo; Gopalakrishnan, Kavitha; Li, Ruoying; Liu, Jiang
2014-01-01
In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases such as glaucoma, age-related macular degeneration and diabetic retinopathy. In practice, however, retinal image quality is a major concern, as automatic systems that do not account for degraded image quality will likely generate unreliable results. In this paper, an automatic retinal image quality assessment system (ARIES) is introduced to assess both the quality of the whole image and that of focal regions of interest. ARIES achieves 99.54% accuracy in distinguishing fundus images from other types of images through a retinal image identification step in a dataset of 35342 images. The system employs high-level image quality measures (HIQM) to perform image quality assessment, and achieves areas under curve (AUCs) of 0.958 and 0.987 for the whole image and the optic disk region, respectively, in a testing dataset of 370 images. ARIES acts as a form of automatic quality control that ensures good-quality images are used for processing, and can also be used to alert operators to poor-quality images at the time of acquisition.
Wearable Monitoring Devices for Assistive Technology: Case Studies in Post-Polio Syndrome
Andreoni, Giuseppe; Mazzola, Marco; Perego, Paolo; Standoli, Carlo Emilio; Manzoni, Simone; Piccini, Luca; Molteni, Franco
2014-01-01
The correct choice and customization of an orthosis are crucial to obtain the best comfort and efficiency. This study explored the feasibility of a multivariate quantitative assessment of the functional efficiency of lower-limb orthoses through a novel wearable system. Gait basographic parameters and energetic indexes were analysed during a Six-Minute Walking Test (6-MWT) through a cost-effective, non-invasive polygraph device with multichannel wireless transmission that recorded the electrocardiograph (ECG) and impedance-cardiograph (ICG) and detected lower-limb accelerations. Four subjects affected by Post-Polio Syndrome (PPS) were recruited. The wearable device and the semi-automatic post-processing software provided a novel set of objective data to assess the overall efficiency of the patient-orthosis system. Despite the small number of examined subjects, the results obtained with this new approach encourage applying the method to a larger dataset to validate this promising protocol and measuring system for supporting clinical decisions outside the laboratory environment. PMID:24469354
Berens, Angelique M; Harbison, Richard Alex; Li, Yangming; Bly, Randall A; Aghdasi, Nava; Ferreira, Manuel; Hannaford, Blake; Moe, Kris S
2017-08-01
To develop a method to measure intraoperative surgical instrument motion. This model will be applicable to the study of surgical instrument kinematics, including surgical training, skill verification, and the development of surgical warning systems that detect aberrant instrument motion that may result in patient injury. We developed an algorithm to automate the derivation of surgical instrument kinematics in an endoscopic endonasal skull base surgery model. Surgical instrument motion was recorded during a cadaveric endoscopic transnasal approach to the pituitary using a navigation system modified to record intraoperative time-stamped Euclidean coordinates and Euler angles. Microdebrider tip coordinates and angles were referenced to the cadaver's preoperative computed tomography scan, allowing us to assess surgical instrument kinematics over time. A representative cadaveric endoscopic endonasal approach to the pituitary was performed to demonstrate the feasibility of our algorithm for deriving surgical instrument kinematics. Technical feasibility of automatically measuring intraoperative surgical instrument motion and deriving kinematics measurements was demonstrated using standard navigation equipment.
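A hedged sketch of the kind of kinematics that can be derived from time-stamped tip coordinates (speed and path length; the demo trajectory is synthetic):

```python
import numpy as np

def kinematics(t, xyz):
    """t: (n,) timestamps in seconds; xyz: (n, 3) tip coordinates in mm."""
    step = np.linalg.norm(np.diff(xyz, axis=0), axis=1)   # mm per sample
    speed = step / np.diff(t)                             # mm/s between samples
    return {"path_length_mm": float(step.sum()),
            "mean_speed_mm_s": float(speed.mean()),
            "peak_speed_mm_s": float(speed.max())}

t = np.linspace(0.0, 2.0, 201)
xyz = 50.0 * np.column_stack([np.sin(t), np.cos(t), 0.1 * t])  # synthetic sweep
print(kinematics(t, xyz))
```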
Selecting Cases for Intensive Analysis: A Diversity of Goals and Methods
ERIC Educational Resources Information Center
Gerring, John; Cojocaru, Lee
2016-01-01
This study revisits the task of case selection in case study research, proposing a new typology of strategies that is explicit, disaggregated, and relatively comprehensive. A secondary goal is to explore the prospects for case selection by "algorithm," aka "ex ante," "automatic," "quantitative,"…
Microfluidic platform integrated with worm-counting setup for assessing manganese toxicity
Zhang, Beibei; Li, Yinbao; He, Qidi; Qin, Jun; Yu, Yanyan; Li, Xinchun; Zhang, Lin; Yao, Meicun; Liu, Junshan; Chen, Zuanguang
2014-01-01
We report a new microfluidic system integrated with worm responders for evaluating environmental manganese toxicity. The microdevice consists of worm loading units, worm observing chambers, and a radial concentration gradient generator (CGG). Eight T-shaped worm loading units were used to load the exact number of worms into the corresponding eight chambers with the assistance of worm responders and doorsills. The worm responder, as a key component, was employed to perform an automated worm-counting assay through electric impedance sensing; this label-free and non-invasive worm-counting technique was applied to the microsystem for the first time. In addition, the disk-shaped CGG can generate a range of stepwise concentrations of the appointed chemical automatically and simultaneously, and the scalable architecture of the radial CGG has the potential to increase the throughput of the assay. Dopaminergic (DAergic) neurotoxicity of manganese on C. elegans was quantitatively assessed via observation of the green fluorescent protein-tagged DAergic neurons of the strain BZ555 on-chip. In addition, oxidative stress triggered by manganese was evaluated by the quantitative fluorescence intensity of the strain CL2166. By scoring the survival ratio and stroke frequency of worms, we characterized the dose- and time-dependent mobility defects of the manganese-exposed worms. Furthermore, we applied the microsystem to investigate the effect of natural antioxidants in protecting against manganese-induced toxicity. PMID:25538805
Kirişli, H A; Schaap, M; Metz, C T; Dharampal, A S; Meijboom, W B; Papadopoulou, S L; Dedic, A; Nieman, K; de Graaf, M A; Meijs, M F L; Cramer, M J; Broersen, A; Cetin, S; Eslami, A; Flórez-Valencia, L; Lor, K L; Matuszewski, B; Melki, I; Mohr, B; Oksüz, I; Shahzad, R; Wang, C; Kitslaar, P H; Unal, G; Katouzian, A; Örkisz, M; Chen, C M; Precioso, F; Najman, L; Masood, S; Ünay, D; van Vliet, L; Moreno, R; Goldenberg, R; Vuçini, E; Krestin, G P; Niessen, W J; van Walsum, T
2013-12-01
Though conventional coronary angiography (CCA) has been the standard of reference for diagnosing coronary artery disease in the past decades, computed tomography angiography (CTA) has rapidly emerged, and is nowadays widely used in clinical practice. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of the algorithms devised to detect and quantify the coronary artery stenoses, and to segment the coronary artery lumen in CTA data. The objective of this evaluation framework is to demonstrate the feasibility of dedicated algorithms to: (1) (semi-)automatically detect and quantify stenosis on CTA, in comparison with quantitative coronary angiography (QCA) and CTA consensus reading, and (2) (semi-)automatically segment the coronary lumen on CTA, in comparison with expert's manual annotation. A database consisting of 48 multicenter multivendor cardiac CTA datasets with corresponding reference standards are described and made available. The algorithms from 11 research groups were quantitatively evaluated and compared. The results show that (1) some of the current stenosis detection/quantification algorithms may be used for triage or as a second-reader in clinical practice, and that (2) automatic lumen segmentation is possible with a precision similar to that obtained by experts. The framework is open for new submissions through the website, at http://coronary.bigr.nl/stenoses/. Copyright © 2013 Elsevier B.V. All rights reserved.
Puzzle test: A tool for non-analytical clinical reasoning assessment.
Monajemi, Alireza; Yaghmaei, Minoo
2016-01-01
Most contemporary clinical reasoning tests typically assess non-automatic thinking. Therefore, a test is needed to measure automatic reasoning or pattern recognition, which has been largely neglected in clinical reasoning tests. The Puzzle Test (PT) is dedicated to assessing automatic clinical reasoning in routine situations. This test was first introduced in 2009 by Monajemi et al in the Olympiad for Medical Sciences Students. PT is an item format that has gained acceptance in medical education, but no detailed guidelines exist for this test's format, construction and scoring. In this article, a format is described and the steps to prepare and administer valid and reliable PTs are presented. PT examines a specific clinical reasoning task: pattern recognition. PT does not replace other clinical reasoning assessment tools. However, it complements them in strategies for assessing comprehensive clinical reasoning.
Liu, Peiying; Lu, Hanzhang; Filbey, Francesca M.; Pinkham, Amy E.; McAdams, Carrie J.; Adinoff, Bryon; Daliparthi, Vamsi; Cao, Yan
2014-01-01
Phase-Contrast MRI (PC-MRI) is a noninvasive technique to measure blood flow. In particular, global but highly quantitative cerebral blood flow (CBF) measurement using PC-MRI complements several other CBF mapping methods such as arterial spin labeling and dynamic susceptibility contrast MRI by providing a calibration factor. The ability to estimate blood supply in physiological units also lays a foundation for assessment of brain metabolic rate. However, a major obstacle to wider application of this method is that slice positioning of the scan, ideally perpendicular to the feeding arteries, requires considerable expertise and can present a burden to the operator. In the present work, we proposed that the majority of PC-MRI scans can be positioned using an automatic algorithm, leaving only a small fraction of arteries requiring manual positioning. We implemented and evaluated an algorithm for this purpose based on feature extraction of a survey angiogram, which is of minimal operator dependence. In a comparative test-retest study with 7 subjects, the blood flow measurement using this algorithm showed an inter-session coefficient of variation (CoV) of . The Bland-Altman method showed that the automatic method differs from the manual method by between and , for of the CBF measurements. This is comparable to the variance in CBF measurement using manually-positioned PC-MRI alone. Applied to 157 consecutive subjects from typical clinical cohorts, the algorithm provided successful positioning in 89.7% of the arteries. In 79.6% of the subjects, all four arteries could be planned using the algorithm. Chi-square tests of independence showed that the success rate was not dependent on age or gender, but patients showed a trend toward a lower success rate (p = 0.14) compared to healthy controls. In conclusion, this automatic positioning algorithm could improve the application of PC-MRI in CBF quantification. PMID:24787742
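The two reproducibility statistics named above, the inter-session coefficient of variation and Bland-Altman agreement, are straightforward to compute; a sketch with invented demo values (not study data):

```python
import numpy as np

def coefficient_of_variation(session1, session2):
    """Mean within-subject SD divided by the within-subject mean, in percent."""
    pairs = np.stack([session1, session2])
    return float(np.mean(pairs.std(axis=0, ddof=1) / pairs.mean(axis=0)) * 100)

def bland_altman(auto, manual):
    """Returns the bias and the 95% limits of agreement."""
    diff = np.asarray(auto) - np.asarray(manual)
    bias, loa = diff.mean(), 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)

cbf_s1 = np.array([52.1, 48.3, 55.0, 49.9, 51.2, 47.8, 53.4])  # demo values
cbf_s2 = np.array([50.8, 49.5, 54.1, 51.0, 50.3, 48.9, 52.6])
print("inter-session CoV (%):", round(coefficient_of_variation(cbf_s1, cbf_s2), 2))
print("Bland-Altman bias and limits:", bland_altman(cbf_s1, cbf_s2))
```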
Autoradiographic method for quantitation of deposition and distribution of radiocalcium in bone
Lawrence Riggs, B; Bassingthwaighte, James B.; Jowsey, Jenifer; Peter Pequegnat, E
2010-01-01
A method is described for quantitating autoradiographs of bone-seeking isotopes in microscopic sections of bone. Autoradiographs of bone sections containing 45Ca and internal calibration standards are automatically scanned with a microdensitometer. The digitized optical density output is stored on magnetic tape and is converted by computer to equivalent activity of 45Ca per gram of bone. The computer determines the total 45Ca uptake in the bone section and, on the basis of optical density and anatomic position, quantitatively divides the uptake into 4 components, each representing a separate physiologic process (bone formation, secondary mineralization, diffuse long-term exchange, and surface short-term exchange). The method is also applicable for quantitative analysis of microradiographs of bone sections for mineral content and density. PMID:5416906
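The calibration step, mapping optical density to 45Ca activity via co-exposed standards, reduces to fitting a calibration curve; a minimal sketch assuming a linear relationship and invented standard values:

```python
import numpy as np

def od_to_activity(od_image, standard_od, standard_activity):
    """Fit activity = a*OD + b on the calibration standards, map the image."""
    a, b = np.polyfit(standard_od, standard_activity, 1)
    return a * od_image + b

standard_od = np.array([0.1, 0.4, 0.8, 1.2])            # measured densities
standard_activity = np.array([5.0, 20.0, 41.0, 60.0])   # known activity equivalents
od_image = np.random.default_rng(2).uniform(0.1, 1.2, size=(32, 32))
activity = od_to_activity(od_image, standard_od, standard_activity)
print("total 45Ca uptake (arbitrary units):", round(float(activity.sum()), 1))
```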
Automatic safety belt systems : changes in owner usage over time in GM Chevettes and VW Rabbits
DOT National Transportation Integrated Search
1981-08-01
This study was designed to: (1) determine any decrement in use of the automatic restraint system, and (2) assess any change in owners' attitudes toward the automatic restraint system over a two year period. The information gathered will assist the NH...
Automatic Item Generation of Probability Word Problems
ERIC Educational Resources Information Center
Holling, Heinz; Bertling, Jonas P.; Zeuch, Nina
2009-01-01
Mathematical word problems represent a common item format for assessing student competencies. Automatic item generation (AIG) is an effective way of constructing many items with predictable difficulties, based on a set of predefined task parameters. The current study presents a framework for the automatic generation of probability word problems…
Sheets, Alison L.; Lai, Po-Lun; Fisher, Lesley C.; Basso, D. Michele
2013-01-01
Thousands of scientists strive to identify cellular mechanisms that could lead to breakthroughs in developing ameliorative treatments for debilitating neural and muscular conditions such as spinal cord injury (SCI). Most studies use rodent models to test hypotheses, and these are all limited by the methods available to evaluate animal motor function. This study’s goal was to develop a behavioral and locomotor assessment system in a murine model of SCI that enables quantitative kinematic measurements to be made automatically in the open-field by applying markerless motion tracking approaches. Three-dimensional movements of eight naïve, five mild, five moderate, and four severe SCI mice were recorded using 10 cameras (100 Hz). Background subtraction was used in each video frame to identify the animal’s silhouette, and the 3D shape at each time was reconstructed using shape-from-silhouette. The reconstructed volume was divided into front and back halves using k-means clustering. The animal’s front Center of Volume (CoV) height and whole-body CoV speed were calculated and used to automatically classify animal behaviors including directed locomotion, exploratory locomotion, meandering, standing, and rearing. More detailed analyses of CoV height, speed, and lateral deviation during directed locomotion revealed behavioral differences and functional impairments in animals with mild, moderate, and severe SCI when compared with naïve animals. Naïve animals displayed the widest variety of behaviors including rearing and crossing the center of the open-field, the fastest speeds, and tallest rear CoV heights. SCI reduced the range of behaviors, and decreased speed (r = .70, p < .005) and rear CoV height (r = .65, p < .01) were significantly correlated with greater lesion size. This markerless tracking approach is a first step toward fundamentally changing how rodent movement studies are conducted. By providing scientists with sensitive, quantitative measurement methods, subjectivity and human error are reduced, potentially providing insights leading to breakthroughs in treating human disease. PMID:24058586
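The behaviour-classification step lends itself to a simple decision rule on front-CoV height and whole-body CoV speed; a sketch with illustrative thresholds (the published cut-offs are not reproduced here):

```python
import numpy as np

def classify_behavior(front_cov_height_mm, cov_speed_mm_s,
                      rear_h=60.0, stand_h=45.0, fast=80.0, slow=20.0):
    """Frame-wise behaviour label from CoV height and speed (toy thresholds)."""
    if front_cov_height_mm > rear_h:
        return "rearing"
    if cov_speed_mm_s > fast:
        return "directed locomotion"
    if cov_speed_mm_s > slow:
        return "exploratory locomotion"
    if front_cov_height_mm > stand_h:
        return "standing"
    return "meandering"

heights = np.array([30.0, 50.0, 70.0, 40.0])
speeds = np.array([10.0, 30.0, 5.0, 120.0])
print([classify_behavior(h, s) for h, s in zip(heights, speeds)])
```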
Zhou, Teng; Han, Guoqiang; Li, Bing Nan; Lin, Zhizhe; Ciaccio, Edward J; Green, Peter H; Qin, Jing
2017-06-01
Celiac disease is one of the most common diseases in the world. Capsule endoscopy is an alternative way to visualize the entire small intestine without invasiveness to the patient. It is useful for characterizing celiac disease, but hours are needed to manually analyze the retrospective data of a single patient. Computer-aided quantitative analysis by a deep learning method helps alleviate the workload during analysis of the retrospective videos. Capsule endoscopy clips from 6 celiac disease patients and 5 controls were preprocessed for training. Frames with a large field of opaque extraluminal fluid or air bubbles were removed automatically by a pre-selection algorithm. The frames were then cropped and intensity-corrected prior to frame rotation in the proposed new method, and GoogLeNet was trained with these frames. Capsule endoscopy clips from 5 additional celiac disease patients and 5 additional control patients were then used for testing. The trained GoogLeNet was able to distinguish frames from capsule endoscopy clips of celiac disease patients versus controls. A quantitative measurement with evaluation of confidence was developed to assess the severity level of pathology in the subjects. Relying on the evaluation confidence, GoogLeNet achieved 100% sensitivity and specificity for the testing set. A t-test confirmed that the evaluation confidence significantly distinguishes celiac disease patients from controls. Furthermore, the evaluation confidence may also relate to the severity level of small bowel mucosal lesions. A deep convolutional neural network was thus established for quantitative measurement of the existence and degree of pathology throughout the small intestine, which may improve computer-aided clinical techniques for assessing mucosal atrophy and other etiologies in real time with videocapsule endoscopy. Copyright © 2017 Elsevier Ltd. All rights reserved.
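The transfer-learning setup, an ImageNet-pretrained GoogLeNet re-headed for two classes, can be sketched with torchvision; data loading, preprocessing and the training loop are omitted, and the hyperparameters are assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained GoogLeNet, re-headed for celiac vs control
net = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
net.fc = nn.Linear(net.fc.in_features, 2)
net.train()

frames = torch.randn(4, 3, 224, 224)       # stand-in for preprocessed frames
labels = torch.tensor([0, 1, 0, 1])
out = net(frames)                           # train mode returns (logits, aux2, aux1)
logits = out.logits if hasattr(out, "logits") else out
loss = nn.CrossEntropyLoss()(logits, labels)
loss.backward()

# A per-frame "evaluation confidence" could then be the softmax probability
confidence = torch.softmax(logits.detach(), dim=1)
print(confidence)
```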
Precise nanoliter fluid handling system with integrated high-speed flow sensor.
Haber, Carsten; Boillat, Marc; van der Schoot, Bart
2005-04-01
A system for accurate low-volume delivery of liquids in the micro- to nanoliter range makes use of an integrated miniature flow sensor as part of an intelligent feedback control loop driving a micro-solenoid valve. The flow sensor is hydraulically connected with the pressurized system liquid in the dispensing channel and located downstream from the pressure source, above the solenoid valve. The sensor operates in a differential mode and responds in real-time to the internal flow-pulse resulting from the brief opening interval of the solenoid valve leading to a rapid ejection of a fluid droplet. The integral of the flow-pulse delivered by the sensor is directly proportional to the volume of the ejected droplet from the nozzle. The quantitative information is utilized to provide active control of the effectively dispensed or aspirated volume by adjusting the solenoid valve accordingly. This process significantly enhances the precision of the fluid delivery. The system furthermore compensates automatically for any changes in the viscosity of the dispensed liquid. The data delivered by the flow sensor can be saved and backtracked in order to confirm and validate the aspiration and dispensing process in its entirety. The collected dispense information can be used for quality control assessments and automatically be made part of an electronic record.
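The metering principle, integrating the flow pulse to obtain the ejected volume and feeding the error back into the valve-open interval, can be sketched as follows (all signals are simulated; the proportional-correction gain is an assumption):

```python
import numpy as np

def dispensed_volume(flow_nl_per_s, dt_s):
    """Trapezoidal integral of the flow pulse = ejected droplet volume."""
    f = np.asarray(flow_nl_per_s)
    return float(np.sum((f[1:] + f[:-1]) / 2) * dt_s)

def adjust_open_time(open_time_ms, measured_nl, target_nl, gain=0.8):
    """Proportional correction of the solenoid-valve opening interval."""
    return open_time_ms * (1 + gain * (target_nl - measured_nl) / target_nl)

dt = 1e-4
t = np.arange(0.0, 0.02, dt)
pulse = 5000 * np.exp(-((t - 0.005) / 0.002) ** 2)   # synthetic flow pulse (nL/s)
vol = dispensed_volume(pulse, dt)
print("measured volume:", round(vol, 1), "nL")
print("next open time:", round(adjust_open_time(2.0, vol, 20.0), 3), "ms")
```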
CALIPSO: an interactive image analysis software package for desktop PACS workstations
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Huang, H. K.
1990-07-01
The purpose of this project is to develop a low-cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand-alone workstation where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort, however, on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include: geometric and densitometric volumes and ejection fraction calculation from radionuclide and cine-angiograms; Fourier analysis of cardiac wall motion; vascular stenosis measurement; color-coded parametric display of regional flow distribution from dynamic coronary angiograms; and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color-coded and parametric display methods to communicate quantitative data extracted from the images. 1. Rationale and objectives of the project: Developments of Picture Archiving and Communication Systems (PACS) in clinical environments allow physicians and radiologists to assess radiographic images directly through imaging workstations. This convenient access to the images is often limited by the number of workstations available, due in part to their high cost. There is also an increasing need for quantitative analysis of the images. During the past decade
IMPROVED BIOSAND FILTERS BY ENHANCED MONITORING AND DATA COLLECTION METHODS
The result of this project will be the development of a sensor that will automatically and accurately record BSF use, flow rate, and volume of water poured in. The quantitative volume and flow rate data output will be used to compare actual BSF frequency and usage patterns ...
La Methode Experimentale en Pedagogie (The Experimental Method in Pedagogy)
ERIC Educational Resources Information Center
Rouquette, Michel-Louis
1975-01-01
The pedagogue is caught between the qualitative and quantitative or regularized aspects of his work, a situation not automatically conducive to scientific study. The article refreshes the instructor on the elementary principles of experimentation: observation, systematization, elaboration of hypotheses, and strategies of comparison. (Text is in…
Shaimova, V A; Shaimov, T B; Shaimov, R B; Galin, A Yu; Goloshchapova, Zh A; Ryzhkov, P K; Fomin, A V
2018-01-01
To develop methods for evaluating the effectiveness of YAG-laser vitreolysis of vitreous floaters. The study included 144 patients (173 eyes) who had undergone YAG-laser vitreolysis and were under observation from 01.09.16 to 31.01.18. The patients were 34 to 86 years old (mean age 62.7±10.2 years); 28 (19.4%) patients were male and 116 (80.6%) female. All patients underwent standard and additional examination: ultrasonography (Accutome B-scan plus, U.S.A.), optical biometry (Lenstar 900, Haag-Streit, Switzerland), spectral optical coherence tomography using the RTVue XR Avanti scanner (Optovue, U.S.A.) in Enhanced HD Line, 3D Retina, 3D Widefield MCT, Cross Line and Angio Retina modes, and scanning laser ophthalmoscopy (SLO) using the Navilas 577s system. Laser vitreolysis was performed using the Ultra Q Reflex laser (Ellex, Australia). This paper presents methods for objective quantitative and qualitative assessment of the artifactual shadows of vitreous floaters with the RTVue XR Avanti spectral optical coherence tomography scanner, employing an algorithm for automatic detection of non-perfusion zones in the Angio Retina and HD Angio Retina modes, as well as foveal avascular zone (FAZ) measurement with the Angio Analytics® software. SLO performed with the Navilas 577s was used to visualize floaters and artifactual shadows in retinal surface layers prior to surgical treatment and after YAG-laser vitreolysis. The suggested methods for quantitative and qualitative assessment of the artifactual shadows of floaters in retinal layers are promising and may prove highly relevant for clinical monitoring of patients, optimization of treatment indications and evaluation of the effectiveness of YAG-laser vitreolysis. Further research on the effectiveness of laser vitreolysis in patients with vitreous floaters is necessary.
NASA Astrophysics Data System (ADS)
Köseoğlu, Denizcan; Belt, Simon T.; Smik, Lukas; Yao, Haoyi; Panieri, Giuliana; Knies, Jochen
2018-02-01
The discovery of IP25 as a qualitative biomarker proxy for Arctic sea ice and subsequent introduction of the so-called PIP25 index for semi-quantitative descriptions of sea ice conditions has significantly advanced our understanding of long-term paleo Arctic sea ice conditions over the past decade. We investigated the potential for classification tree (CT) models to provide a further approach to paleo Arctic sea ice reconstruction through analysis of a suite of highly branched isoprenoid (HBI) biomarkers in ca. 200 surface sediments from the Barents Sea. Four CT models constructed using different HBI assemblages revealed IP25 and an HBI triene as the most appropriate classifiers of sea ice conditions, achieving a >90% cross-validated classification rate. Additionally, lower model performance for locations in the Marginal Ice Zone (MIZ) highlighted difficulties in characterisation of this climatically-sensitive region. CT model classification and semi-quantitative PIP25-derived estimates of spring sea ice concentration (SpSIC) for four downcore records from the region were consistent, although agreement between proxy and satellite/observational records was weaker for a core from the west Svalbard margin, likely due to the highly variable sea ice conditions. The automatic selection of appropriate biomarkers for description of sea ice conditions, quantitative model assessment, and insensitivity to the c-factor used in the calculation of the PIP25 index are key attributes of the CT approach, and we provide an initial comparative assessment between these potentially complementary methods. The CT model should be capable of generating longer-term temporal shifts in sea ice conditions for the climatically sensitive Barents Sea.
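A classification-tree model over HBI biomarker concentrations with cross-validated accuracy, as used in the study, can be sketched with scikit-learn; the data and the two-class labels below are synthetic stand-ins, not the Barents Sea dataset:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n = 200
ip25 = rng.uniform(0, 10, n)        # IP25 concentration (arbitrary units)
triene = rng.uniform(0, 10, n)      # HBI triene concentration
X = np.column_stack([ip25, triene])
# Toy rule: more ice -> higher IP25, lower triene (plus noise)
y = np.where(ip25 - triene + rng.normal(0, 1.5, n) > 0, "seasonal-ice", "ice-free")

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
scores = cross_val_score(tree, X, y, cv=10)
print(f"cross-validated classification rate: {scores.mean():.1%}")
```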
Pectasides, Eirini; Rampias, Theodore; Kountourakis, Panteleimon; Sasaki, Clarence; Kowalski, Diane; Fountzilas, George; Zaramboukas, Thomas; Rimm, David; Burtness, Barbara; Psyrri, Amanda
2011-05-01
Epidermal growth factor receptor (EGFR) overexpression correlates with recurrence and with treatment resistance in head and neck squamous cell carcinoma (HNSCC). The aim of this study was to evaluate the relationship of EGFR gene copy number utilizing FISH and protein expression with automated quantitative analysis (AQUA) and to correlate those with patient outcome. A tissue microarray composed of 102 HNSCC treated with (chemo)radiation was constructed and analyzed for EGFR copy number by FISH (Vysis; Abbott Laboratories) and EGFR protein expression using AQUA analysis of EGFR staining scored on a scale of 0 to 255. We evaluated associations of EGFR FISH status and AQUA score with clinicopathologic parameters and survival prognosis. Eleven (17.2%) of 64 tumors with FISH results showed EGFR high polysomy and/or gene amplification (FISH positive). Protein levels assessed by AQUA in FISH-positive cases were significantly higher (P = 0.04) than in FISH-negative cases. Using the continuous AQUA scores for EGFR expression, AQUA and FISH showed significant agreement (Pearson's ρ = 0.353, P = 0.04). Patients with high tumor EGFR protein expression had inferior 5-year overall survival (27.7%) compared with those with low tumor EGFR expression (54%; P = 0.029). There was no significant association between EGFR FISH status and overall survival (P = 0.201). In the multivariate model, high tumor EGFR protein expression status remained an independent prognostic factor for overall survival (P = 0.047). EGFR protein content correlates with gene copy number if protein content is quantitated and automatically analyzed, as with AQUA. EGFR protein levels assessed by AQUA strongly predict for patient outcome in HNSCC, whereas EGFR FISH status does not provide prognostic information. ©2011 AACR.
Fiocco, Ugo; Stramare, Roberto; Martini, Veronica; Coran, Alessandro; Caso, Francesco; Costa, Luisa; Felicetti, Mara; Rizzo, Gaia; Tonietto, Matteo; Scanu, Anna; Oliviero, Francesca; Raffeiner, Bernd; Vezzù, Maristella; Lunardi, Francesca; Scarpa, Raffaele; Sacerdoti, David; Rubaltelli, Leopoldo; Punzi, Leonardo; Doria, Andrea; Grisan, Enrico
2017-02-01
To develop quantitative imaging biomarkers of synovial tissue perfusion by pixel-based contrast-enhanced ultrasound (CEUS), we studied the relationship between CEUS synovial vascular perfusion and the frequencies of pathogenic T helper (Th)-17 cells in psoriatic arthritis (PsA) joints. Eight consecutive patients with PsA were enrolled in this study. Gray-scale CEUS evaluation was performed on the same joint immediately after joint aspiration, automatically assessing perfusion data with a new pixel-based quantification approach and the gamma-variate model. The set of perfusion parameters derived from the time-intensity curve includes the maximum value (peak) of the signal-intensity curve, the blood volume index or area under the curve (BVI, AUC), and the contrast mean transit time (MTT). The frequencies of synovial fluid (SF) IL17A-F+CD161+IL23+CD4+ T-cell subsets were quantified directly ex vivo by fluorescence-activated cell sorting (FACS). In cross-sectional analyses, tested for multiple comparisons with a false discovery rate set at 10%, a common pattern of correlations was observed between the CEUS peak, AUC (BVI) and MTT parameters and the IL17A-F+IL23+, IL17A-F+CD161+ and IL17A-F+CD161+IL23+ CD4+ T-cell subsets, together with a lack of correlation of both peak and AUC values with CD4+ T and CD4+IL23+ T cells. The pixel-based CEUS assessment thus truly measures synovial inflammation and is a useful tool for developing quantitative imaging biomarkers for monitoring targeted therapeutics in PsA.
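A sketch of the pixel-level quantification: fit a gamma-variate to a time-intensity curve and derive the peak, AUC (BVI) and MTT. The curve is simulated, and the MTT definition used here (intensity-weighted mean time) is our assumption:

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    dt = np.clip(t - t0, 0.0, None)
    return A * dt**alpha * np.exp(-dt / beta)

t = np.linspace(0, 60, 300)
noisy = gamma_variate(t, 2.0, 5.0, 2.0, 4.0) \
        + np.random.default_rng(4).normal(0, 0.05, t.size)

popt, _ = curve_fit(gamma_variate, t, noisy, p0=[1.0, 4.0, 1.5, 3.0],
                    bounds=([0, 0, 0.1, 0.1], [10, 20, 10, 20]))
fit = gamma_variate(t, *popt)
peak = fit.max()
auc = float(np.sum((fit[1:] + fit[:-1]) / 2) * (t[1] - t[0]))   # BVI/AUC
mtt = float(np.sum(t * fit) / np.sum(fit))                      # intensity-weighted time
print(f"peak={peak:.2f}, AUC={auc:.1f}, MTT={mtt:.1f} s")
```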
Automatic Assessment of 3D Modeling Exams
ERIC Educational Resources Information Center
Sanna, A.; Lamberti, F.; Paravati, G.; Demartini, C.
2012-01-01
Computer-based assessment of exams provides teachers and students with two main benefits: fairness and effectiveness in the evaluation process. This paper proposes a fully automatic evaluation tool for the Graphic and Virtual Design (GVD) curriculum at the First School of Architecture of the Politecnico di Torino, Italy. In particular, the tool is…
Investigating Prompt Difficulty in an Automatically Scored Speaking Performance Assessment
ERIC Educational Resources Information Center
Cox, Troy L.
2013-01-01
Speaking assessments for second language learners have traditionally been expensive to administer because of the cost of rating the speech samples. To reduce the cost, many researchers are investigating the potential of using automatic speech recognition (ASR) as a means to score examinee responses to open-ended prompts. This study examined the…
Kim, Yong-Il; Im, Hyung-Jun; Paeng, Jin Chul; Lee, Jae Sung; Eo, Jae Seon; Kim, Dong Hyun; Kim, Euishin E; Kang, Keon Wook; Chung, June-Key; Lee, Dong Soo
2012-12-01
(18)F-FP-CIT positron emission tomography (PET) is an effective imaging method for dopamine transporters. In usual clinical practice, (18)F-FP-CIT PET is analyzed visually or quantified using manual delineation of a volume of interest (VOI) for the striatum. In this study, we suggested and validated two simple quantitative methods based on automatic VOI delineation using statistical probabilistic anatomical mapping (SPAM) and isocontour margin setting. Seventy-five (18)F-FP-CIT PET images acquired in routine clinical practice were used for this study. A study-specific image template was made and the subject images were normalized to the template. Afterwards, uptakes in the striatal regions and cerebellum were quantified using probabilistic VOIs based on SPAM, and a quantitative parameter, QSPAM, was calculated to simulate binding potential. Additionally, the functional volume of each striatal region and its uptake were measured in a VOI delineated automatically by isocontour margin setting, and an uptake-volume product (QUVP) was calculated for each striatal region. QSPAM and QUVP were compared with visual grading, and the influence of cerebral atrophy on the measurements was tested. Image analyses were successful in all cases. Both QSPAM and QUVP were significantly different according to visual grading (P < 0.001). The agreement of QUVP or QSPAM with visual grading was slight to fair for the caudate nucleus (κ = 0.421 and 0.291, respectively) and good to perfect for the putamen (κ = 0.663 and 0.607, respectively). Also, QSPAM and QUVP had a significant correlation with each other (P < 0.001). Cerebral atrophy made a significant difference in the QSPAM and QUVP of caudate nucleus regions with decreased (18)F-FP-CIT uptake. Simple quantitative measurements of QSPAM and QUVP showed acceptable agreement with visual grading. Although QSPAM in some groups may be influenced by cerebral atrophy, these simple methods are expected to be effective for the quantitative analysis of (18)F-FP-CIT PET in usual clinical practice.
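The two indices can be mimicked in a few lines; the formulas below are inferred from the text (a binding-potential-like ratio for QSPAM and an uptake-volume product for QUVP) and should be read as assumptions:

```python
import numpy as np

def q_spam(striatal_uptake, cerebellum_uptake):
    """Binding-potential-like ratio: specific over nonspecific uptake."""
    return (striatal_uptake - cerebellum_uptake) / cerebellum_uptake

def q_uvp(voi_mask, pet_volume, voxel_ml):
    """Uptake-volume product over an automatically delineated VOI."""
    functional_volume_ml = voi_mask.sum() * voxel_ml
    return functional_volume_ml * pet_volume[voi_mask].mean()

pet = np.random.default_rng(5).uniform(1.0, 2.0, (20, 20, 20))
putamen = np.zeros(pet.shape, dtype=bool)
putamen[5:9, 5:9, 5:9] = True
pet[putamen] += 3.0                         # tracer-avid striatal region
print("QSPAM:", round(q_spam(pet[putamen].mean(), 1.5), 2))
print("QUVP:", round(q_uvp(putamen, pet, 0.008), 2))
```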
Mehmood, Irfan; Ejaz, Naveed; Sajjad, Muhammad; Baik, Sung Wook
2013-10-01
The objective of the present study is to explore prioritization methods in diagnostic imaging modalities to automatically determine the contents of medical images. In this paper, we propose an efficient prioritization of brain MRI. First, the visual perception of radiologists is adapted to identify salient regions. This saliency information is then used as an automatic label for accurate segmentation of brain lesions to determine the scientific value of each image. The qualitative and quantitative results show that the rankings generated by the proposed method are closer to the rankings created by radiologists. Copyright © 2013 Elsevier Ltd. All rights reserved.
The surveillance state of behavioral automation
Schaefer, Andreas T; Claridge-Chang, Adam
2012-01-01
Genetics’ demand for increased throughput is driving automatization of behavior analysis far beyond experimental workhorses like circadian monitors and the operant conditioning box. However, the new automation is not just faster: it is also allowing new kinds of experiments, many of which erase the boundaries of the traditional neuroscience disciplines (psychology, ethology and physiology) while producing insight into problems that were otherwise opaque. Ironically, a central theme of current automatization is to improve observation of animals in increasingly naturalistic environments. This is not just a return to 19th century priorities: the new observational methods provide unprecedented quantitation of actions and ever-closer integration with experimentation. PMID:22119142
VASSAR: Value assessment of system architectures using rules
NASA Astrophysics Data System (ADS)
Selva, D.; Crawley, E. F.
A key step of the mission development process is the selection of a system architecture, i.e., the layout of the major high-level system design decisions. This step typically involves the identification of a set of candidate architectures and a cost-benefit analysis to compare them. Computational tools have been used in the past to bring rigor and consistency into this process. These tools can automatically generate architectures by enumerating different combinations of decisions and options. They can also evaluate these architectures by applying cost models and simplified performance models. Current performance models are purely quantitative tools that are best suited to evaluating the technical performance of a mission design. However, assessing the relative merit of a system architecture is a much more holistic task than evaluating the performance of a mission design. Indeed, the merit of a system architecture comes from satisfying a variety of stakeholder needs, some of which are easy to quantify, and some of which are harder to quantify (e.g., elegance, scientific value, political robustness, flexibility). Moreover, assessing the merit of a system architecture at these very early stages of design often requires dealing with a mix of quantitative and semi-qualitative data, and of objective and subjective information. Current computational tools are poorly suited for these purposes. In this paper, we propose a general methodology that can be used to assess the relative merit of several candidate system architectures in the presence of objective, subjective, quantitative, and qualitative stakeholder needs. The methodology is called VASSAR (Value ASsessment for System Architectures using Rules). The major underlying assumption of the VASSAR methodology is that the merit of a system architecture can be assessed by comparing the capabilities of the architecture with the stakeholder requirements; hence, for example, a candidate architecture that fully satisfies all critical stakeholder requirements is a good architecture. The assessment process is thus fundamentally seen as a pattern-matching process in which capabilities match requirements, which motivates the use of rule-based expert systems (RBES). This paper describes the VASSAR methodology and shows how it can be applied to a large complex space system, namely an Earth observation satellite system. Companion papers show its applicability to the NASA space communications and navigation program and the joint NOAA-DoD NPOESS program.
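A toy illustration of requirement-capability pattern matching in the RBES spirit (all rules, weights and capability names are invented):

```python
from dataclasses import dataclass, field

@dataclass
class Architecture:
    name: str
    capabilities: dict = field(default_factory=dict)

# Each rule: (stakeholder requirement, satisfaction predicate, weight)
RULES = [
    ("global coverage", lambda c: c.get("swath_km", 0) >= 1000, 0.4),
    ("daily revisit",   lambda c: c.get("revisit_h", 1e9) <= 24, 0.3),
    ("high resolution", lambda c: c.get("gsd_m", 1e9) <= 30, 0.3),
]

def merit(arch):
    """Merit = weighted fraction of satisfied stakeholder requirements."""
    return sum(w for _, satisfied, w in RULES if satisfied(arch.capabilities))

a = Architecture("A", {"swath_km": 1400, "revisit_h": 12, "gsd_m": 50})
b = Architecture("B", {"swath_km": 800, "revisit_h": 20, "gsd_m": 10})
print(round(merit(a), 2), round(merit(b), 2))   # 0.7 vs 0.6
```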
High speed quantitative digital microscopy
NASA Technical Reports Server (NTRS)
Castleman, K. R.; Price, K. H.; Eskenazi, R.; Ovadya, M. M.; Navon, M. A.
1984-01-01
Modern digital image processing hardware makes possible quantitative analysis of microscope images at high speed. This paper describes an application to automatic screening for cervical cancer. The system uses twelve MC6809 microprocessors arranged in a pipeline multiprocessor configuration. Each processor executes one part of the algorithm on each cell image as it passes through the pipeline. Each processor communicates with its upstream and downstream neighbors via shared two-port memory. Thus no time is devoted to input-output operations as such. This configuration is expected to be at least ten times faster than previous systems.
A specialized plug-in software module for computer-aided quantitative measurement of medical images.
Wang, Q; Zeng, Y J; Huo, P; Hu, J L; Zhang, J H
2003-12-01
This paper presents a specialized system for quantitative measurement of medical images. Using Visual C++, we developed computer-aided software based on Image-Pro Plus (IPP), a software development platform. When transferred to the hard disk of a computer by an MVPCI-V3A frame grabber, medical images can be automatically processed by our own IPP plug-in for immunohistochemical analysis, cytomorphological measurement and blood vessel segmentation. In 34 clinical studies, the system has shown high stability, reliability and ease of use.
System for Automatic Generation of Examination Papers in Discrete Mathematics
ERIC Educational Resources Information Center
Fridenfalk, Mikael
2013-01-01
A system was developed for automatic generation of problems and solutions for examinations in a university distance course in discrete mathematics and tested in a pilot experiment involving 200 students. Considering the success of such systems in the past, particularly including automatic assessment, it should not take long before such systems are…
Automatic limb identification and sleeping parameters assessment for pressure ulcer prevention.
Baran Pouyan, Maziyar; Birjandtalab, Javad; Nourani, Mehrdad; Matthew Pompeo, M D
2016-08-01
Pressure ulcers (PUs) are common among vulnerable patients such as elderly, bedridden and diabetic. PUs are very painful for patients and costly for hospitals and nursing homes. Assessment of sleeping parameters on at-risk limbs is critical for ulcer prevention. An effective assessment depends on automatic identification and tracking of at-risk limbs. An accurate limb identification can be used to analyze the pressure distribution and assess risk for each limb. In this paper, we propose a graph-based clustering approach to extract the body limbs from the pressure data collected by a commercial pressure map system. A robust signature-based technique is employed to automatically label each limb. Finally, an assessment technique is applied to evaluate the experienced stress by each limb over time. The experimental results indicate high performance and more than 94% average accuracy of the proposed approach. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Qin, Xulei; Cong, Zhibin; Halig, Luma V.; Fei, Baowei
2013-03-01
An automatic framework is proposed to segment the right ventricle on ultrasound images. This method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining a sparse matrix transform (SMT), a training model, and a localized region-based level set. First, the sparse matrix transform extracts the main motion regions of the myocardium as eigenimages by analyzing the statistical information of these images. Second, a training model of the right ventricle is registered to the extracted eigenimages in order to automatically detect the main location of the right ventricle and the corresponding transform relationship between the training model and the SMT-extracted results in the series. Third, the training model is then adjusted as an adapted initialization for the segmentation of each image in the series. Finally, based on the adapted initializations, a localized region-based level set algorithm is applied to segment both epicardial and endocardial boundaries of the right ventricle from the whole series. Experimental results from real subject data validated the performance of the proposed framework in segmenting the right ventricle from echocardiography. The mean Dice scores for the epicardial and endocardial boundaries are 89.1%+/-2.3% and 83.6%+/-7.3%, respectively. The automatic segmentation method based on sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.
Searching for biomarkers of CDKL5 disorder: early-onset visual impairment in CDKL5 mutant mice
Mazziotti, Raffaele; Lupori, Leonardo; Sagona, Giulia; Gennaro, Mariangela; Della Sala, Grazia; Putignano, Elena
2017-01-01
CDKL5 disorder is a neurodevelopmental disorder still without a cure. Murine models of CDKL5 disorder have recently been generated, raising the possibility of preclinical testing of treatments. However, unbiased, quantitative biomarkers of high translational value to monitor brain function are still missing. Moreover, the analysis of treatment is hindered by the challenge of repeatedly and non-invasively testing neuronal function. We analyzed the development of visual responses in a mouse model of CDKL5 disorder to introduce visually evoked responses as a quantitative method to assess cortical circuit function. Cortical visual responses were assessed in CDKL5 null male mice, heterozygous females, and their respective control wild-type littermates by repeated transcranial optical imaging from P27 until P32. No difference between wild-type and mutant mice was present at P25-P26, whereas defective responses appeared from P27-P28 both in heterozygous and homozygous CDKL5 mutant mice. These results were confirmed by visually evoked potentials (VEPs) recorded from the visual cortex of a different cohort. The previously imaged mice were also analyzed at P60-80 using VEPs, revealing a persistent reduction of response amplitude, reduced visual acuity and defective contrast function. The level of adult impairment was significantly correlated with the reduction in visual responses observed during development. A support vector machine showed that multi-dimensional visual assessment can be used to automatically classify mutant and wild-type mice with high reliability. Thus, monitoring visual responses represents a promising biomarker for preclinical and clinical studies on CDKL5 disorder. PMID:28369421
Automatic affective appraisal of sexual penetration stimuli in women with vaginismus or dyspareunia.
Huijding, Jorg; Borg, Charmaine; Weijmar-Schultz, Willibrord; de Jong, Peter J
2011-03-01
Current psychological views are that negative appraisals of sexual stimuli lie at the core of sexual dysfunctions. It is important to differentiate between deliberate appraisals and more automatic appraisals, as research has shown that the former are most relevant to controllable behaviors, and the latter are most relevant to reflexive behaviors. Accordingly, it can be hypothesized that in women with vaginismus, the persistent difficulty to allow vaginal entry is due to global negative automatic affective appraisals that trigger reflexive pelvic floor muscle contraction at the prospect of penetration. To test whether sexual penetration pictures elicited global negative automatic affective appraisals in women with vaginismus or dyspareunia and to examine whether deliberate appraisals and automatic appraisals differed between the two patient groups. Women with persistent vaginismus (N = 24), dyspareunia (N = 23), or no sexual complaints (N = 30) completed a pictorial Extrinsic Affective Simon Task (EAST), and then made a global affective assessment of the EAST stimuli using visual analogue scales (VAS). The EAST assessed global automatic affective appraisals of sexual penetration stimuli, while the VAS assessed global deliberate affective appraisals of these stimuli. Automatic affective appraisals of sexual penetration stimuli tended to be positive, independent of the presence of sexual complaints. Deliberate appraisals of the same stimuli were significantly more negative in the women with vaginismus than in the dyspareunia group and control group, while the latter two groups did not differ in their appraisals. Unexpectedly, deliberate appraisals seemed to be most important in vaginismus, whereas dyspareunia did not seem to implicate negative deliberate or automatic affective appraisals. These findings dispute the view that global automatic affect lies at the core of vaginismus and indicate that a useful element in therapeutic interventions may be the modification of deliberate global affective appraisals of sexual penetration (e.g., via counter-conditioning). © 2010 International Society for Sexual Medicine.
Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin
2008-11-01
We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.
NASA Astrophysics Data System (ADS)
Adame, Isabel M.; van der Geest, Rob J.; Wasserman, Bruce A.; Mohamed, Mona; Reiber, Johan H. C.; Lelieveldt, Boudewijn P. F.
2004-05-01
Composition and structure of atherosclerotic plaque is a primary focus of cardiovascular research. In vivo MRI provides a means to non-invasively image and assess the morphological features of atherosclerotic and normal human carotid arteries. To quantitatively assess the vulnerability and the type of plaque, the contours of the lumen, the outer boundary of the vessel wall, and the plaque components need to be traced. To achieve this goal, we have developed an automated contour detection technique, which consists of three consecutive steps: firstly, the outer boundary of the vessel wall is detected by means of an ellipse-fitting procedure in order to obtain smoothed shapes; secondly, the lumen is segmented using fuzzy clustering, where the region to be classified is that within the outer vessel wall boundary obtained from the previous step; finally, for plaque detection we follow the same fuzzy clustering approach as for lumen segmentation. However, plaque is more difficult to segment, as the pixel gray value can differ considerably from one region to another even when it corresponds to the same type of tissue, which makes further processing necessary. All three steps may combine information from different sequences (PD-, T2-, T1-weighted, pre- and post-contrast) to improve the contour detection. The algorithm has been validated in vivo on 58 high-resolution PD- and T1-weighted MR images (19 patients). The results demonstrate excellent correspondence between automatic and manual area measurements: lumen (r=0.94) and outer wall (r=0.92), and acceptable correspondence for fibrous cap thickness (r=0.76).
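The lumen and plaque steps rely on fuzzy clustering; the sketch below is a minimal fuzzy c-means loop over pixel intensities, illustrating the general membership and center updates rather than the authors' exact implementation (cluster count and fuzzifier are the usual defaults, assumed here):

```python
import numpy as np

def fuzzy_cmeans(x, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy c-means on a 1-D array of pixel intensities.
    Returns cluster centers and the membership matrix U (n_samples x n_clusters)."""
    rng = np.random.default_rng(seed)
    u = rng.random((x.size, n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))           # closer center -> higher membership
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Toy carotid cross-section: dark lumen pixels vs. brighter wall pixels
pixels = np.concatenate([np.random.normal(40, 5, 500),     # lumen
                         np.random.normal(120, 10, 500)])  # wall
centers, u = fuzzy_cmeans(pixels)
lumen_cluster = int(np.argmin(centers))
print("cluster centers:", centers.round(1))
```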
Colorimetric calibration of wound photography with off-the-shelf devices
NASA Astrophysics Data System (ADS)
Bala, Subhankar; Sirazitdinova, Ekaterina; Deserno, Thomas M.
2017-03-01
Digital cameras are now widely used for photographic documentation in the medical sciences. However, the color reproducibility of the same object suffers under different illuminations and lighting conditions. This variation in color representation is problematic when the images are used for segmentation and measurements based on color thresholds. In this paper, motivated by photographic follow-up of chronic wounds, we assess the impact of (i) gamma correction, (ii) white balancing, (iii) background unification, and (iv) reference card-based color correction. Automatic gamma correction, a nonlinear color transform, and white balancing are applied to support the calibration procedure. For unevenly illuminated images, non-uniform illumination correction is applied. In the last step, we apply colorimetric calibration using a reference color card of 24 patches with known colors. A lattice detection algorithm is used for locating the card, and the least squares algorithm is applied for affine color calibration in the RGB model. We tested the algorithm on images with seven different types of illumination, with and without flash, using three different off-the-shelf cameras including smartphones. We analyzed the spread of the resulting color values of a selected color patch before and after applying the calibration, and additionally checked the individual contribution of each step of the calibration process. Using all steps, we achieved up to an 81% reduction in the standard deviation of color patch values in the resulting images compared to the original images. This supports manual as well as automatic quantitative wound assessment with off-the-shelf devices.
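The card-based step amounts to a least-squares affine fit in RGB between the measured and the known patch colors. A minimal sketch under assumed 24-patch data (all names and values hypothetical):

```python
import numpy as np

def fit_affine_color_transform(measured_rgb, reference_rgb):
    """Fit an affine map (3x3 matrix plus offset) so that measured patch
    colors best match the known reference colors in the least-squares sense."""
    # Augment with a column of ones so the offset is estimated jointly.
    A = np.hstack([measured_rgb, np.ones((measured_rgb.shape[0], 1))])  # (24, 4)
    coeffs, *_ = np.linalg.lstsq(A, reference_rgb, rcond=None)          # (4, 3)
    return coeffs

def apply_affine_color_transform(image_rgb, coeffs):
    h, w, _ = image_rgb.shape
    flat = image_rgb.reshape(-1, 3).astype(float)
    flat = np.hstack([flat, np.ones((flat.shape[0], 1))]) @ coeffs
    return np.clip(flat, 0, 255).reshape(h, w, 3)

# 24 measured patch colors vs. their known reference values (toy data)
measured = np.random.uniform(0, 255, (24, 3))
reference = measured @ np.diag([1.1, 0.9, 1.0]) + np.array([5.0, -3.0, 2.0])
M = fit_affine_color_transform(measured, reference)
corrected = apply_affine_color_transform(np.random.uniform(0, 255, (4, 4, 3)), M)
```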
Atlas-based segmentation of 3D cerebral structures with competitive level sets and fuzzy control.
Ciofolo, Cybèle; Barillot, Christian
2009-06-01
We propose a novel approach for the simultaneous segmentation of multiple structures with competitive level sets driven by fuzzy control. To this end, several contours evolve simultaneously toward previously defined anatomical targets. A fuzzy decision system combines the a priori knowledge provided by an anatomical atlas with the intensity distribution of the image and the relative position of the contours. This combination automatically determines the directional term of the evolution equation of each level set. This leads to a local expansion or contraction of the contours, in order to match the boundaries of their respective targets. Two applications are presented: the segmentation of the brain hemispheres and the cerebellum, and the segmentation of deep internal structures. Experimental results on real magnetic resonance (MR) images are presented, quantitatively assessed and discussed.
NASA Astrophysics Data System (ADS)
Hadyan, Fadhlil; Shaufiah; Arif Bijaksana, Moch.
2017-01-01
Automatic summarization is a system that can help someone take in the core information of a long text instantly by summarizing it automatically. Many summarization systems have already been developed, but problems remain. This final project proposes a summarization method using a document index graph. The method adapts the PageRank and HITS formulas, originally used to assess web pages, to score the words and sentences of a text document. The expected outcome is a system that can summarize a single document by using a document index graph with TextRank and HITS to improve the quality of the automatically generated summary.
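The scoring idea can be illustrated with a generic TextRank power iteration over a sentence-similarity graph; the sketch below shows the PageRank-style update, not the thesis's exact document-index-graph construction (the damping factor and toy graph are assumptions):

```python
import numpy as np

def textrank_scores(similarity, d=0.85, n_iter=100):
    """Power iteration of the PageRank/TextRank formula on a
    square, nonnegative sentence-similarity matrix."""
    n = similarity.shape[0]
    W = similarity.astype(float).copy()
    np.fill_diagonal(W, 0.0)
    col_sums = W.sum(axis=0)
    col_sums[col_sums == 0] = 1.0
    W = W / col_sums                      # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        r = (1 - d) / n + d * (W @ r)     # rank flows along similarity edges
    return r

# Toy 4-sentence similarity graph; the top-ranked sentences form the summary
sim = np.array([[0, .3, .1, 0],
                [.3, 0, .4, .2],
                [.1, .4, 0, .5],
                [0, .2, .5, 0]])
scores = textrank_scores(sim)
summary_order = np.argsort(scores)[::-1]
print(summary_order)
```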
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Wen-li; Wang, Hong-rui; Wang, Cheng
Due to rapid urbanization, waterlogging induced by torrential rainfall has become a global concern and a potential risk affecting the safety of urban inhabitants. Widespread waterlogging disasters have occurred almost annually in the urban area of Beijing, the capital of China. Based on a self-organizing map (SOM) artificial neural network (ANN), a graded waterlogging risk assessment was conducted on 56 low-lying points in Beijing, China. Social risk factors, such as gross domestic product (GDP), population density, and traffic congestion, were utilized as input datasets in this study. The results indicate that SOM-ANN is suitable for automatically and quantitatively assessing risks associated with waterlogging. The greatest advantage of SOM-ANN in the assessment of waterlogging risk is that a priori knowledge about classification categories and assessment indicator weights is not needed. As a result, SOM-ANN can effectively overcome interference from subjective factors, producing classification results that are more objective and accurate. In this paper, the risk level of waterlogging in Beijing was divided into five grades; the points that were assigned risk grades of IV or V were located mainly in the districts of Chaoyang, Haidian, Xicheng, and Dongcheng.
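As an illustration of the underlying technique, a minimal SOM training loop over the three social risk factors might look like the following sketch (grid size, learning schedule, and random data are assumptions, not values from the study):

```python
import numpy as np

def train_som(data, grid=(5, 5), n_iter=1000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal SOM: each grid node holds a weight vector; the best-matching
    node and its neighbours are pulled toward each input sample."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    yy, xx = np.mgrid[0:h, 0:w]
    for t in range(n_iter):
        lr = lr0 * (1 - t / n_iter)                 # decaying learning rate
        sigma = sigma0 * (1 - t / n_iter) + 1e-3    # shrinking neighbourhood
        x = data[rng.integers(len(data))]
        dists = np.linalg.norm(weights - x, axis=2)
        bi, bj = np.unravel_index(np.argmin(dists), (h, w))  # best-matching unit
        g = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sigma ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights

# Toy risk factors for 56 low-lying points: GDP, population density, congestion
points = np.random.random((56, 3))
som = train_som(points)   # nodes cluster points into graded risk groups
```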
Gandolla, Marta; Molteni, Franco; Ward, Nick S; Guanziroli, Eleonora; Ferrigno, Giancarlo; Pedrocchi, Alessandra
2015-11-01
The foreseen outcome of a rehabilitation treatment is a stable improvement in functional outcomes, which can be longitudinally assessed through multiple measures to help clinicians in functional evaluation. In this study, we propose an automatic, comprehensive method of combining multiple measures in order to assess a functional improvement. As a test bed, a functional electrical stimulation based treatment for foot drop correction performed with chronic post-stroke participants is presented. Patients were assessed on five relevant outcome measures before and after the intervention, and at a follow-up time point. A novel algorithm based on the variables' minimum detectable change is proposed and implemented in custom-made software, combining the outcome measures to obtain a unique parameter: the capacity score. The difference between capacity scores at different time points is thresholded to obtain an improvement evaluation. Ten clinicians evaluated patients on the Improvement Clinical Global Impression scale. Eleven patients underwent the treatment, and five achieved a stable functional improvement, as assessed by the proposed algorithm. A statistically significant agreement between intra-clinician and algorithm-clinician evaluations was demonstrated. The proposed method evaluates functional improvement on a single-subject yes/no basis by merging different measures (e.g., kinematic, muscular) and is validated against clinical evaluation.
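The abstract does not give the algorithm in full; the following hypothetical sketch only illustrates the spirit of a minimum-detectable-change based capacity score, with all values and the decision threshold invented for illustration:

```python
import numpy as np

def capacity_score(pre, post, mdc):
    """Count how many outcome measures improved by more than their
    minimum detectable change (MDC). One entry per outcome measure;
    'improved' is taken here to mean an increase (an assumption)."""
    pre, post, mdc = map(np.asarray, (pre, post, mdc))
    return int(np.sum((post - pre) > mdc))

def has_functional_improvement(pre, post, mdc, threshold=3):
    """Yes/no improvement: capacity score must reach a chosen threshold."""
    return capacity_score(pre, post, mdc) >= threshold

# Five outcome measures for one patient (hypothetical values)
pre = [10.0, 0.8, 35.0, 4.0, 60.0]
post = [14.0, 1.1, 36.0, 6.5, 72.0]
mdc = [3.0, 0.2, 2.0, 1.5, 8.0]
print(has_functional_improvement(pre, post, mdc))
```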
NASA Astrophysics Data System (ADS)
Davis, Paul B.; Abidi, Mongi A.
1989-05-01
PET is the only imaging modality that provides doctors with early analytic and quantitative biochemical assessment and precise localization of pathology. In PET images, boundary information as well as local pixel intensity are both crucial for manual and/or automated feature tracing, extraction, and identification. Unfortunately, present PET technology does not provide the necessary image quality from which such precise analytic and quantitative measurements can be made. PET images suffer from significantly high levels of radial noise present in the form of streaks caused by the inexactness of the models used in image reconstruction. In this paper, our objective is to model PET noise and remove it without altering dominant features in the image. The ultimate goal is to enhance these dominant features to allow automatic computer interpretation and classification of PET images by developing techniques that take into consideration PET signal characteristics, data collection, and data reconstruction. We have modeled the noise streaks in PET images in both rectangular and polar representations and have shown both analytically and through computer simulation that the noise exhibits consistent mapping patterns. A class of filters was designed and applied successfully. Visual inspection of the filtered images shows clear enhancement over the original images.
Giger, Maryellen L.; Chan, Heang-Ping; Boone, John
2008-01-01
The roles of physicists in medical imaging have expanded over the years, from the study of imaging systems (sources and detectors) and dose to the assessment of image quality and perception, the development of image processing techniques, and the development of image analysis methods to assist in detection and diagnosis. The latter is a natural extension of medical physicists’ goals in developing imaging techniques to help physicians acquire diagnostic information and improve clinical decisions. Studies indicate that radiologists do not detect all abnormalities on images that are visible on retrospective review, and they do not always correctly characterize abnormalities that are found. Since the 1950s, the potential use of computers had been considered for analysis of radiographic abnormalities. In the mid-1980s, however, medical physicists and radiologists began major research efforts for computer-aided detection or computer-aided diagnosis (CAD), that is, using the computer output as an aid to radiologists—as opposed to a completely automatic computer interpretation—focusing initially on methods for the detection of lesions on chest radiographs and mammograms. Since then, extensive investigations of computerized image analysis for detection or diagnosis of abnormalities in a variety of 2D and 3D medical images have been conducted. The growth of CAD over the past 20 years has been tremendous—from the early days of time-consuming film digitization and CPU-intensive computations on a limited number of cases to its current status in which developed CAD approaches are evaluated rigorously on large clinically relevant databases. CAD research by medical physicists includes many aspects—collecting relevant normal and pathological cases; developing computer algorithms appropriate for the medical interpretation task including those for segmentation, feature extraction, and classifier design; developing methodology for assessing CAD performance; validating the algorithms using appropriate cases to measure performance and robustness; conducting observer studies with which to evaluate radiologists in the diagnostic task without and with the use of the computer aid; and ultimately assessing performance with a clinical trial. Medical physicists also have an important role in quantitative imaging, by validating the quantitative integrity of scanners and developing imaging techniques, and image analysis tools that extract quantitative data in a more accurate and automated fashion. As imaging systems become more complex and the need for better quantitative information from images grows, the future includes the combined research efforts from physicists working in CAD with those working on quantitative imaging systems to readily yield information on morphology, function, molecular structure, and more—from animal imaging research to clinical patient care. A historical review of CAD and a discussion of challenges for the future are presented here, along with the extension to quantitative image analysis. PMID:19175137
Automatic Assessment of Complex Task Performance in Games and Simulations. CRESST Report 775
ERIC Educational Resources Information Center
Iseli, Markus R.; Koenig, Alan D.; Lee, John J.; Wainess, Richard
2010-01-01
Assessment of complex task performance is crucial to evaluating personnel in critical job functions such as Navy damage control operations aboard ships. Games and simulations can be instrumental in this process, as they can present a broad range of complex scenarios without involving harm to people or property. However, "automatic"…
ERIC Educational Resources Information Center
Arendasy, Martin E.; Sommer, Markus
2012-01-01
The use of new test administration technologies such as computerized adaptive testing in high-stakes educational and occupational assessments demands large item pools. Classic item construction processes and previous approaches to automatic item generation faced the problems of a considerable loss of items after the item calibration phase. In this…
ERIC Educational Resources Information Center
Hill, Lindsay; Trusler, Karen; Furniss, Frederick; Lancioni, Giulio
2012-01-01
Background: The aim of the present study was to evaluate the effects of the sensory equipment provided in a multi-sensory environment (MSE) and the level of social contact provided on levels of stereotyped behaviours assessed as being maintained by automatic reinforcement. Method: Stereotyped and engaged behaviours of two young people with severe…
Training and subjective workload in a category search task
NASA Technical Reports Server (NTRS)
Vidulich, Michael A.; Pandit, Parimal
1986-01-01
This study examined automaticity as a means by which training influences mental workload. Two groups were trained in a category search task. One group received a training paradigm designed to promote the development of automaticity; the other group received a training paradigm designed to prohibit it. Resultant performance data showed the expected improvement as a result of the development of automaticity. Subjective workload assessments mirrored the performance results in most respects. The results supported the position that subjective mental workload assessments may be sensitive to the effect of training when it produces a lower level of cognitive load.
ERIC Educational Resources Information Center
Lorié, William A.
2013-01-01
A reverse engineering approach to automatic item generation (AIG) was applied to a figure-based publicly released test item from the Organisation for Economic Cooperation and Development (OECD) Programme for International Student Assessment (PISA) mathematical literacy cognitive instrument as part of a proof of concept. The author created an item…
Automatic detection of articulation disorders in children with cleft lip and palate.
Maier, Andreas; Hönig, Florian; Bocklet, Tobias; Nöth, Elmar; Stelzle, Florian; Nkenke, Emeka; Schuster, Maria
2009-11-01
Speech of children with cleft lip and palate (CLP) is sometimes still disordered even after adequate surgical and nonsurgical therapies. Such speech shows complex articulation disorders, which are usually assessed perceptually, consuming time and manpower. Hence, there is a need for an easy-to-apply and reliable automatic method. To create a reference for an automatic system, speech data of 58 children with CLP were assessed perceptually by experienced speech therapists for characteristic phonetic disorders at the phoneme level. The first part of the article aims to detect such characteristics by a semiautomatic procedure and the second to evaluate a fully automatic, thus simpler, procedure. The methods are based on a combination of speech processing algorithms. The semiautomatic method achieves moderate to good agreement (kappa approximately 0.6) for the detection of all phonetic disorders. On a speaker level, significant correlations of 0.89 between the perceptual evaluation and the automatic system are obtained. The fully automatic system yields a correlation on the speaker level of 0.81 to the perceptual evaluation. This correlation is in the range of the inter-rater correlation of the listeners. The automatic speech evaluation is able to detect phonetic disorders at an expert's level without any additional human postprocessing.
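The agreement statistic reported (kappa approximately 0.6) is presumably Cohen's kappa, the standard chance-corrected agreement measure; for reference, a minimal computation for two raters' labels:

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters' categorical labels."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                        # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c)  # agreement expected by chance
             for c in np.union1d(r1, r2))
    return (po - pe) / (1 - pe)

# Toy phoneme-level ratings (1 = disordered, 0 = typical)
rater1 = np.array([1, 1, 0, 1, 0, 0, 1, 0])
rater2 = np.array([1, 0, 0, 1, 0, 1, 1, 0])
print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")
```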
Automatic short axis orientation of the left ventricle in 3D ultrasound recordings
NASA Astrophysics Data System (ADS)
Pedrosa, João.; Heyde, Brecht; Heeren, Laurens; Engvall, Jan; Zamorano, Jose; Papachristidis, Alexandros; Edvardsen, Thor; Claus, Piet; D'hooge, Jan
2016-04-01
The recent advent of three-dimensional echocardiography has led to an increased interest from the scientific community in left ventricle segmentation frameworks for cardiac volume and function assessment. An automatic orientation of the segmented left ventricular mesh is an important step to obtain a point-to-point correspondence between the mesh and the cardiac anatomy. Furthermore, this would allow for an automatic division of the left ventricle into the standard 17 segments and, thus, fully automatic per-segment analysis, e.g. regional strain assessment. In this work, a method for fully automatic short axis orientation of the segmented left ventricle is presented. The proposed framework aims at detecting the inferior right ventricular insertion point. 211 three-dimensional echocardiographic images were used to validate this framework by comparison to manual annotation of the inferior right ventricular insertion point. A mean unsigned error of 8.05° +/- 18.50° was found, whereas the mean signed error was 1.09°. Large deviations between the manual and automatic annotations (> 30°) only occurred in 3.79% of cases. The average computation time was 666 ms in a non-optimized MATLAB environment, which potentiates real-time application. In conclusion, a successful automatic real-time method for orientation of the segmented left ventricle is proposed.
Using a normalized 3D model for automatic clinical brain quantitative analysis and evaluation
NASA Astrophysics Data System (ADS)
Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping
2003-05-01
Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain, and has been broadly used in diagnosing brain disorders by clinical quantitative analysis for many years. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of quantitative analysis thus relies on that of the subjectively selected ROIs. Therefore, standardizing the analysis procedure is fundamental and important in improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D brain model to achieve precise quantitative analysis. In the normalization process, the mutual information registration technique was applied for realigning functional medical images to standard structural medical images. Then, the standard 3D brain model, which shows well-defined brain regions, was used in the objective clinical analysis, replacing the manual ROIs. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in a practical clinical evaluation. The results show that the quantitative analysis outcomes obtained from this automated method agree with the clinical diagnosis evaluation score, with less than 3% error on average. In summary, the method obtains precise VOI information automatically from the well-defined standard 3D brain model, sparing the traditional procedure of manually drawing ROIs slice by slice on structural medical images. The method thus not only provides precise analysis results, but also improves throughput for large numbers of clinical images.
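Mutual information registration maximizes the statistical dependence between the two images' intensity distributions; the sketch below computes the MI similarity measure from a joint histogram (the bin count is an arbitrary choice):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two images, estimated from their joint
    histogram; used as the similarity measure to maximize during registration."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# MI is maximal when the functional image is well aligned to the structural one
a = np.random.random((64, 64))
print(mutual_information(a, a) > mutual_information(a, np.roll(a, 10, axis=0)))
```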
Automatic humidification system to support the assessment of food drying processes
NASA Astrophysics Data System (ADS)
Ortiz Hernández, B. D.; Carreño Olejua, A. R.; Castellanos Olarte, J. M.
2016-07-01
This work describes the main features of an automatic humidification system that provides drying air matching the environmental conditions of different climate zones. This conditioned air is then used to assess the drying process of different agro-industrial products at the Automation and Control for Agro-industrial Processes Laboratory of the Pontifical Bolivarian University of Bucaramanga, Colombia. The automatic system allows creating and improving control strategies to supply drying air under specified conditions of temperature and humidity. The development of automatic routines to control and acquire real-time data was made possible by the use of robust control systems and suitable instrumentation. The signals are read and directed to a controller memory, where they are scaled and transferred to a memory unit. Using the IP address, it is possible to access the data to perform supervision tasks. One important characteristic of this automatic system is the Dynamic Data Exchange (DDE) server, which allows direct communication between the control unit and the computer used to build experimental curves.
Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images.
Pereira, Sergio; Pinto, Adriano; Alves, Victor; Silva, Carlos A
2016-05-01
Among brain tumors, gliomas are the most common and aggressive, leading to a very short life expectancy in their highest grade. Thus, treatment planning is a key stage to improve the quality of life of oncological patients. Magnetic resonance imaging (MRI) is a widely used imaging technique to assess these tumors, but the large amount of data produced by MRI prevents manual segmentation in a reasonable time, limiting the use of precise quantitative measurements in clinical practice. So, automatic and reliable segmentation methods are required; however, the large spatial and structural variability among brain tumors makes automatic segmentation a challenging problem. In this paper, we propose an automatic segmentation method based on Convolutional Neural Networks (CNN), exploring small 3×3 kernels. The use of small kernels allows designing a deeper architecture, besides having a positive effect against overfitting, given the smaller number of weights in the network. We also investigated the use of intensity normalization as a pre-processing step, which, though not common in CNN-based segmentation methods, proved together with data augmentation to be very effective for brain tumor segmentation in MRI images. Our proposal was validated in the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013), obtaining simultaneously the first position for the complete, core, and enhancing regions in Dice Similarity Coefficient metric (0.88, 0.83, 0.77) for the Challenge data set. Also, it obtained the overall first position by the online evaluation platform. We also participated in the on-site BRATS 2015 Challenge using the same model, obtaining the second place, with Dice Similarity Coefficient metric of 0.78, 0.65, and 0.75 for the complete, core, and enhancing regions, respectively.
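The parameter saving from small kernels is easy to verify: two stacked 3×3 convolutions cover the same 5×5 receptive field as a single 5×5 layer but with fewer weights, plus an extra nonlinearity in between. A quick arithmetic sketch (channel width is an arbitrary example, not from the paper):

```python
def conv_weights(kernel_sizes, channels):
    """Weight count of a stack of square convolutions at constant channel width."""
    return sum(k * k * channels * channels for k in kernel_sizes)

c = 64
print(conv_weights([3, 3], c))  # two 3x3 layers: 2 * 9 * c^2 = 73728
print(conv_weights([5], c))     # one 5x5 layer, same receptive field: 25 * c^2 = 102400
```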
Heussel, C P; Herth, F J F; Kappes, J; Hantusch, R; Hartlieb, S; Weinheimer, O; Kauczor, H U; Eberhardt, R
2009-10-01
Characterisation and quantification of emphysema are necessary for planning of local treatment and monitoring. Sensitive, easy-to-measure, and stable parameters have to be established, and their relation to well-known pulmonary function testing (PFT) has to be investigated. A retrospective analysis of 221 nonenhanced thin-section MDCT scans with corresponding PFT was carried out, with subgroup analyses in 102 COPD stage III+IV, 44 COPD stage 0, and 33 interstitial lung disease (ILD) investigations. The in-house YACTA software was used for automatic quantification of lung and emphysema volume [l], emphysema index, mean lung density (MLD [HU]) and 15th percentile [HU]. CT-derived lung volume is significantly smaller in ILD (3.8) and larger in COPD (7.2) than in controls (5.9, p < 0.0001). Emphysema volume and index are significantly higher in COPD than in controls (3.2 vs. 0.5, p < 0.0001; 45% vs. 8%, p < 0.0001). MLD and 15th percentile are significantly smaller in COPD (-877/-985, p < 0.0001) and significantly higher in ILD (-777, p < 0.0006/-914, p < 0.0001) than in controls (-829/-935). A relevant proportion of COPD patients apparently do not suffer from emphysema, while controls who do not fulfil PFT criteria for COPD may also demonstrate CT features of emphysema. Automatic quantification of thin-section CT delivers convincing parameters and ranges that are able to differentiate among emphysema, control and ILD. An emphysema index below 20%, an MLD higher than -850, and a 15th percentile higher than -950 might be regarded as normal (thin-section, nonenhanced, B40, YACTA). These ranges might be helpful in the judgement of individual measures.
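The reported indices can be computed directly from the HU histogram of the segmented lung. A minimal sketch follows; the -950 HU low-attenuation threshold for the emphysema index is a commonly used value, not necessarily the one implemented in YACTA:

```python
import numpy as np

def emphysema_metrics(lung_hu, threshold=-950):
    """Standard densitometric indices from the HU values of lung voxels:
    emphysema index (percentage below a low-attenuation threshold),
    mean lung density (MLD), and the 15th percentile of the histogram."""
    lung_hu = np.asarray(lung_hu, dtype=float)
    emphysema_index = 100.0 * np.mean(lung_hu < threshold)
    mld = lung_hu.mean()
    perc15 = np.percentile(lung_hu, 15)
    return emphysema_index, mld, perc15

# Toy lung voxel sample; real values come from a segmented CT lung mask
voxels = np.random.normal(-850, 80, 100_000)
ei, mld, p15 = emphysema_metrics(voxels)
print(f"EI = {ei:.1f}%, MLD = {mld:.0f} HU, 15th percentile = {p15:.0f} HU")
```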
Condensed Representation of Sentences in Graphic Displays of Text Structures.
ERIC Educational Resources Information Center
Craven, Timothy C.
1990-01-01
Discusses ways in which sentences may be represented in a condensed form in graphic displays of a sentence dependency structure. A prototype of a text structure management system, TEXNET, is described; a quantitative evaluation of automatic abbreviation schemes is presented; full-text compression is discussed; and additional research is suggested.…
ON THE QUANTITATIVE EVALUATION OF THE TERMINOLOGY OF A VOCABULARY.
ERIC Educational Resources Information Center
KRAVETS, L.G.
The creation of an industrial system of machine translation with automatic indexing of the translated materials presumes the development of dictionaries which provide for the identification of key words and word combinations, followed by their translation into the descriptors of the search language. Three signs which show that a given word is a…
Sánchez Gómez, Serafín; Ostos, Elisa María Cabot; Solano, Juan Manuel Maza; Salado, Tomás Francisco Herrero
2013-05-06
We evaluated a newly designed electronic portfolio (e-Portfolio) that provided quantitative evaluation of surgical skills. Medical students at the University of Seville used the e-Portfolio on a voluntary basis for evaluation of their performance in undergraduate surgical subjects. Our new web-based e-Portfolio was designed to evaluate surgical practical knowledge and skills targets. Students recorded each activity on a form, attached evidence, and added their reflections. Students self-assessed their practical knowledge using qualitative criteria (yes/no), and graded their skills according to complexity (basic/advanced) and participation (observer/assistant/independent). A numerical value was assigned to each activity, and the values of all activities were summed to obtain the total score. The application automatically displayed quantitative feedback. We performed qualitative evaluation of the perceived usefulness of the e-Portfolio and quantitative evaluation of the targets achieved. Thirty-seven of 112 students (33%) used the e-Portfolio, of whom 87% reported that they understood the methodology of the portfolio. All students reported an improved understanding of their learning objectives resulting from the numerical visualization of progress, all students reported that the quantitative feedback encouraged their learning, and 79% of students felt that their teachers were more available because they were using the e-Portfolio. Only 51.3% of students reported that the reflective aspects of learning were useful. Individual students achieved a maximum of 65% of the total targets and 87% of the skills targets. The mean total score was 345 ± 38 points. For basic skills, 92% of students achieved the maximum score for participation as an independent operator, and all achieved the maximum scores for participation as an observer and assistant. For complex skills, 62% of students achieved the maximum score for participation as an independent operator, and 98% achieved the maximum scores for participation as an observer or assistant. Medical students reported that use of an electronic portfolio that provided quantitative feedback on their progress was useful when the number and complexity of targets were appropriate, but not when the portfolio offered only formative evaluations based on reflection. Students felt that use of the e-Portfolio guided their learning process by indicating knowledge gaps to themselves and teachers.
Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina
2016-05-01
Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.
Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection
Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin
2014-01-01
Purpose To enable fast reconstruction of quantitative susceptibility maps with Total Variation penalty and automatic regularization parameter selection. Methods ℓ1-regularized susceptibility mapping is accelerated by variable-splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation, compared to 22 minutes using the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. The proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than the nonlinear CG approach. The utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
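The closed-form shrinkage step that makes each variable-splitting iteration cheap is ordinary soft thresholding, the proximal operator of the ℓ1 norm. A minimal sketch of that one step (not the full dipole-inversion pipeline):

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft thresholding: shrinks values toward zero by lam and zeroes
    anything smaller, the closed-form l1 proximal step used per iteration."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Small (noise-dominated) values go to zero; strong gradients survive
print(soft_threshold(np.array([-2.0, -0.3, 0.1, 1.5]), lam=0.5))
# -> [-1.5  0.   0.   1. ]
```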
Courtney, Jane; Woods, Elena; Scholz, Dimitri; Hall, William W; Gautier, Virginie W
2015-01-01
We introduce here MATtrack, an open source MATLAB-based computational platform developed to process multi-Tiff files produced by a photo-conversion time lapse protocol for live cell fluorescent microscopy. MATtrack automatically performs a series of steps required for image processing, including extraction and import of numerical values from Multi-Tiff files, red/green image classification using gating parameters, noise filtering, background extraction, contrast stretching and temporal smoothing. MATtrack also integrates a series of algorithms for quantitative image analysis enabling the construction of mean and standard deviation images, clustering and classification of subcellular regions and injection point approximation. In addition, MATtrack features a simple user interface, which enables monitoring of Fluorescent Signal Intensity in multiple Regions of Interest, over time. The latter encapsulates a region growing method to automatically delineate the contours of Regions of Interest selected by the user, and performs background and regional Average Fluorescence Tracking, and automatic plotting. Finally, MATtrack computes convenient visualization and exploration tools including a migration map, which provides an overview of the protein intracellular trajectories and accumulation areas. In conclusion, MATtrack is an open source MATLAB-based software package tailored to facilitate the analysis and visualization of large data files derived from real-time live cell fluorescent microscopy using photoconvertible proteins. It is flexible, user friendly, compatible with Windows, Mac, and Linux, and a wide range of data acquisition software. MATtrack is freely available for download at eleceng.dit.ie/courtney/MATtrack.zip.
Ou, Yan; Niu, Xiao-lin; Ren, Fu-xian
2010-09-01
The objective of this study was to investigate the molecular basis of the inferior nodal extension (INE) in the atrioventricular junctional area that accounts for arrhythmias. The INE was separated from the adult rat heart by laser capture microdissection. The mRNA expression of ion channels was detected by quantitative real-time PCR. Hierarchical clustering was used to demonstrate clustering of expression of genes in sections. The mRNA expression of HCN4, Ca(v)3.1 and Ca(v)3.2 was high in the INE, atrioventricular node and sino-atrial node, and that of Ca(v)3.2 high in Purkinje fibres. Although the expression of HCN1 and Ca(v)1.3 was low in the rat heart, it was relatively higher in the INE, atrioventricular node and sino-atrial node than in right atrial and right ventricular (working) myocytes. Both HCN2 and Ca(v)1.2 were expressed at higher levels in working myocytes than in nodal tissues and in the INE. Hierarchical clustering analysis demonstrated that the expression of the HCN and calcium channels in INE was similar to that in the slow-response automatic cells and different from that in working myocytes and Purkinje fibres. The expression of HCN and calcium channels in the INE of the adult rat heart is similar to that of slow-response automatic cells and provides a substrate for automatic phase 4 depolarization in cells.
Algorithm for automatic forced spirometry quality assessment: technological developments.
Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere
2014-01-01
We hypothesized that the implementation of automatic real-time assessment of the quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of an FS program in the community. Recent studies have demonstrated that the application of the quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society), as implemented in commercially available equipment with automatic quality assessment, can be markedly improved. To this end, an algorithm for automatically assessing the quality of FS was reported. The current research describes the mathematical development of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by the ATS/ERS, was performed. The algorithm was created through a two-step iterative process: (1) an initial version using the standard FS curves recommended by the ATS; and (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterizing the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community.
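The reported specificity (95%) and sensitivity (96%) follow the usual confusion-matrix definitions. For reference, a minimal sketch (label convention assumed: 1 = acceptable manoeuvre, 0 = unacceptable):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity of a binary quality classifier."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_pred & y_true)      # acceptable, flagged acceptable
    tn = np.sum(~y_pred & ~y_true)    # unacceptable, flagged unacceptable
    fn = np.sum(~y_pred & y_true)
    fp = np.sum(y_pred & ~y_true)
    return tp / (tp + fn), tn / (tn + fp)

truth = np.array([1, 1, 1, 0, 0, 0, 1, 0], dtype=bool)
pred = np.array([1, 1, 0, 0, 0, 1, 1, 0], dtype=bool)
sens, spec = sensitivity_specificity(truth, pred)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```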
Speech systems research at Texas Instruments
NASA Technical Reports Server (NTRS)
Doddington, George R.
1977-01-01
An assessment of automatic speech processing technology is presented. Fundamental problems in the development and the deployment of automatic speech processing systems are defined and a technology forecast for speech systems is presented.
Denoising and 4D visualization of OCT images
Gargesha, Madhusudhana; Jenkins, Michael W.; Rollins, Andrew M.; Wilson, David L.
2009-01-01
We are using Optical Coherence Tomography (OCT) to image structure and function of the developing embryonic heart in avian models. Fast OCT imaging produces very large 3D (2D + time) and 4D (3D volumes + time) data sets, which greatly challenge one's ability to visualize results. Noise in OCT images poses additional challenges. We created an algorithm with a quick, data-set-specific optimization for the reduction of both shot and speckle noise and applied it to 3D visualization and image segmentation in OCT. When compared to baseline algorithms (median, Wiener, orthogonal wavelet, basic non-orthogonal wavelet), a panel of experts judged the new algorithm to give much improved volume renderings concerning both noise and 3D visualization. Specifically, the algorithm provided a better visualization of the myocardial and endocardial surfaces, and of the interaction of the embryonic heart tube with surrounding tissue. Quantitative evaluation using an image quality figure of merit also indicated superiority of the new algorithm. Noise reduction aided semi-automatic 2D image segmentation, as quantitatively evaluated using a contour distance measure with respect to an expert segmented contour. In conclusion, the noise reduction algorithm should be quite useful for visualization and quantitative measurements (e.g., heart volume, stroke volume, contraction velocity, etc.) in OCT embryo images. With its semi-automatic, data-set-specific optimization, we believe that the algorithm can be applied to OCT images from other applications. PMID:18679509
Automatic registration of ICG images using mutual information and perfusion analysis
NASA Astrophysics Data System (ADS)
Kim, Namkug; Seo, Jong-Mo; Lee, June-goo; Kim, Jong Hyo; Park, Kwangsuk; Yu, Hyeong-Gon; Yu, Young Suk; Chung, Hum
2005-04-01
Introduction: Indocyanine green fundus angiography (ICGA) of the eye is a useful method for detecting and characterizing choroidal neovascularization (CNV), the major cause of blindness over 65 years of age. To investigate the quantitative analysis of blood flow on ICGA, a systematic approach for automatic registration using mutual information, together with a quantitative analysis, was developed. Methods: Intermittent sequential images of indocyanine green angiography were acquired by Heidelberg retinal angiography, which uses a laser scanning system for image acquisition. Misalignment of each image, generated by minute eye movements of the patients, was corrected by the mutual information method, because the distribution of the contrast medium in the image changes throughout the time sequence. Several regions of interest (ROIs) were selected by a physician, and the intensities of the selected regions were plotted against the time sequence. Results: The registration of ICGA time-sequential images requires not only a translational but also a rotational transform. Signal intensities showed variation following a gamma-variate function depending on the ROI, and capillary vessels showed more variance of signal intensity than major vessels. CNV showed intermediate variance of signal intensity and a prolonged transit time. Conclusion: The resulting registered images can be used not only for quantitative analysis, but also for perfusion analysis. Various investigative approaches to CNV using this method will be helpful in the characterization of the lesion and follow-up.
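Intensity-time curves of the kind described are conventionally fitted with a gamma-variate model. The sketch below fits such a curve with SciPy on synthetic data; the parameterization is a standard one for indicator-dilution curves, not necessarily the authors' exact form:

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    """Gamma-variate model commonly used for indicator-dilution curves."""
    dt = np.clip(t - t0, 0, None)   # no signal before arrival time t0
    return A * dt ** alpha * np.exp(-dt / beta)

# Fit the model to a ROI intensity-time curve from the registered series
t = np.linspace(0, 30, 60)
true = gamma_variate(t, 100, 2.0, 2.5, 3.0)
signal = true + np.random.normal(0, 2, t.size)
popt, _ = curve_fit(gamma_variate, t, signal, p0=[80, 1.0, 2.0, 2.0], maxfev=5000)
print("fitted A, t0, alpha, beta:", popt.round(2))
```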
ERIC Educational Resources Information Center
Bey, Anis; Jermann, Patrick; Dillenbourg, Pierre
2018-01-01
Computer-graders have been in regular use in the context of MOOCs (Massive Open Online Courses). The automatic grading of programs presents an opportunity to assess and provide tailored feedback to large classes, while featuring at the same time a number of benefits like: immediate feedback, unlimited submissions, as well as low cost of feedback.…
NASA Astrophysics Data System (ADS)
Möller, Thomas; Bellin, Knut; Creutzburg, Reiner
2015-03-01
The aim of this paper is to show the recent progress in the design and prototypical development of a software suite Copra Breeder* for semi-automatic generation of test methodologies and security checklists for IT vulnerability assessment in small and medium-sized enterprises.
ERIC Educational Resources Information Center
Gutl, Christian; Lankmayr, Klaus; Weinhofer, Joachim; Hofler, Margit
2011-01-01
Research in automated creation of test items for assessment purposes became increasingly important during the recent years. Due to automatic question creation it is possible to support personalized and self-directed learning activities by preparing appropriate and individualized test items quite easily with relatively little effort or even fully…
ERIC Educational Resources Information Center
Rellecke, Julian; Palazova, Marina; Sommer, Werner; Schacht, Annekathrin
2011-01-01
The degree to which emotional aspects of stimuli are processed automatically is controversial. Here, we assessed the automatic elicitation of emotion-related brain potentials (ERPs) to positive, negative, and neutral words and facial expressions in an easy and superficial face-word discrimination task, for which the emotional valence was…
Fine grained recognition of masonry walls for built heritage assessment
NASA Astrophysics Data System (ADS)
Oses, N.; Dornaika, F.; Moujahid, A.
2015-01-01
This paper presents the ground work carried out to achieve automatic fine grained recognition of stone masonry. This is a necessary first step in the development of the analysis tool. The built heritage that will be assessed consists of stone masonry constructions and many of the features analysed can be characterized according to the geometry and arrangement of the stones. Much of the assessment is carried out through visual inspection. Thus, we apply image processing on digital images of the elements under inspection. The main contribution of the paper is the performance evaluation of the automatic categorization of masonry walls from a set of extracted straight line segments. The element chosen to perform this evaluation is the stone arrangement of masonry walls. The validity of the proposed framework is assessed on real images of masonry walls using machine learning paradigms. These include classifiers as well as automatic feature selection.
On the Relationship Between Automatic Attitudes and Self-Reported Sexual Assault in Men
Widman, Laura; Olson, Michael
2013-01-01
Research and theory suggest rape supportive attitudes are important predictors of sexual assault; yet, to date, rape supportive attitudes have been assessed exclusively through self-report measures that are methodologically and theoretically limited. To address these limitations, the objectives of the current project were to: (1) develop a novel implicit rape attitude assessment that captures automatic attitudes about rape and does not rely on self-reports, and (2) examine the association between automatic rape attitudes and sexual assault perpetration. We predicted that automatic rape attitudes would be a significant unique predictor of sexual assault even when self-reported rape attitudes (i.e., rape myth acceptance and hostility toward women) were controlled. We tested the generalizability of this prediction in two independent samples: a sample of undergraduate college men (n = 75, M age = 19.3 years) and a sample of men from the community (n = 50, M age = 35.9 years). We found the novel implicit rape attitude assessment was significantly associated with the frequency of sexual assault perpetration in both samples and contributed unique variance in explaining sexual assault beyond rape myth acceptance and hostility toward women. We discuss the ways in which future research on automatic rape attitudes may significantly advance measurement and theory aimed at understanding and preventing sexual assault. PMID:22618119
Using a MaxEnt Classifier for the Automatic Content Scoring of Free-Text Responses
NASA Astrophysics Data System (ADS)
Sukkarieh, Jana Z.
2011-03-01
Criticisms against multiple-choice item assessments in the USA have prompted researchers and organizations to move towards constructed-response (free-text) items. Constructed-response (CR) items pose many challenges to the education community—one of which is that they are expensive to score by humans. At the same time, there has been widespread movement towards computer-based assessment, and hence assessment organizations are competing to develop automatic content scoring engines for such item types—which we view as a textual entailment task. This paper describes how MaxEnt Modeling is used to help solve the task. MaxEnt has been used in many natural language tasks, but this is the first application of the MaxEnt approach to textual entailment and automatic content scoring.
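A maximum entropy classifier is equivalent to multinomial logistic regression, so the flavor of such a content-scoring engine can be sketched with scikit-learn. The toy responses, scores, and bag-of-words features below are invented for illustration and are far simpler than the entailment features the paper would use.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical student responses paired with content scores
responses = ["the cell membrane controls what enters",
             "cells have walls",
             "membranes regulate entry and exit of molecules"]
scores = [2, 0, 2]

# LogisticRegression with a multinomial loss is a maximum entropy model
maxent = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                       LogisticRegression(max_iter=1000))
maxent.fit(responses, scores)
print(maxent.predict(["the membrane controls molecule entry"]))
```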
Tsugawa, Hiroshi; Arita, Masanori; Kanazawa, Mitsuhiro; Ogiwara, Atsushi; Bamba, Takeshi; Fukusaki, Eiichiro
2013-05-21
We developed a new software program, MRMPROBS, for widely targeted metabolomics using the large-scale multiple reaction monitoring (MRM) mode. This strategy has become increasingly popular for the simultaneous analysis of up to several hundred metabolites with high sensitivity, selectivity, and quantitative capability. However, the traditional method of assessing measured metabolomics data without probabilistic criteria is not only time-consuming but often subjective and makeshift. Our program overcomes these problems by detecting and identifying metabolites automatically, by separating isomeric metabolites, and by removing background noise using a probabilistic score defined as the odds ratio from an optimized multivariate logistic regression model. Our software also provides a user-friendly graphical interface to curate and organize data matrices and to apply principal component analyses and statistical tests. As a demonstration, we conducted a widely targeted metabolome analysis (152 metabolites) of propagating Saccharomyces cerevisiae measured at 15 time points by gas and liquid chromatography coupled to triple quadrupole mass spectrometry. MRMPROBS is a useful and practical tool for the assessment of large-scale MRM data, available for any instrument or experimental condition.
Ship collision risk assessment for the Singapore Strait.
Qu, Xiaobo; Meng, Qiang; Suyi, Li
2011-11-01
The Singapore Strait is considered the bottleneck and chokepoint of the shipping routes connecting the Indian and Pacific Oceans. Ship collision risk assessment is therefore of significant importance for ships passing through this narrow, shallow, and busy waterway. In this paper, three ship collision risk indices are proposed to quantitatively assess ship collision risks in the Strait: an index of speed dispersion, the degree of acceleration and deceleration, and the number of fuzzy ship domain overlaps. These three risk indices for the Singapore Strait are estimated using the real-time ship locations and sailing speeds provided by Lloyd's MIU automatic identification system (AIS). Based on the estimation of these three risk indices, it can be concluded that Legs 4W, 5W, 11E, and 12E are the most risky legs in the Strait; ship collision risk reduction solutions should therefore be prioritized for implementation in these four legs. This study also finds that around 25% of the vessels sail at a speed in excess of the speed limit, which results in a higher potential for ship collision. Analysis indicates that the safety level would be significantly improved if all vessels followed the passage guidelines. Copyright © 2011 Elsevier Ltd. All rights reserved.
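The abstract does not give the formula behind the index of speed dispersion; a plausible per-leg statistic (an assumption for illustration, not the authors' exact definition) is the coefficient of variation of AIS sailing speeds within each leg:

```python
import pandas as pd

# Hypothetical AIS records: one row per ship observation
ais = pd.DataFrame({
    "leg":   ["4W", "4W", "4W", "5W", "5W"],
    "speed": [12.1, 18.5, 7.3, 14.0, 14.6],   # knots
})

# Coefficient of variation of speed per leg as a dispersion index:
# more heterogeneous speeds imply more overtaking and encounter situations
dispersion = ais.groupby("leg")["speed"].agg(lambda s: s.std() / s.mean())
print(dispersion.sort_values(ascending=False))
```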
NASA Astrophysics Data System (ADS)
Sugiyama, S.; Hong, Y.-J.; Kasaragod, D.; Makita, S.; Miura, M.; Ikuno, Y.; Yasuno, Y.
2016-03-01
Quantitative evaluation of the optical properties of the choroid and sclera is performed by multifunctional optical coherence tomography. Five normal eyes, five glaucoma eyes, and one choroidal atrophy eye were examined. Among normal eyes, refractive error was found to be correlated with choroidal birefringence, polarization uniformity, and flow, in addition to scleral birefringence. Significant differences between the normal and glaucoma eyes were observed in choroidal polarization uniformity, flow, and scleral birefringence. An automatic segmentation algorithm for the retinal pigment epithelium and chorioscleral interface based on multifunctional signals is also presented.
Automated Assessment of Child Vocalization Development Using LENA.
Richards, Jeffrey A; Xu, Dongxin; Gilkerson, Jill; Yapanel, Umit; Gray, Sharmistha; Paul, Terrance
2017-07-12
To produce a novel, efficient measure of children's expressive vocal development on the basis of automatic vocalization assessment (AVA), child vocalizations were automatically identified and extracted from audio recordings using Language Environment Analysis (LENA) System technology. Assessment was based on full-day audio recordings collected in a child's unrestricted, natural language environment. AVA estimates were derived using automatic speech recognition modeling techniques to categorize and quantify the sounds in child vocalizations (e.g., protophones and phonemes). These were expressed as phone and biphone frequencies, reduced to principal components, and inputted to age-based multiple linear regression models to predict independently collected criterion-expressive language scores. From these models, we generated vocal development AVA estimates as age-standardized scores and development age estimates. AVA estimates demonstrated strong statistical reliability and validity when compared with standard criterion expressive language assessments. Automated analysis of child vocalizations extracted from full-day recordings in natural settings offers a novel and efficient means to assess children's expressive vocal development. More research remains to identify specific mechanisms of operation.
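The modeling chain described — phone/biphone frequencies reduced to principal components and fed to age-based linear regression against criterion scores — can be sketched as follows. The feature dimensions and synthetic data are invented for illustration; this is not the LENA implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.poisson(3.0, size=(200, 500)).astype(float)  # phone/biphone counts per recording
y = rng.normal(100, 15, size=200)                    # criterion expressive language scores

# Reduce high-dimensional phone statistics, then regress onto criterion scores
model = make_pipeline(PCA(n_components=20), LinearRegression())
model.fit(X, y)
ava_estimate = model.predict(X[:1])   # vocal development estimate for one child
```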
A Risk Assessment System with Automatic Extraction of Event Types
NASA Astrophysics Data System (ADS)
Capet, Philippe; Delavallade, Thomas; Nakamura, Takuya; Sandor, Agnes; Tarsitano, Cedric; Voyatzi, Stavroula
In this article we describe the joint effort of experts in linguistics, information extraction and risk assessment to integrate EventSpotter, an automatic event extraction engine, into ADAC, an automated early warning system. By detecting weak signals of emerging risks as early as possible, ADAC provides a dynamic synthetic picture of situations involving risk. The ADAC system calculates risk on the basis of fuzzy logic rules operated on a template graph whose leaves are event types. EventSpotter is based on a general-purpose natural language dependency parser, XIP, enhanced with domain-specific lexical resources (Lexicon-Grammar). Its role is to automatically feed the leaves with input data.
Fedorov, Andriy; Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R
2016-01-01
Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM®) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard.
Momose, Mitsuhiro; Takaki, Akihiro; Matsushita, Tsuyoshi; Yanagisawa, Shin; Yano, Kesato; Miyasaka, Tadashi; Ogura, Yuka; Kadoya, Masumi
2011-01-01
AQCEL enables automatic reconstruction of single-photon emission computed tomograms (SPECT) without image degradation and quantitative analysis of cerebral blood flow (CBF) after the input of simple parameters. We ascertained the usefulness and quality of images obtained by the application software AQCEL in clinical practice. Twelve patients underwent brain perfusion SPECT using technetium-99m ethyl cysteinate dimer at rest and after acetazolamide (ACZ) loading. Images reconstructed using AQCEL were compared with those reconstructed using the conventional filtered back projection (FBP) method for qualitative estimation. Two experienced nuclear medicine physicians rated image quality using the following visual scores: 0, same; 1, slightly superior; 2, superior. For quantitative estimation, the mean CBF values of the normal hemisphere of the 12 patients using ACZ calculated by the AQCEL method were compared with those calculated by the conventional method. The CBF values of the 24 regions of the 3-dimensional stereotaxic region of interest template (3DSRT) calculated by the AQCEL method at rest and after ACZ loading were compared to those calculated by the conventional method. No significant qualitative difference was observed between the AQCEL and conventional FBP methods in the rest study. The average score by the AQCEL method was 0.25 ± 0.45 and that by the conventional method was 0.17 ± 0.39 (P = 0.34). There was a significant qualitative difference between the AQCEL and conventional methods in the ACZ loading study. The average score for AQCEL was 0.83 ± 0.58 and that for the conventional method was 0.08 ± 0.29 (P = 0.003). In the quantitative estimation using ACZ, the mean CBF values of the 12 patients calculated by the AQCEL method were 3-8% higher than those calculated by the conventional method. The square of the correlation coefficient between these methods was 0.995. In the comparison of the 24 3DSRT regions of the 12 patients, the squares of the correlation coefficients between the AQCEL and conventional methods were 0.973 and 0.986 for the normal and affected sides at rest, respectively, and 0.977 and 0.984 for the normal and affected sides after ACZ loading, respectively. The quality of images reconstructed using the application software AQCEL was superior to that obtained using the conventional method after ACZ loading, and high correlations were shown in quantity at rest and after ACZ loading. This software can be applied in clinical practice and is a useful tool for improving reproducibility and throughput.
Automatic tissue image segmentation based on image processing and deep learning
NASA Astrophysics Data System (ADS)
Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting
2018-02-01
Image segmentation plays an important role in multimodality imaging, especially in the fusion of structural images offered by CT and MRI with functional images collected by optical or other novel imaging technologies. Image segmentation also provides a detailed structural description for quantitative visualization of treatment light distribution in the human body when incorporated with a 3D light transport simulation method. Here we used image enhancement, operators, and morphometry methods to extract accurate contours of different tissues, such as skull, cerebrospinal fluid (CSF), grey matter (GM), and white matter (WM), on 5 fMRI head image datasets. We then utilized a convolutional neural network to realize automatic segmentation of images in a deep learning way, and we also introduced parallel computing. Such approaches greatly reduced the processing time compared to manual and semi-automatic segmentation, which is of great importance in improving speed and accuracy as more and more samples are learned. Our results can be used as criteria when diagnosing diseases such as cerebral atrophy, which is caused by pathological changes in gray matter or white matter. We demonstrated the great potential of such combined image processing and deep learning automatic tissue image segmentation in personalized medicine, especially in monitoring and treatment.
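As a toy illustration of the deep-learning half of such a pipeline (not the authors' network; architecture and class count are assumptions), a minimal fully convolutional PyTorch model that assigns one of four tissue classes to every pixel might look like:

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    # Minimal fully convolutional network: 4 tissue classes (skull, CSF, GM, WM)
    def __init__(self, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1),   # per-pixel class scores
        )

    def forward(self, x):
        return self.net(x)

model = TinySegNet()
x = torch.randn(1, 1, 128, 128)   # one grayscale head-image slice
logits = model(x)                 # (1, 4, 128, 128) class scores
labels = logits.argmax(dim=1)     # per-pixel tissue label map
```

A production model would use an encoder-decoder (U-Net-style) architecture and be trained on labeled slices; the sketch only shows the per-pixel classification idea.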
Analysis and Comparison of Some Automatic Vehicle Monitoring Systems
DOT National Transportation Integrated Search
1973-07-01
In 1970 UMTA solicited proposals and selected four companies to develop systems to demonstrate the feasibility of different automatic vehicle monitoring techniques. The demonstrations culminated in experiments in Philadelphia to assess the performanc...
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.; Hines, Glenn D.
2004-01-01
Noise is the primary visibility limit in the process of non-linear image enhancement, and is no longer a statistically stable additive noise in the post-enhancement image. Therefore novel approaches are needed to both assess and reduce spatially variable noise at this stage in overall image processing. Here we will examine the use of edge pattern analysis both for automatic assessment of spatially variable noise and as a foundation for new noise reduction methods.
Cavailloles, F; Bazin, J P; Capderou, A; Valette, H; Herbert, J L; Di Paola, R
1987-05-01
A method for automatic processing of cardiac first-pass radionuclide studies is presented. This technique, factor analysis of dynamic structures (FADS), provides an automatic separation of anatomical structures according to their different temporal behaviour, even if they are superimposed. FADS has been applied to 76 studies. A description of the factor patterns obtained in various pathological categories is presented. FADS provides easy diagnosis of shunts and tricuspid insufficiency. Quantitative information derived from the factors (cardiac output and mean transit time) was compared with that obtained by the region of interest method. Using FADS, a higher correlation with cardiac catheterization was found for cardiac output calculation. Thus, compared to the ROI method, FADS presents obvious advantages: a good separation of overlapping cardiac chambers is obtained, and this operator-independent method provides more objective and reproducible results. A number of parameters of cardio-pulmonary function can be assessed by first-pass radionuclide angiocardiography (RNA) [1,2]. Usually, they are calculated using time-activity curves (TAC) from regions of interest (ROI) drawn on the cardiac chambers and the lungs. This method has two main drawbacks: (1) the lack of inter- and intra-observer reproducibility; (2) the problem of crosstalk, which affects the evaluation of cardio-pulmonary performance. The crosstalk in planar imaging is due to anatomical superimposition of the cardiac chambers and lungs. The activity measured in any ROI is the sum of the activity in several organs, and 'decontamination' of the TAC cannot easily be performed using the ROI method [3]. Factor analysis of dynamic structures (FADS) [4,5] can solve the two problems mentioned above. It provides an automatic separation of anatomical structures according to their different temporal behaviour, even if they are superimposed. The resulting factors are estimates of the time evolution of the activity in each structure (underlying physiological components), and the associated factor images are estimates of the spatial distribution of each factor. The aim of this study was to assess the reliability of FADS in first-pass RNA and to compare the results with those obtained by the ROI method, which is generally considered the routine procedure.
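FADS decomposes a dynamic study into nonnegative temporal factors and associated factor images. A rough modern analogue (a stand-in for illustration, not the FADS algorithm itself) is nonnegative matrix factorization of the pixels-by-frames matrix:

```python
import numpy as np
from sklearn.decomposition import NMF

# Dynamic study flattened to (n_pixels, n_frames); synthetic data for illustration
rng = np.random.default_rng(1)
dyn = rng.random((64 * 64, 40))

nmf = NMF(n_components=3, init="nndsvda", max_iter=500)
factor_images = nmf.fit_transform(dyn)   # (n_pixels, 3): spatial distribution of each factor
factors = nmf.components_                # (3, n_frames): time-activity of each structure
```

Each column of `factor_images`, reshaped to the image grid, plays the role of a factor image, and each row of `factors` the role of the corresponding underlying time-activity curve.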
Automatic stereotyping against people with schizophrenia, schizoaffective and affective disorders
Rüsch, Nicolas; Corrigan, Patrick W.; Todd, Andrew R.; Bodenhausen, Galen V.
2010-01-01
Similar to members of the public, people with mental illness may exhibit general negative automatic prejudice against their own group. However, it is unclear whether more specific negative stereotypes are automatically activated among diagnosed individuals and how such automatic stereotyping may be related to self-reported attitudes and emotional reactions. We therefore studied automatically activated reactions toward mental illness among 85 people with schizophrenia, schizoaffective or affective disorders as well as among 50 members of the general public, using a Lexical Decision Task to measure automatic stereotyping. Deliberately endorsed attitudes and emotional reactions were assessed by self-report. Independent of diagnosis, people with mental illness showed less negative automatic stereotyping than did members of the public. Among members of the public, stronger automatic stereotyping was associated with more self-reported shame about a potential mental illness and more anger toward stigmatized individuals. Reduced automatic stereotyping in the diagnosed group suggests that people with mental illness might not entirely internalize societal stigma. Among members of the public, automatic stereotyping predicted negative emotional reactions to people with mental illness. Initiatives to reduce the impact of public stigma and internalized stigma should take automatic stereotyping and related emotional aspects of stigma into account. PMID:20843560
Computerized Interpretation of Dynamic Breast MRI
2006-05-01
This report covers motion correction, tumor segmentation, extraction of computerized features that help distinguish between benign and malignant lesions, and classification. An automatic method was developed for assessing tumor extent in 3D; the primary feature used for 3D tumor segmentation is the postcontrast enhancement vector.
Automatic Residential/Commercial Classification of Parcels with Solar Panel Detections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morton, April M; Omitaomu, Olufemi A; Kotikot, Susan
This software implements a computational method to automatically detect solar panels on rooftops to aid policy and financial assessment of solar distributed generation. The code automatically classifies parcels containing solar panels in the U.S. as residential or commercial. The user specifies an input dataset containing parcels and detected solar panels, and the code then uses information about the parcels and solar panels to classify the rooftops as residential or commercial using machine learning techniques. The zip file containing the code includes sample input and output datasets for the Boston and DC areas.
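The record does not name the learning algorithm or features; a plausible sketch (feature names and classifier choice are assumptions, not the OSTI code) is a random forest over simple parcel attributes:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical parcel features: lot area (m^2), roof area (m^2), detected panel count
cols = ["lot_area", "roof_area", "n_panels"]
parcels = pd.DataFrame({
    "lot_area":  [450, 12000, 600, 30000],
    "roof_area": [140, 4200, 160, 9800],
    "n_panels":  [8, 220, 10, 510],
    "label":     ["residential", "commercial", "residential", "commercial"],
})

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(parcels[cols], parcels["label"])
print(clf.predict(pd.DataFrame([[500, 150, 12]], columns=cols)))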
ERIC Educational Resources Information Center
Lee, Cynthia; Wong, Kelvin C. K.; Cheung, William K.; Lee, Fion S. L.
2009-01-01
The paper first describes a web-based essay critiquing system developed by the authors using latent semantic analysis (LSA), an automatic text analysis technique, to provide students with immediate feedback on content and organisation for revision whenever there is an internet connection. It reports on its effectiveness in enhancing adult EFL…
ERIC Educational Resources Information Center
Nobile, Maria; Perego, Paolo; Piccinini, Luigi; Mani, Elisa; Rossi, Agnese; Bellina, Monica; Molteni, Massimo
2011-01-01
In order to increase the knowledge of locomotor disturbances in children with autism, and of the mechanism underlying them, the objective of this exploratory study was to reliably and quantitatively evaluate linear gait parameters (spatio-temporal and kinematic parameters), upper body kinematic parameters, walk orientation and smoothness using an…
Development of method for quantifying essential tremor using a small optical device.
Chen, Kai-Hsiang; Lin, Po-Chieh; Chen, Yu-Jung; Yang, Bing-Shiang; Lin, Chin-Hsien
2016-06-15
Clinical assessment scales are the most common means used by physicians to assess tremor severity. Some scientific tools that may be able to replace these scales for objective severity assessment, such as accelerometers, digital tablets, electromyography (EMG) measurement devices, and motion capture cameras, are currently available. However, most of these tools have relatively complex operational modes or capture only part of the clinical information, and using them is sometimes time consuming. Currently, there is no tool available for automatically quantifying tremor severity in clinical environments. We aimed to develop a rapid, objective, and quantitative system for measuring the severity of finger tremor using a small portable optical device (Leap Motion). A single test took 15 s to conduct, and three algorithms were proposed to quantify the severity of finger tremor. The system was tested with four patients diagnosed with essential tremor. The proposed algorithms were able to quantify different characteristics of tremor in clinical environments and could serve as references for future clinical assessments. A portable, easy-to-use, small-sized, and noncontact device (Leap Motion) was used to clinically detect and record finger movement, and three algorithms were proposed to describe tremor amplitudes. Copyright © 2016 Elsevier B.V. All rights reserved.
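The abstract does not specify the three algorithms, but a typical way to quantify a 15 s fingertip trace is spectral: detrend the position signal, take the FFT, and report peak frequency and power in the 4-12 Hz tremor band. A sketch under those assumptions (the sampling rate is also assumed):

```python
import numpy as np

fs = 60.0                                   # assumed sampling rate, Hz
t = np.arange(0, 15, 1 / fs)                # 15 s test
x = 0.003 * np.sin(2 * np.pi * 6 * t)       # synthetic 6 Hz, 3 mm tremor
x += 0.0005 * np.random.default_rng(0).normal(size=t.size)

x = x - x.mean()                            # remove offset / detrend
spec = np.abs(np.fft.rfft(x)) ** 2          # power spectrum
freqs = np.fft.rfftfreq(x.size, 1 / fs)

band = (freqs >= 4) & (freqs <= 12)         # classic tremor band
tremor_power = spec[band].sum() / spec.sum()
peak_freq = freqs[band][np.argmax(spec[band])]
print(f"peak {peak_freq:.1f} Hz, band power fraction {tremor_power:.2f}")
```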
White matter lesion extension to automatic brain tissue segmentation on MRI.
de Boer, Renske; Vrooman, Henri A; van der Lijn, Fedde; Vernooij, Meike W; Ikram, M Arfan; van der Lugt, Aad; Breteler, Monique M B; Niessen, Wiro J
2009-05-01
A fully automated brain tissue segmentation method is optimized and extended with white matter lesion segmentation. Cerebrospinal fluid (CSF), gray matter (GM) and white matter (WM) are segmented by an atlas-based k-nearest neighbor classifier on multi-modal magnetic resonance imaging data. This classifier is trained by registering brain atlases to the subject. The resulting GM segmentation is used to automatically find a white matter lesion (WML) threshold in a fluid-attenuated inversion recovery scan. False positive lesions are removed by ensuring that the lesions are within the white matter. The method was visually validated on a set of 209 subjects. No segmentation errors were found in 98% of the brain tissue segmentations and 97% of the WML segmentations. A quantitative evaluation using manual segmentations was performed on a subset of 6 subjects for CSF, GM and WM segmentation and an additional 14 for the WML segmentations. The results indicated that the automatic segmentation accuracy is close to the interobserver variability of manual segmentations.
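The abstract's key trick — deriving a WML threshold from the GM segmentation on the FLAIR scan and keeping only lesions inside white matter — can be sketched as follows. The mean + 2.5 SD rule is an assumption for illustration, not the paper's optimized threshold.

```python
import numpy as np

def wml_mask(flair, gm_mask, wm_mask, k=2.5):
    """Threshold FLAIR at GM mean + k*SD; keep only voxels inside white matter.

    flair: 3D intensity array; gm_mask/wm_mask: boolean arrays from the
    atlas-based kNN tissue classifier described in the abstract.
    """
    gm_vals = flair[gm_mask]
    threshold = gm_vals.mean() + k * gm_vals.std()
    lesions = flair > threshold
    return lesions & wm_mask     # remove false positives outside white matter
```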
NASA Astrophysics Data System (ADS)
Bartholomeus, H.; Kooistra, L.
2012-04-01
For quantitative estimation of soil properties by means of remote sensing, hyperspectral data are often used. But these data are scarce and expensive, which prohibits wider implementation of the developed techniques in agricultural management. For precision agriculture, observations at a high spatial resolution are required. Colour aerial photographs at this scale are widely available and can be acquired at no or very low cost. Therefore, we investigated whether publicly available aerial photographs can be used to a) automatically delineate management zones and b) estimate levels of organic carbon spatially. We selected three study areas within the Netherlands that cover a large variance in soil type (peat, sand, and clay). For the fields of interest, RGB aerial photographs with a spatial resolution of 50 cm were extracted from a publicly available data provider. Further pre-processing consists of geo-referencing only. Since the images originate from different sources and are potentially acquired under unknown illumination conditions, the exact radiometric properties of the data are unknown. Therefore, we used spectral indices to emphasize the differences in reflectance and to normalize for differences in radiometry. To delineate management zones, we used image segmentation techniques with the derived indices as input. Comparison with the management zone maps used by the farmers shows good correspondence. Regression analysis between a number of soil properties and the derived indices shows that organic carbon is the major explanatory variable for differences in index values within the fields. However, the relations do not hold for large regions, indicating that local models will have to be used, a problem that is also still relevant for hyperspectral remote sensing data. With this research, we show that low-cost aerial photographs can be a valuable tool for quantitative analysis of organic carbon and automatic delineation of management zones. Since a lot of data are publicly available, this offers great possibilities for implementing remote sensing techniques in agricultural management.
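Because the radiometry of archival photographs is unknown, the indices must be brightness-invariant. One common RGB index (used here as an assumed example; the abstract does not name its indices) is the excess green index, ExG = 2g − r − b, computed on chromaticity-normalized channels:

```python
import numpy as np

def excess_green(rgb):
    """ExG = 2g - r - b on chromaticity coordinates; robust to brightness scaling."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1, keepdims=True) + 1e-9   # avoid division by zero
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    return 2 * g - r - b

# image: (rows, cols, 3) array from a geo-referenced aerial photograph
# exg = excess_green(image)  # input to segmentation / regression on organic carbon
```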
Intrathoracic airway measurement: ex-vivo validation
NASA Astrophysics Data System (ADS)
Reinhardt, Joseph M.; Raab, Stephen A.; D'Souza, Neil D.; Hoffman, Eric A.
1997-05-01
High-resolution x-ray CT (HRCT) provides detailed images of the lungs and bronchial tree. HRCT-based imaging and quantitation of peripheral bronchial airway geometry provides a valuable tool for assessing regional airway physiology. Such measurements have been used to address physiological questions related to the mechanics of airway collapse in sleep apnea, the measurement of airway response to broncho-constriction agents, and the evaluation and tracking of disease progression affecting the airways, such as asthma and cystic fibrosis. Significant attention has been paid to the measurement of extra- and intra-thoracic airways in 2D sections from volumetric x-ray CT. A variety of manual and semi-automatic techniques have been proposed for airway geometry measurement, including the use of standardized display window and level settings for caliper measurements, methods based on manual or semi-automatic border tracing, and more objective, quantitative approaches such as the 'half-max' criterion. A recently proposed measurement technique uses a model-based deconvolution to estimate the location of the inner and outer airway walls. Validation using a plexiglass phantom indicates that the model-based method is more accurate than the half-max approach for thin-walled structures. In vivo validation of these airway measurement techniques is difficult because of the problem of identifying a reliable measurement 'gold standard.' In this paper we report on ex vivo validation of the half-max and model-based methods using an excised pig lung. The lung is sliced into thin sections of tissue and scanned using an electron beam CT scanner. Airways of interest are measured from the CT images and also measured using a microscope and micrometer to obtain a measurement gold standard. The results show no significant difference between the model-based measurements and the gold standard, while the half-max estimates exhibited a measurement bias and were significantly different from the gold standard.
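The half-max criterion locates the airway wall where intensity crosses halfway between the profile minimum and maximum. A one-dimensional sketch on a ray cast outward from the airway center (simplified relative to the full 2D procedure):

```python
import numpy as np

def half_max_crossing(profile):
    """Return the interpolated index where a 1D ray profile first crosses half-max.

    profile: intensities sampled from the lumen center outward across the wall.
    """
    lo, hi = profile.min(), profile.max()
    half = (lo + hi) / 2.0
    above = np.nonzero(profile >= half)[0]
    i = above[0]                    # first sample at/above half-max
    if i == 0:
        return 0.0
    # linear interpolation between samples i-1 and i for sub-voxel accuracy
    frac = (half - profile[i - 1]) / (profile[i] - profile[i - 1])
    return (i - 1) + frac

profile = np.array([10, 12, 15, 40, 90, 120, 118, 80, 30.0])  # synthetic ray
print(half_max_crossing(profile))   # sub-voxel inner-wall location estimate
```

The reported bias for thin walls arises because partial-volume blurring shifts this crossing, which is what the model-based deconvolution corrects.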
NASA Astrophysics Data System (ADS)
Fan, Shu-Kai S.; Tsai, Du-Ming; Chuang, Wei-Che
2017-04-01
Solar power has become an attractive alternative source of energy. The multi-crystalline solar cell has been widely accepted in the market because it has a relatively low manufacturing cost. Multi-crystalline solar wafers with larger grain sizes and fewer grain boundaries are of higher quality and convert energy more efficiently than wafers with smaller grains. In this article, a new image processing method is proposed for assessing wafer quality. An adaptive segmentation algorithm based on region growing is developed to separate the closed regions of individual grains. Using the proposed method, the shape and size of each grain in the wafer image can be precisely evaluated. Two measures of average grain size are taken from the literature and modified to estimate the average grain size. The resulting average grain size estimate dictates the quality of the crystalline solar wafers and can be considered a viable quantitative indicator of conversion efficiency.
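A minimal version of region growing by intensity similarity (a generic sketch, not the paper's adaptive algorithm) can be written with a simple queue:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Grow a region from seed, admitting 4-neighbors within tol of the seed value."""
    h, w = img.shape
    seen = np.zeros((h, w), dtype=bool)
    ref = float(img[seed])
    q, region = deque([seed]), []
    seen[seed] = True
    while q:
        y, x = q.popleft()
        region.append((y, x))
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not seen[ny, nx] \
                    and abs(float(img[ny, nx]) - ref) <= tol:
                seen[ny, nx] = True
                q.append((ny, nx))
    return region   # pixel coordinates of one grain's closed region
```

Running this from seeds scattered across the wafer image yields one pixel set per grain, from which grain areas and shape statistics can be computed.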
NASA Astrophysics Data System (ADS)
Smagin, A. V.; Sadovnikova, N. B.; Smagina, M. V.
2014-06-01
The biodestruction of strongly swelling polymer hydrogels (water adsorbing soil conditioners of the new generation) has been studied at the quantitative level using original mathematical models. In laboratory experiments, a relationship between the hydrogel degradation rate and the temperature has been obtained, and the effect of the biodestruction on the water retention curve of soil compositions with hydrogels (used as an index of their water retention capacity) has been assessed. From the automatic monitoring data of the temperature regime of soils, the potential biodestruction of hydrogels has been predicted for different climatic conditions. The loss of hydrogels during three months of the vegetation period because of destruction can exceed 30% of their initial content in irrigated agriculture under arid climatic conditions and more than 10% under humid climatic conditions. Thus, the biodestruction of hydrogels is one of the most important factors decreasing their efficiency under actual soil conditions.
Raina, Abhay; Hennessy, Ricky; Rains, Michael; Allred, James; Hirshburg, Jason M; Diven, Dayna; Markey, Mia K.
2016-01-01
Background Traditional metrics for evaluating the severity of psoriasis are subjective, which complicates efforts to measure effective treatments in clinical trials. Methods We collected images of psoriasis plaques and calibrated the coloration of the images according to an included color card. Features were extracted from the images and used to train a linear discriminant analysis classifier with cross-validation to automatically classify the degree of erythema. The results were tested against numerical scores obtained by a panel of dermatologists using a standard rating system. Results Quantitative measures of erythema based on the digital color images showed good agreement with subjective assessment of erythema severity (κ = 0.4203). The color calibration process improved the agreement from κ = 0.2364 to κ = 0.4203. Conclusions We propose a method for the objective measurement of the psoriasis severity parameter of erythema and show that the calibration process improved the results. PMID:26517973
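The classification stage described — color features feeding a linear discriminant analysis classifier with cross-validation — might look like the following sketch. Reducing feature extraction to mean calibrated RGB values per plaque is an assumption for illustration, not the authors' feature set.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((60, 3))             # mean calibrated R, G, B per plaque image
y = rng.integers(0, 4, size=60)     # erythema grade 0-3 from a dermatologist panel

lda = LinearDiscriminantAnalysis()
print(cross_val_score(lda, X, y, cv=5).mean())   # cross-validated accuracy
```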
3D Reconstruction of the Retinal Arterial Tree Using Subject-Specific Fundus Images
NASA Astrophysics Data System (ADS)
Liu, D.; Wood, N. B.; Xu, X. Y.; Witt, N.; Hughes, A. D.; Thom, S. A. McG.
Systemic diseases, such as hypertension and diabetes, are associated with changes in the retinal microvasculature. Although a number of studies have been performed on the quantitative assessment of the geometrical patterns of the retinal vasculature, previous work has been confined to 2 dimensional (2D) analyses. In this paper, we present an approach to obtain a 3D reconstruction of the retinal arteries from a pair of 2D retinal images acquired in vivo. A simple essential matrix based self-calibration approach was employed for the "fundus camera-eye" system. Vessel segmentation was performed using a semi-automatic approach and correspondence between points from different images was calculated. The results of 3D reconstruction show the centreline of retinal vessels and their 3D curvature clearly. Three-dimensional reconstruction of the retinal vessels is feasible and may be useful in future studies of the retinal vasculature in disease.
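With OpenCV, the essential-matrix step and subsequent triangulation of corresponding vessel points can be sketched as below; the camera intrinsic matrix K and the matched point arrays are assumed given (in the paper, K comes from the self-calibration of the "fundus camera-eye" system).

```python
import cv2
import numpy as np

def reconstruct_3d(pts1, pts2, K):
    """Two-view reconstruction: essential matrix, relative pose, triangulation.

    pts1, pts2: (N, 2) float arrays of corresponding vessel centreline points.
    K: 3x3 camera intrinsic matrix.
    """
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at origin
    P2 = K @ np.hstack([R, t])                          # second camera pose
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T   # (N, 3) points on the retinal vessels
```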
On the linear relation between the mean and the standard deviation of a response time distribution.
Wagenmakers, Eric-Jan; Brown, Scott
2007-07-01
Although it is generally accepted that the spread of a response time (RT) distribution increases with the mean, the precise nature of this relation remains relatively unexplored. The authors show that in several descriptive RT distributions, the standard deviation increases linearly with the mean. Results from a wide range of tasks from different experimental paradigms support a linear relation between RT mean and RT standard deviation. Both R. Ratcliff's (1978) diffusion model and G. D. Logan's (1988) instance theory of automatization provide explanations for this linear relation. The authors identify and discuss 3 specific boundary conditions for the linear law to hold. The law constrains RT models and supports the use of the coefficient of variation to (a) compare variability while controlling for differences in baseline speed of processing and (b) assess whether changes in performance with practice are due to quantitative speedup or qualitative reorganization. Copyright 2007 APA.
Trans-dimensional MCMC methods for fully automatic motion analysis in tagged MRI.
Smal, Ihor; Carranza-Herrezuelo, Noemí; Klein, Stefan; Niessen, Wiro; Meijering, Erik
2011-01-01
Tagged magnetic resonance imaging (tMRI) is a well-known noninvasive method allowing quantitative analysis of regional heart dynamics. Its clinical use has so far been limited, in part due to the lack of robustness and accuracy of existing tag tracking algorithms in dealing with low (and intrinsically time-varying) image quality. In this paper, we propose a novel probabilistic method for tag tracking, implemented by means of Bayesian particle filtering and a trans-dimensional Markov chain Monte Carlo (MCMC) approach, which efficiently combines information about the imaging process and tag appearance with prior knowledge about the heart dynamics obtained by means of non-rigid image registration. Experiments using synthetic image data (with ground truth) and real data (with expert manual annotation) from preclinical (small animal) and clinical (human) studies confirm that the proposed method yields higher consistency, accuracy, and intrinsic tag reliability assessment in comparison with other frequently used tag tracking methods.
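Bayesian particle filtering of a tag point reduces, in its simplest bootstrap form, to a predict-weight-resample loop. The generic 1D sketch below conveys the idea only; the paper's method adds trans-dimensional MCMC moves and registration-based dynamics priors not shown here, and all noise parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps = 500, 30
particles = rng.normal(0.0, 1.0, n)          # initial tag-position hypotheses
truth = 0.0
for _ in range(steps):
    truth += 0.5                              # true tag motion per frame
    z = truth + rng.normal(0.0, 0.8)          # noisy image measurement
    particles += 0.5 + rng.normal(0.0, 0.3, n)        # predict (motion prior)
    w = np.exp(-0.5 * ((z - particles) / 0.8) ** 2)   # weight (likelihood)
    w /= w.sum()
    particles = rng.choice(particles, size=n, p=w)    # resample
print(f"estimate {particles.mean():.2f} vs truth {truth:.2f}")
```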
Petri-net-based 2D design of DNA walker circuits.
Gilbert, David; Heiner, Monika; Rohr, Christian
2018-01-01
We consider localised DNA computation, where a DNA strand walks along a binary decision graph to compute a binary function. One of the challenges for the design of reliable walker circuits consists in leakage transitions, which occur when a walker jumps into another branch of the decision graph. We automatically identify leakage transitions, which allows for a detailed qualitative and quantitative assessment of circuit designs, design comparison, and design optimisation. The ability to identify leakage transitions is an important step in the process of optimising DNA circuit layouts where the aim is to minimise the computational error inherent in a circuit while minimising the area of the circuit. Our 2D modelling approach of DNA walker circuits relies on coloured stochastic Petri nets which enable functionality, topology and dimensionality all to be integrated in one two-dimensional model. Our modelling and analysis approach can be easily extended to 3-dimensional walker systems.
Assessment of the Denver Regional Transportation District's automatic vehicle location system
DOT National Transportation Integrated Search
2000-08-01
The purpose of this evaluation was to determine how well the Denver Regional Transportation District's (RTD) automatic vehicle location (AVL) system achieved its major objectives of improving scheduling efficiency, improving the ability of dispatcher...
Assessment of WMATA's Automatic Fare Collection Equipment Performance
DOT National Transportation Integrated Search
1981-01-01
The Washington Metropolitan Area Transit Authority (WMATA) has had an Automatic Fare Collection (AFC) system in operation since June 1977. The AFC system, comprised of entry/exit gates, farecard vendors, and addfare machines, initially encountered ma...
Kampf, Günter; Degenhardt, Stina; Lackner, Sibylle; Ostermeyer, Christiane
2014-01-01
Background: It has recently been reported that reusable dispensers for surface disinfection tissues may be contaminated, especially with adapted Achromobacter species 3, when products based on surface-active ingredients are used. Fresh solution may quickly become recontaminated if dispensers are not processed adequately. Methods: We evaluated the abilities of six manual and three automatic processes for processing contaminated dispensers to prevent recolonisation of a freshly-prepared disinfectant solution (Mikrobac forte 0.5%). Dispensers were left at room temperature for 28 days. Samples of the disinfectant solution were taken every 7 days and assessed quantitatively for bacterial contamination. Results: All automatic procedures prevented recolonisation of the disinfectant solution when a temperature of 60–70°C was ensured for at least 5 min, with or without the addition of chemical cleaning agents. Manual procedures prevented recontamination of the disinfectant solution when rinsing with hot water or a thorough cleaning step was performed before treating all surfaces with an alcohol-based disinfectant or an oxygen-releaser. Other cleaning and disinfection procedures, including the use of an alcohol-based disinfectant, did not prevent recolonisation. Conclusions: These results indicate that not all processes are effective for processing reusable dispensers for surface-disinfectant tissues, and that a high temperature during the cleaning step or use of a biofilm-active cleaning agent are essential. PMID:24653973
Rayarao, Geetha; Biederman, Robert W W; Williams, Ronald B; Yamrozik, June A; Lombardi, Richard; Doyle, Mark
2018-01-01
To establish the clinical validity and accuracy of automatic thresholding and manual trimming (ATMT) by comparing the method with the conventional contouring method for in vivo cardiac volume measurements. CMR was performed on 40 subjects (30 patients and 10 controls) using steady-state free precession cine sequences with slices oriented in the short-axis and acquired contiguously from base to apex. Left ventricular (LV) volumes, end-diastolic volume, end-systolic volume, and stroke volume (SV) were obtained with ATMT and with the conventional contouring method. Additionally, SV was measured independently using CMR phase velocity mapping (PVM) of the aorta for validation. Three methods of calculating SV were compared by applying Bland-Altman analysis. The Bland-Altman standard deviation of variation (SD) and offset bias for LV SV for the three sets of data were: ATMT-PVM (7.65, [Formula: see text]), ATMT-contours (7.85, [Formula: see text]), and contour-PVM (11.01, 4.97), respectively. Equating the observed range to the error contribution of each approach, the error magnitude of ATMT:PVM:contours was in the ratio 1:2.4:2.5. Use of ATMT for measuring ventricular volumes accommodates trabeculae and papillary structures more intuitively than contemporary contouring methods. This results in lower variation when analyzing cardiac structure and function and consequently improved accuracy in assessing chamber volumes.
Searching for biomarkers of CDKL5 disorder: early-onset visual impairment in CDKL5 mutant mice.
Mazziotti, Raffaele; Lupori, Leonardo; Sagona, Giulia; Gennaro, Mariangela; Della Sala, Grazia; Putignano, Elena; Pizzorusso, Tommaso
2017-06-15
CDKL5 disorder is a neurodevelopmental disorder still without a cure. Murine models of CDKL5 disorder have recently been generated, raising the possibility of preclinical testing of treatments. However, unbiased, quantitative biomarkers of high translational value to monitor brain function are still missing. Moreover, the analysis of treatment is hindered by the challenge of repeatedly and non-invasively testing neuronal function. We analyzed the development of visual responses in a mouse model of CDKL5 disorder to introduce visually evoked responses as a quantitative method to assess cortical circuit function. Cortical visual responses were assessed in CDKL5 null male mice, heterozygous females, and their respective control wild-type littermates by repeated transcranial optical imaging from P27 until P32. No difference between wild-type and mutant mice was present at P25-P26, whereas defective responses appeared from P27-P28 both in heterozygous and homozygous CDKL5 mutant mice. These results were confirmed by visually evoked potentials (VEPs) recorded from the visual cortex of a different cohort. The previously imaged mice were also analyzed at P60-80 using VEPs, revealing a persistent reduction of response amplitude, reduced visual acuity, and a defective contrast function. The level of adult impairment was significantly correlated with the reduction in visual responses observed during development. Support vector machine analysis showed that multi-dimensional visual assessment can be used to automatically classify mutant and wild-type mice with high reliability. Thus, monitoring visual responses represents a promising biomarker for preclinical and clinical studies on CDKL5 disorder. © The Author 2017. Published by Oxford University Press.
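The final classification step — separating mutant from wild-type mice on multi-dimensional visual-response features — corresponds to a standard SVM workflow, sketched here with synthetic data (the feature dimensions, e.g. response amplitude, acuity, contrast gain, are assumptions):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
wt = rng.normal([1.0, 0.5, 0.8], 0.1, size=(20, 3))    # wild-type response features
mut = rng.normal([0.6, 0.35, 0.5], 0.1, size=(20, 3))  # mutant response features
X = np.vstack([wt, mut])
y = np.array([0] * 20 + [1] * 20)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated classification accuracy
```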
Humphries, Stephen M; Yagihashi, Kunihiro; Huckleberry, Jason; Rho, Byung-Hak; Schroeder, Joyce D; Strand, Matthew; Schwarz, Marvin I; Flaherty, Kevin R; Kazerooni, Ella A; van Beek, Edwin J R; Lynch, David A
2017-10-01
Purpose To evaluate associations between pulmonary function and both quantitative analysis and visual assessment of thin-section computed tomography (CT) images at baseline and at 15-month follow-up in subjects with idiopathic pulmonary fibrosis (IPF). Materials and Methods This retrospective analysis of preexisting anonymized data, collected prospectively between 2007 and 2013 in a HIPAA-compliant study, was exempt from additional institutional review board approval. The extent of lung fibrosis at baseline inspiratory chest CT in 280 subjects enrolled in the IPF Network was evaluated. Visual analysis was performed by using a semiquantitative scoring system. Computer-based quantitative analysis included CT histogram-based measurements and a data-driven textural analysis (DTA). Follow-up CT images in 72 of these subjects were also analyzed. Univariate comparisons were performed by using Spearman rank correlation. Multivariate and longitudinal analyses were performed by using a linear mixed model approach, in which models were compared by using asymptotic χ² tests. Results At baseline, all CT-derived measures showed moderate significant correlation (P < .001) with pulmonary function. At follow-up CT, changes in DTA scores showed significant correlation with changes in both forced vital capacity percentage predicted (ρ = -0.41, P < .001) and diffusing capacity for carbon monoxide percentage predicted (ρ = -0.40, P < .001). Asymptotic χ² tests showed that inclusion of DTA score significantly improved the fit of both baseline and longitudinal linear mixed models in the prediction of pulmonary function (P < .001 for both). Conclusion When compared with semiquantitative visual assessment and CT histogram-based measurements, DTA score provides additional information that can be used to predict diminished function. Automatic quantification of lung fibrosis at CT yields an index of severity that correlates with visual assessment and functional change in subjects with IPF. © RSNA, 2017.
Nintendo Wii assessment of Hoehn and Yahr score with Parkinson's disease tremor.
Koçer, Abdulkadir; Oktay, Ayse Betul
2016-01-01
Diagnosis of Parkinson's Disease (PD) by analyzing the resting tremor has been much studied using different accelerometer-based methods; however, the quantitative assessment of the Hoehn and Yahr Scale (HYS) score with a machine-learning-based system has not been previously addressed. In this study, we aimed to propose a system to automatically assess the HYS score of patients with PD. The system was evaluated and tested on a dataset containing 55 subjects, 35 of whom were patients and 20 of whom were healthy controls. The resting tremor data were gathered with the 3-axis accelerometer of the Nintendo Wii (Wiimote). The clinical disability of PD was graded from 1 to 5 by the HYS, and tremor was recorded for a 60-second period, twice from the more affected side in each patient and from the dominant extremity in each control. The HYS scores were learned with Support Vector Machines (SVM) from features of the tremor data. Thirty-two of the subjects with PD were classified correctly by our system, as were 18 of the normal subjects. The system had an average accuracy of 0.89 (range: 81-100%, varying with the HYS grade). We compared quantitative measurements of hand tremor in PD patients with staging of PD based on accelerometer data gathered using the Wii sensor. Our results showed that a machine-learning-based system with simple features can be helpful for diagnosing PD and estimating the HYS score. We believe that this portable and easy-to-use Wii sensor measure might also be applicable to continuous monitoring of resting tremor, with small modifications, in routine clinical use.
Rüsch, Nicolas; Todd, Andrew R.; Bodenhausen, Galen V.; Olschewski, Manfred; Corrigan, Patrick W.
2009-01-01
Perceived legitimacy of discrimination shapes reactions to mental illness stigma among stigmatized individuals. We assessed deliberately endorsed versus automatic shame-related reactions to mental illness as predictors of change in perceived legitimacy of discrimination over six months among 75 people with mental illness. Automatically activated shame-related associations with mental illness were measured using the Brief Implicit Association Test, deliberately endorsed beliefs via self-report. Controlling for depression and perceived stigma, stronger baseline automatic shame-related associations, but not deliberately endorsed beliefs, predicted higher perceived legitimacy of discrimination after six months. Automatically activated shame reactions may increase vulnerability to mental illness stigma. PMID:19897173
Quantitative myocardial blood flow imaging with integrated time-of-flight PET-MR.
Kero, Tanja; Nordström, Jonny; Harms, Hendrik J; Sörensen, Jens; Ahlström, Håkan; Lubberink, Mark
2017-12-01
The use of integrated PET-MR offers new opportunities for comprehensive assessment of cardiac morphology and function. However, little is known about the quantitative accuracy of cardiac PET imaging with integrated time-of-flight PET-MR. The aim of the present work was to validate the GE Signa PET-MR scanner for quantitative cardiac PET perfusion imaging. Eleven patients (nine male; mean age 59 years; range 46-74 years) with known or suspected coronary artery disease underwent 15O-water PET scans at rest and during adenosine-induced hyperaemia on a GE Discovery ST PET-CT and a GE Signa PET-MR scanner. PET-MR images were reconstructed using settings recommended by the manufacturer, including time-of-flight (TOF). Data were analysed semi-automatically using Cardiac VUer software, resulting in both parametric myocardial blood flow (MBF) images and segment-based MBF values. Correlation and agreement between PET-CT-based and PET-MR-based MBF values for all three coronary artery territories were assessed using regression analysis and intra-class correlation coefficients (ICC). In addition to the cardiac PET-MR reconstruction protocol recommended by the manufacturer, comparisons were made using a PET-CT resolution-matched reconstruction protocol both without and with TOF to assess the effect of time-of-flight and reconstruction parameters on quantitative MBF values. Stress MBF data from one patient were excluded due to movement during the PET-CT scanning. Mean MBF values at rest and stress were (0.92 ± 0.12) and (2.74 ± 1.37) mL/g/min for PET-CT and (0.90 ± 0.23) and (2.65 ± 1.15) mL/g/min for PET-MR (p = 0.33 and p = 0.74). The ICC between PET-CT-based and PET-MR-based regional MBF was 0.98. Image quality was improved with PET-MR as compared to PET-CT. The ICC between PET-MR-based regional MBF with and without TOF and using different filter and reconstruction settings was 1.00. PET-MR-based MBF values correlated well with PET-CT-based MBF values, and the parametric PET-MR images were excellent. TOF and reconstruction settings had little impact on MBF values.
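15O-water MBF quantification conventionally fits a one-tissue (Kety) compartment model, in which the tissue curve is the arterial input convolved with a monoexponential: C_T(t) = F · C_a(t) ⊗ exp(−(F/V_T)·t). The sketch below fits that standard model to a synthetic curve; it is not the Cardiac VUer implementation, and spillover and blood-volume terms are omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 240.0, 1.0)                 # s, uniform sampling
dt = t[1] - t[0]
Ca = (t / 30.0) ** 2 * np.exp(-t / 30.0)     # synthetic arterial input function

def one_tissue(t, F, Vt):
    # C_T(t) = F * Ca(t) convolved with exp(-(F/Vt) * t)
    irf = F * np.exp(-(F / Vt) * t)
    return np.convolve(Ca, irf)[: t.size] * dt

true_F, true_Vt = 0.015, 0.8                 # mL/g/s and mL/g (illustrative values)
Ct = one_tissue(t, true_F, true_Vt)
Ct += np.random.default_rng(0).normal(0, 0.002, t.size)   # measurement noise

(F_fit, Vt_fit), _ = curve_fit(one_tissue, t, Ct, p0=[0.01, 0.5])
print(f"MBF = {F_fit * 60:.2f} mL/g/min")    # per-second flow converted to per-minute
```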
NASA Astrophysics Data System (ADS)
Martins, T. M.; Kelman, R.; Metello, M.; Ciarlini, A.; Granville, A. C.; Hespanhol, P.; Castro, T. L.; Gottin, V. M.; Pereira, M. V. F.
2015-12-01
The hydroelectric potential of a river is proportional to its head and water flows. Selecting the best development alternative for a greenfield watershed is a difficult task, since it must balance demands for infrastructure, especially in the developing world where a large potential remains unexplored, with environmental conservation. Discussions usually diverge into antagonistic views, as in recent projects in the Amazon forest, for example. This motivates the construction of a computational tool to support a more qualified debate regarding development/conservation options. HERA provides the optimal head-division partition of a river considering technical, economic, and environmental aspects. HERA has three main components: (i) GIS pre-processing of topographic and hydrologic data; (ii) automatic engineering and equipment design and budget estimation for candidate projects; and (iii) translation of the division-partition problem into a mathematical programming model. By integrating automatic calculation with geoprocessing tools, cloud computation, and optimization techniques, HERA makes it possible to compare countless head-partition alternatives intrinsically - a great advantage with respect to traditional field surveys followed by engineering design methods. Based on optimization techniques, HERA determines which hydro plants should be built, including location, design, technical data (e.g. water head, reservoir area and volume), engineering design (dam, spillways, etc.) and costs. The results can be visualized in the HERA interface and exported to GIS software, Google Earth or CAD systems. HERA has a global scope of application since the main input data are a Digital Terrain Model and water inflows at gauging stations. The objective is to contribute to an increased rationality of decisions by presenting to the stakeholders a clear and quantitative view of the alternatives, their opportunities and threats.
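The head-division partition can be cast as an integer program: pick at most one candidate plant per group of mutually exclusive overlapping head ranges, maximizing energy under an environmental budget. A toy sketch with PuLP follows; the candidate data are invented and HERA's actual formulation is considerably richer.

```python
import pulp

# Hypothetical candidates: name -> (energy GWh/yr, flooded area km2)
candidates = {
    "A_low":  (500, 120), "A_high": (800, 300),
    "B":      (650, 150), "C":      (400, 60),
}
max_flooded_area = 350   # environmental budget (assumed)

prob = pulp.LpProblem("head_partition", pulp.LpMaximize)
x = {k: pulp.LpVariable(k, cat="Binary") for k in candidates}
prob += pulp.lpSum(x[k] * candidates[k][0] for k in candidates)       # maximize energy
prob += pulp.lpSum(x[k] * candidates[k][1] for k in candidates) <= max_flooded_area
prob += x["A_low"] + x["A_high"] <= 1    # overlapping head ranges: build at most one
prob.solve()
print([k for k in x if x[k].value() == 1])   # selected plants
```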
Pouch, Alison M; Vergnat, Mathieu; McGarvey, Jeremy R; Ferrari, Giovanni; Jackson, Benjamin M; Sehgal, Chandra M; Yushkevich, Paul A; Gorman, Robert C; Gorman, Joseph H
2014-01-01
The basis of mitral annuloplasty ring design has progressed from qualitative surgical intuition to experimental and theoretical analysis of annular geometry with quantitative imaging techniques. In this work, we present an automated three-dimensional (3D) echocardiographic image analysis method that can be used to statistically assess variability in normal mitral annular geometry to support advancement in annuloplasty ring design. Three-dimensional patient-specific models of the mitral annulus were automatically generated from 3D echocardiographic images acquired from subjects with normal mitral valve structure and function. Geometric annular measurements including annular circumference, annular height, septolateral diameter, intercommissural width, and the annular height to intercommissural width ratio were automatically calculated. A mean 3D annular contour was computed, and principal component analysis was used to evaluate variability in normal annular shape. The following mean ± standard deviations were obtained from 3D echocardiographic image analysis: annular circumference, 107.0 ± 14.6 mm; annular height, 7.6 ± 2.8 mm; septolateral diameter, 28.5 ± 3.7 mm; intercommissural width, 33.0 ± 5.3 mm; and annular height to intercommissural width ratio, 22.7% ± 6.9%. Principal component analysis indicated that shape variability was primarily related to overall annular size, with more subtle variation in the skewness and height of the anterior annular peak, independent of annular diameter. Patient-specific 3D echocardiographic-based modeling of the human mitral valve enables statistical analysis of physiologically normal mitral annular geometry. The tool can potentially lead to the development of a new generation of annuloplasty rings that restore the diseased mitral valve annulus back to a truly normal geometry. Copyright © 2014 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Hsu, Kuo-Hsien
2012-11-01
Formosat-2 imagery is a kind of high-spatial-resolution (2 m GSD) remote sensing satellite data comprising one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistics of each image using an Automatic Cloud Coverage Assessment (ACCA) algorithm. The cloud statistics are subsequently recorded as important metadata for the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. For pre-processing analysis, unsupervised K-means classification, Sobel's method, thresholding, non-cloudy pixel re-examination, and a cross-band filter method are implemented in sequence to determine the cloud statistics. For post-processing analysis, a box-counting fractal method is implemented. In other words, the cloud statistics are first determined via pre-processing analysis, and the correctness of the cloud statistics for the different spectral bands is then cross-examined qualitatively and quantitatively via post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, in this work, we first conduct a series of experiments with clustering-based and spatial thresholding methods, including Otsu's, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods, for performance comparison. The results show that Otsu's and GE methods both perform better than the others for Formosat-2 images. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding method, successfully extracted the cloudy pixels of Formosat-2 images for accurate cloud statistics estimation.
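Otsu's method, the thresholding step the experiments favored, is available in scikit-image. A minimal cloud-masking sketch (band choice and the surrounding K-means/Sobel/cross-band steps of the full ACCA pipeline are omitted):

```python
import numpy as np
from skimage.filters import threshold_otsu

def cloud_fraction(band):
    """Estimate the cloud statistic of one spectral band via Otsu thresholding.

    band: 2D reflectance/DN array; clouds are assumed brighter than background.
    """
    t = threshold_otsu(band)
    cloud_mask = band > t
    return cloud_mask.mean()    # fraction of cloudy pixels

band = np.random.default_rng(0).random((512, 512))   # placeholder image
print(f"cloud coverage: {100 * cloud_fraction(band):.1f}%")
```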
Choël, Marie; Deboudt, Karine; Osán, János; Flament, Pascal; Van Grieken, René
2005-09-01
Atmospheric aerosols consist of a complex heterogeneous mixture of particles. Single-particle analysis techniques are known to provide unique information on the size-resolved chemical composition of aerosols. A scanning electron microscope (SEM) combined with a thin-window energy-dispersive X-ray (EDX) detector enables the morphological and elemental analysis of single particles down to 0.1 μm with a detection limit of 1-10 wt %, low-Z elements included. To obtain data statistically representative of the air masses sampled, a computer-controlled procedure can be implemented in order to run hundreds of single-particle analyses (typically 1000-2000) automatically in a relatively short period of time (generally 4-8 h, depending on the setup and on the particle loading). However, automated particle analysis by SEM-EDX raises two practical challenges: the accuracy of the particle recognition and the reliability of the quantitative analysis, especially for micrometer-sized particles with low atomic number contents. Since low-Z analysis is hampered by the use of traditional polycarbonate membranes, an alternate choice of substrate is a prerequisite. In this work, boron is being studied as a promising material for particle microanalysis. As EDX is generally said to probe a volume of approximately 1 μm³, geometry effects arise from the finite size of microparticles. These particle geometry effects must be corrected by means of a robust concentration calculation procedure. Conventional quantitative methods developed for bulk samples generate elemental concentrations considerably in error when applied to microparticles. A new methodology for particle microanalysis, combining the use of boron as the substrate material and a reverse Monte Carlo quantitative program, was tested on standard particles ranging from 0.25 to 10 μm. We demonstrate that the quantitative determination of low-Z elements in microparticles is achievable and that highly accurate results can be obtained using the automatic data processing described here compared to conventional methods.
Evaluation of Automatic Vehicle Location accuracy
DOT National Transportation Integrated Search
1999-01-01
This study assesses the accuracy of the Automatic Vehicle Location (AVL) data provided for the buses of the Ann Arbor Transportation Authority with Global Positioning System (GPS) technology. In a sample of eighty-nine bus trips two kinds of accuracy...
Evaluation of Prototype Automatic Truck Rollover Warning Systems
DOT National Transportation Integrated Search
1998-01-01
Three operating prototype Automatic Truck Rollover Warning Systems (ATRWS) installed on the Capital Beltway in Maryland and Virginia were evaluated for 3 years. The general objectives of this evaluation were to assess how the ATRWS performed and to d...
Roadway system assessment using bluetooth-based automatic vehicle identification travel time data.
DOT National Transportation Integrated Search
2012-12-01
This monograph is an exposition of several practice-ready methodologies for automatic vehicle identification (AVI) data collection : systems. This includes considerations in the physical setup of the collection system as well as the interpretation of...
NASA Astrophysics Data System (ADS)
Udpa, Nitin; Sampat, Mehul P.; Kim, Min Soon; Reece, Gregory P.; Markey, Mia K.
2007-03-01
The contemporary goals of breast cancer treatment are not limited to cure but include maximizing quality of life. All breast cancer treatment can adversely affect breast appearance. Developing objective, quantifiable methods to assess breast appearance is important to understand the impact of deformity on patient quality of life, guide selection of current treatments, and make rational treatment advances. A few measures of aesthetic properties such as symmetry have been developed. They are computed from the distances between manually identified fiducial points on digital photographs. However, this is time-consuming and subject to intra- and inter-observer variability. The purpose of this study is to investigate methods for automatic localization of fiducial points on anterior-posterior digital photographs taken to document the outcomes of breast reconstruction. Particular emphasis is placed on automatic localization of the nipple complex since the most widely used aesthetic measure, the Breast Retraction Assessment, quantifies the symmetry of nipple locations. The nipple complexes are automatically localized using normalized cross-correlation with a template bank of variants of Gaussian and Laplacian of Gaussian filters. A probability map of likely nipple locations determined from the image database is used to reduce the number of false positive detections from the matched filter operation. The accuracy of the nipple detection was evaluated relative to markings made by three human observers. The impact of using the fiducial point locations as identified by the automatic method, as opposed to the manual method, on the calculation of the Breast Retraction Assessment was also evaluated.
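A minimal sketch of the matched-filter idea described above: normalized cross-correlation of an image against a small bank of Gaussian and Laplacian-of-Gaussian templates, keeping the best response. Template sizes, sigmas, and the test image are assumptions for illustration; the probability-map weighting is omitted.

```python
# Sketch: normalized cross-correlation against Gaussian and LoG templates,
# a stand-in for the matched-filter nipple detector described above.
import numpy as np
from scipy.ndimage import gaussian_laplace
from skimage.feature import match_template

def gaussian_template(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2))

def detect(image, sigmas=(3, 5, 8)):
    """Return the best-matching location and its NCC score over the bank."""
    best_score, best_xy = -np.inf, None
    for s in sigmas:
        for tpl in (gaussian_template(6 * s + 1, s),
                    -gaussian_laplace(gaussian_template(6 * s + 1, s), s)):
            ncc = match_template(image, tpl, pad_input=True)
            ij = np.unravel_index(np.argmax(ncc), ncc.shape)
            if ncc[ij] > best_score:
                best_score, best_xy = ncc[ij], ij
    return best_xy, best_score

image = np.random.rand(256, 256)   # stand-in for a digital photograph
loc, score = detect(image)
print("best match at", loc, "score %.2f" % score)
```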
Fu, Yili; Gao, Wenpeng; Chen, Xiaoguang; Zhu, Minwei; Shen, Weigao; Wang, Shuguo
2010-01-01
The reference system based on the fourth ventricular landmarks (including the fastigial point and ventricular floor plane) is used in medical image analysis of the brain stem. The objective of this study was to develop a rapid, robust, and accurate method for the automatic identification of this reference system on T1-weighted magnetic resonance images. The fully automated method developed in this study consisted of four stages: preprocessing of the data set, expectation-maximization algorithm-based extraction of the fourth ventricle in the region of interest, a coarse-to-fine strategy for identifying the fastigial point, and localization of the base point. The method was evaluated qualitatively on 27 BrainWeb data sets and quantitatively on 18 Internet Brain Segmentation Repository data sets and 30 clinical scans. The results of qualitative evaluation indicated that the method was robust to rotation, landmark variation, noise, and inhomogeneity. The results of quantitative evaluation indicated that the method was able to identify the reference system with an accuracy of 0.7 +/- 0.2 mm for the fastigial point and 1.1 +/- 0.3 mm for the base point. It took <6 seconds for the method to identify the related landmarks on a personal computer with an Intel Core 2 6300 processor and 2 GB of random-access memory. The proposed method for the automatic identification of the reference system based on the fourth ventricular landmarks was shown to be rapid, robust, and accurate. The method has potential utility in image registration and computer-aided surgery.
Automated skin segmentation in ultrasonic evaluation of skin toxicity in breast cancer radiotherapy.
Gao, Yi; Tannenbaum, Allen; Chen, Hao; Torres, Mylin; Yoshida, Emi; Yang, Xiaofeng; Wang, Yuefeng; Curran, Walter; Liu, Tian
2013-11-01
Skin toxicity is the most common side effect of breast cancer radiotherapy and impairs the quality of life of many breast cancer survivors. We, along with other researchers, have recently found quantitative ultrasound to be effective as a skin toxicity assessment tool. Although more reliable than standard clinical evaluations (visual observation and palpation), the current procedure for ultrasound-based skin toxicity measurements requires manual delineation of the skin layers (i.e., epidermis-dermis and dermis-hypodermis interfaces) on each ultrasound B-mode image. Manual skin segmentation is time consuming and subjective. Moreover, radiation-induced skin injury may decrease image contrast between the dermis and hypodermis, which increases the difficulty of delineation. Therefore, we have developed an automatic skin segmentation tool (ASST) based on the active contour model with two significant modifications: (i) The proposed algorithm introduces a novel dual-curve scheme for the double skin layer extraction, as opposed to the original single active contour method. (ii) The proposed algorithm is based on a geometric contour framework as opposed to the previous parametric algorithm. This ASST algorithm was tested on a breast cancer image database of 730 ultrasound breast images (73 ultrasound studies of 23 patients). We compared skin segmentation results obtained with the ASST with manual contours performed by two physicians. The average percentage differences in skin thickness between the ASST measurement and that of each physician were less than 5% (4.8 ± 17.8% and -3.8 ± 21.1%, respectively). In summary, we have developed an automatic skin segmentation method that ensures objective assessment of radiation-induced changes in skin thickness. Our ultrasound technology offers a unique opportunity to quantify tissue injury in a more meaningful and reproducible manner than the subjective assessments currently employed in the clinic. Copyright © 2013 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
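The sketch below illustrates the dual-curve, geometric (level-set) idea with scikit-image's morphological geodesic active contour; the B-mode stand-in, initial bands, and iteration counts are assumptions, not the ASST implementation.

```python
# Sketch: extracting the epidermis-dermis and dermis-hypodermis interfaces
# with two geodesic active contours; inputs and seeds are illustrative.
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

def segment_skin_layers(bmode):
    g = inverse_gaussian_gradient(bmode)     # edge-stopping map
    # two initial curves: a shallow band for the epidermis-dermis interface,
    # a deeper band for the dermis-hypodermis interface
    init_top = np.zeros_like(bmode, dtype=np.int8)
    init_top[5:25, :] = 1
    init_bot = np.zeros_like(bmode, dtype=np.int8)
    init_bot[30:70, :] = 1
    top = morphological_geodesic_active_contour(g, 100, init_level_set=init_top)
    bottom = morphological_geodesic_active_contour(g, 100, init_level_set=init_bot)
    return top, bottom

bmode = np.random.rand(128, 256)   # stand-in for an ultrasound B-mode image
top, bottom = segment_skin_layers(bmode)
```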
Petridou, E; Kibiro, M; Gladwell, C; Malcolm, P; Toms, A; Juette, A; Borga, M; Dahlqvist Leinhard, O; Romu, T; Kasmai, B; Denton, E
2017-07-01
To compare magnetic resonance imaging (MRI)-derived breast density measurements using automatic segmentation algorithms with radiologist estimations using the Breast Imaging Reporting and Data System (BI-RADS) density classification. Forty women undergoing mammography and dynamic breast MRI as part of their clinical management were recruited. Fat-water separated MRI images derived from a two-point Dixon technique, phase-sensitive reconstruction, and atlas-based segmentation were obtained before and after intravenous contrast medium administration. Breast density was assessed using software from Advanced MR Analytics (AMRA), Linköping, Sweden, with results compared to the widely used four-quartile quantitative BI-RADS scale. The proportion of glandular tissue in the breast on MRI was derived from the AMRA sequence. The mean unenhanced breast density was 0.31±0.22 (mean±SD; left) and 0.29±0.21 (right). Mean breast density on post-contrast images was 0.32±0.19 (left) and 0.32±0.20 (right). There was "almost perfect" correlation between pre- and post-contrast breast density quantification: Spearman's correlation rho=0.98 (95% confidence intervals [CI]: 0.97-0.99; left) and rho=0.99 (95% CI: 0.98-0.99; right). The 95% limits of agreement were -0.11 to 0.08 (left) and -0.08 to 0.03 (right). Interobserver reliability for BI-RADS was "substantial": weighted Kappa k=0.8 (95% CI: 0.74-0.87). The Spearman correlation coefficient between BI-RADS and MRI breast density was rho=0.73 (95% CI: 0.60-0.82; left) and rho=0.75 (95% CI: 0.63-0.83; right), which was also "substantial". The AMRA sequence provides a fully automated, reproducible, objective assessment of fibroglandular breast tissue proportion that correlates well with mammographic assessment of breast density, with the added advantage of avoiding ionising radiation. Copyright © 2017 The Royal College of Radiologists. All rights reserved.
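For readers wanting to reproduce the two agreement statistics quoted above, here is a short sketch computing Spearman's rho and 95% limits of agreement on paired density measurements; the data arrays are illustrative stand-ins.

```python
# Sketch: Spearman correlation and Bland-Altman 95% limits of agreement
# for paired pre-/post-contrast density measurements (illustrative data).
import numpy as np
from scipy.stats import spearmanr

pre = np.random.rand(40)                       # stand-in pre-contrast densities
post = pre + np.random.normal(0, 0.02, 40)     # stand-in post-contrast densities

rho, p = spearmanr(pre, post)

diff = pre - post
loa = (diff.mean() - 1.96 * diff.std(ddof=1),  # lower 95% limit of agreement
       diff.mean() + 1.96 * diff.std(ddof=1))  # upper 95% limit of agreement

print("Spearman rho = %.2f (p = %.3g), LoA = (%.3f, %.3f)" % (rho, p, *loa))
```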
The Fringe Reading Facility at the Max-Planck-Institut fuer Stroemungsforschung
NASA Astrophysics Data System (ADS)
Becker, F.; Meier, G. E. A.; Wegner, H.; Timm, R.; Wenskus, R.
1987-05-01
A Mach-Zehnder interferometer is used for optical flow measurements in a transonic wind tunnel. Holographic interferograms are reconstructed by illumination with a He-Ne-laser and viewed by a video camera through wide angle optics. This setup was used for investigating industrial double exposure holograms of truck tires in order to develop methods of automatic recognition of certain manufacturing faults. Automatic input is achieved by a transient recorder digitizing the output of a TV camera and transferring the digitized data to a PDP11-34. Interest centered around sequences of interferograms showing the interaction of vortices with a profile and subsequent emission of sound generated by this process. The objective is the extraction of quantitative data which relates to the emission of noise.
Measurement Marker Recognition In A Time Sequence Of Infrared Images For Biomedical Applications
NASA Astrophysics Data System (ADS)
Fiorini, A. R.; Fumero, R.; Marchesi, R.
1986-03-01
In thermographic measurements, quantitative surface temperature evaluation is often uncertain. The main reason is the lack of available reference points under transient conditions. Reflective markers were used for automatic marker recognition and pixel coordinate computation. An algorithm selects marker icons to match marker references where particular luminance conditions are satisfied. Automatic marker recognition allows luminance compensation and temperature calibration of recorded infrared images. A biomedical application is presented: the dynamic behaviour of surface temperature distributions is investigated in order to study the performance of two different pumping systems for extracorporeal circulation. Sequences of images are compared and the results are discussed. Finally, the algorithm makes it possible to monitor the experimental environment and to raise an alert when unusual experimental conditions are present.
Automatic measurement of target crossing speed
NASA Astrophysics Data System (ADS)
Wardell, Mark; Lougheed, James H.
1992-11-01
The motion of ground vehicle targets after a ballistic round is launched can be a major source of inaccuracy for small (handheld) anti-armour weapon systems. A method of automatically measuring the crossing component to compensate the fire control solution has been devised and tested against various targets in a range of environments. A photodetector array aligned with the sight's horizontal reticle obtains scene features, which are digitized and processed to separate target from sight motion. Relative motion of the target against the background is briefly monitored to deduce angular crossing rate and a compensating lead angle is introduced into the aim point. Research to gather quantitative data and optimize algorithm performance is described, and some results from field testing are presented.
A review of automatic mass detection and segmentation in mammographic images.
Oliver, Arnau; Freixenet, Jordi; Martí, Joan; Pérez, Elsa; Pont, Josep; Denton, Erika R E; Zwiggelaar, Reyer
2010-04-01
The aim of this paper is to review existing approaches to the automatic detection and segmentation of masses in mammographic images, highlighting the key points and main differences between the strategies used. The key objective is to point out the advantages and disadvantages of the various approaches. In contrast with other reviews, which only describe and compare different approaches qualitatively, this review also provides a quantitative comparison. The performance of seven mass detection methods is compared using two different mammographic databases: a public digitised database and a local full-field digital database. The results are given in terms of Receiver Operating Characteristic (ROC) and Free-response Receiver Operating Characteristic (FROC) analysis. Copyright 2009 Elsevier B.V. All rights reserved.
Ishikawa, Masahiro; Murakami, Yuri; Ahi, Sercan Taha; Yamaguchi, Masahiro; Kobayashi, Naoki; Kiyuna, Tomoharu; Yamashita, Yoshiko; Saito, Akira; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie
2016-01-01
This paper proposes a digital image analysis method to support quantitative pathology by automatically segmenting the hepatocyte structure and quantifying its morphological features. To structurally analyze histopathological hepatic images, we isolate the trabeculae by extracting the sinusoids, fat droplets, and stromata. We then measure the morphological features of the extracted trabeculae, divide the image into cords, and calculate the feature values of the local cords. We propose a method of calculating the nuclear–cytoplasmic ratio, nuclear density, and number of layers using the local cords. Furthermore, we evaluate the effectiveness of the proposed method using surgical specimens. The proposed method was found to be an effective method for the quantification of the Edmondson grade. PMID:27335894
Health smart home: towards an assistant tool for automatic assessment of the dependence of elders.
Le, Xuan Hoa Binh; Di Mascolo, Maria; Gouin, Alexia; Noury, Norbert
2007-01-01
In order to help elders living alone to age in place independently and safely, it can be useful to have an assistant tool that can automatically assess their dependence and issue an alert if there is any loss of autonomy. Dependence can be assessed by the degree to which the elders perform activities of daily living. This article presents an approach enabling activity recognition for an elder living alone in a Health Smart Home equipped with noninvasive sensors.
Rodenacker, K; Aubele, M; Hutzler, P; Adiga, P S
1997-01-01
In molecular pathology, numerical chromosome aberrations have been found to be decisive for the prognosis of malignancy in tumours. The existence of such aberrations can be detected by interphase fluorescence in situ hybridization (FISH). The gain or loss of certain base sequences in the deoxyribonucleic acid (DNA) can be estimated by counting the number of FISH signals per cell nucleus. The quantitative evaluation of such events is a necessary condition for prospective use in diagnostic pathology. To avoid occlusions of signals, the cell nucleus has to be analyzed in three dimensions. Confocal laser scanning microscopy is the means to obtain series of optical thin sections from fluorescence-stained or marked material to fulfill the conditions mentioned above. A graphical user interface (GUI) to a software package for the display, inspection, counting, and (semi-)automatic analysis of 3-D images by pathologists is outlined, including the underlying methods developed for 3-D image interaction and segmentation. The preparative methods are briefly described. The main emphasis is on the methodical questions of computer-aided analysis of large 3-D image data sets for pathologists. Several automated analysis steps can be performed for segmentation and subsequent quantification. However, tumour material, in contrast to isolated or cultured cells, is difficult even for visual inspection. At present, fully automated digital image analysis of 3-D data is not in sight; a semi-automatic segmentation method is therefore presented here.
NASA Astrophysics Data System (ADS)
Zhang, Min; Katsumata, Akitoshi; Muramatsu, Chisako; Hara, Takeshi; Suzuki, Hiroki; Fujita, Hiroshi
2014-03-01
Periodontal disease is a common dental disease that affects many adults. The presence of alveolar bone resorption, which can be observed on dental panoramic radiographs, is one of the most important signs of the progression of periodontal disease, so automatic evaluation of alveolar bone resorption is of clear clinical value in dental radiology. The purpose of this study was to propose, for the first time, a system for automated alveolar-bone-resorption evaluation from digital dental panoramic radiographs. The proposed system enables visualization and quantitative evaluation of the degree of alveolar bone resorption surrounding the teeth. It consists of the following steps: (1) pre-processing of a test image; (2) detection of tooth root apices with a Gabor filter and curve fitting of the root apex line; (3) detection of features related to the alveolar bone using an image phase congruency map and template matching, and curve fitting of the alveolar line; (4) detection of the occlusion line with a selected Gabor filter; and (5) evaluation of the quantitative alveolar-bone-resorption degree in the area surrounding the teeth by computing the average ratio of the height of the alveolar bone to the height of the teeth. The proposed scheme was applied to 30 patient cases of digital panoramic radiographs with alveolar bone resorption of different stages. Our initial trial on these test cases indicates that the quantitative evaluation results are correlated with the alveolar-bone-resorption degree, although the performance still needs further improvement. The method therefore has potential clinical practicability.
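A small sketch of the oriented-filtering idea used in steps (2) and (4): a Gabor filter bank whose maximum magnitude response highlights oriented structures such as the root apex and occlusion lines. Frequencies, orientations, and the stand-in radiograph are illustrative assumptions.

```python
# Sketch: a small Gabor filter bank; the maximum magnitude response over the
# bank emphasizes oriented image structures. Parameters are illustrative.
import numpy as np
from skimage.filters import gabor

def gabor_response(image, frequencies=(0.1, 0.2), thetas=(0, np.pi/4, np.pi/2)):
    """Return the per-pixel maximum Gabor magnitude over the filter bank."""
    best = np.zeros_like(image, dtype=float)
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(image, frequency=f, theta=t)
            best = np.maximum(best, np.hypot(real, imag))
    return best

radiograph = np.random.rand(200, 400)   # stand-in for a panoramic radiograph
response = gabor_response(radiograph)
```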
Predicting Response to Neoadjuvant Chemotherapy with PET Imaging Using Convolutional Neural Networks
Ypsilantis, Petros-Pavlos; Siddique, Musib; Sohn, Hyon-Mok; Davies, Andrew; Cook, Gary; Goh, Vicky; Montana, Giovanni
2015-01-01
Imaging of cancer with 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) has become a standard component of diagnosis and staging in oncology, and is becoming more important as a quantitative monitor of individual response to therapy. In this article we investigate the challenging problem of predicting a patient's response to neoadjuvant chemotherapy from a single 18F-FDG PET scan taken prior to treatment. We take a "radiomics" approach whereby a large number of quantitative features are automatically extracted from pretherapy PET images in order to build a comprehensive quantification of the tumor phenotype. While the dominant methodology relies on hand-crafted texture features, we explore the potential of automatically learning low- to high-level features directly from PET scans. We report on a study that compares the performance of two competing radiomics strategies: an approach based on state-of-the-art statistical classifiers using over 100 quantitative imaging descriptors, including texture features as well as standardized uptake values, and a convolutional neural network, 3S-CNN, trained directly from PET scans by taking sets of adjacent intra-tumor slices. Our experimental results, based on a sample of 107 patients with esophageal cancer, provide initial evidence that convolutional neural networks have the potential to extract PET imaging representations that are highly predictive of response to therapy. On this dataset, 3S-CNN achieves an average 80.7% sensitivity and 81.6% specificity in predicting non-responders, and outperforms other competing predictive models. PMID:26355298
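As a rough sketch of the slice-based CNN idea (not the authors' 3S-CNN architecture), the following PyTorch model classifies stacks of adjacent intra-tumor slices into responder/non-responder; layer sizes, slice count, and input resolution are assumptions.

```python
# Sketch: a minimal CNN over stacks of adjacent PET slices; the architecture
# and tensor shapes are illustrative assumptions, not the published network.
import torch
import torch.nn as nn

class SliceCNN(nn.Module):
    def __init__(self, n_slices=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_slices, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),                 # responder vs non-responder
        )

    def forward(self, x):                     # x: (batch, n_slices, 64, 64)
        return self.classifier(self.features(x))

model = SliceCNN()
logits = model(torch.randn(8, 3, 64, 64))     # hypothetical batch of slice sets
print(logits.shape)                            # torch.Size([8, 2])
```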
NASA Astrophysics Data System (ADS)
Shvelidze, Teimuraz; Malyuto, Valeri
2015-08-01
Quantitative spectral classification of F, G and K stars has been performed with the 70-cm telescope of the Abastumani Astrophysical Observatory in areas of the main meridional section of the Galaxy for which proper motion data are available. Fundamental parameters have been obtained for several hundred stars. Space densities of stars of different spectral types, the stellar luminosity function and the relationships between the kinematics and metallicity of stars have been studied. The results have confirmed and extended the conclusions drawn from some previous spectroscopic and photometric surveys. Many plates have also been obtained for other important directions in the sky, such as the Kapteyn areas and the Galactic anticentre. A very rich collection of photographic objective spectral plates (about 30,000 accumulated over the last 60 years) is available at the Abastumani Observatory, covering the wavelength range 3900-4900 Å at about 2 Å resolution. The availability of new devices for automatic registration of spectra from photographic plates, together with some recently developed classification techniques, may now allow the creation of a modern system of automatic spectral classification, with extension of the classification to additional spectral types (B-A, M). These data can be treated with the same quantitative method applied here. The method may also be applied to other available and future spectroscopic data of similar resolution, notably those obtained with large-format CCD detectors on Schmidt-type telescopes.
Towards automation of user interface design
NASA Technical Reports Server (NTRS)
Gastner, Rainer; Kraetzschmar, Gerhard K.; Lutz, Ernst
1992-01-01
This paper suggests an approach to automatic software design in the domain of graphical user interfaces. There are still some drawbacks in existing user interface management systems (UIMSs), which basically offer only quantitative layout specifications via direct manipulation. Our approach provides a convenient way to obtain a default graphical user interface which may be customized and redesigned easily in further prototyping cycles.
Quantitative high-throughput population dynamics in continuous-culture by automated microscopy.
Merritt, Jason; Kuehn, Seppe
2016-09-12
We present a high-throughput method to measure abundance dynamics in microbial communities sustained in continuous culture. Our method uses custom epi-fluorescence microscopes to automatically image single cells drawn from a continuously cultured population while precisely controlling culture conditions. For clonal populations of Escherichia coli, our instrument reveals history-dependent resilience and growth-rate-dependent aggregation.
ERIC Educational Resources Information Center
Pavlik, Philip I. Jr.; Cen, Hao; Koedinger, Kenneth R.
2009-01-01
This paper describes a novel method to create a quantitative model of an educational content domain of related practice item-types using learning curves. By using a pairwise test to search for the relationships between learning curves for these item-types, we show how the test results in a set of pairwise transfer relationships that can be…
2L-PCA: a two-level principal component analyzer for quantitative drug design and its applications.
Du, Qi-Shi; Wang, Shu-Qing; Xie, Neng-Zhong; Wang, Qing-Yan; Huang, Ri-Bo; Chou, Kuo-Chen
2017-09-19
A two-level principal component predictor (2L-PCA) was proposed based on the principal component analysis (PCA) approach. It can be used to quantitatively analyze various compounds and peptides with respect to their functions or their potential to become useful drugs. One level deals with the physicochemical properties of drug molecules, while the other deals with their structural fragments. The predictor has self-learning and feedback features to automatically improve its accuracy. It is anticipated that 2L-PCA will become a very useful tool for providing timely and useful clues during the process of drug development.
Accuracy of a remote quantitative image analysis in the whole slide images.
Słodkowska, Janina; Markiewicz, Tomasz; Grala, Bartłomiej; Kozłowski, Wojciech; Papierz, Wielisław; Pleskacz, Katarzyna; Murawski, Piotr
2011-03-30
The rationale for choosing a remote quantitative method supporting a diagnostic decision requires some empirical studies and knowledge of scenarios, including valid telepathology standards. The tumours of the central nervous system [CNS] are graded on the basis of morphological features and the Ki-67 labelling index [Ki-67 LI]. Various methods have been applied for Ki-67 LI estimation. Recently we introduced the Computerized Analysis of Medical Images [CAMI] software for automated Ki-67 LI counting in digital images. The aim of our study was to explore the accuracy and reliability of a remote assessment of Ki-67 LI with CAMI software applied to whole slide images [WSI]. The WSIs, representing CNS tumours (18 meningiomas and 10 oligodendrogliomas), were stored on the server of the Warsaw University of Technology. The digital copies of entire glass slides were created automatically by the Aperio ScanScope CS with a 20x or 40x objective. Aperio's ImageScope software provided functionality for remote viewing of WSIs. The Ki-67 LI assessment was carried out on 2 of 20 selected fields of view (40x objective) representing the highest labelling areas in each WSI. The Ki-67 LI counting was performed by 3 methods: 1) manual reading in the light microscope (LM), 2) automated counting with CAMI software on digital images (DI), and 3) remote quantitation on the WSIs (WSI method). The quality of the WSIs and the technical efficiency of the on-line system were analysed. A comparative statistical analysis was performed for the results obtained by the 3 methods of Ki-67 LI counting. The preliminary analysis showed that in 18% of WSIs the Ki-67 LI results differed from those obtained by the other 2 counting methods when the quality of the glass slides was below the standard range. The results of our investigations indicate that remote automated Ki-67 LI analysis performed with the CAMI algorithm on whole slide images of meningiomas and oligodendrogliomas can be successfully used as an alternative to manual reading as well as to digital image quantitation with CAMI software. Our observations indicate that remote supervision/consultation and training are necessary for the effective use of remote quantitative analysis of WSIs.
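The core Ki-67 LI computation is simple once nuclei have been segmented; the sketch below counts immunopositive nuclei over all nuclei in a field of view. The masks and thresholds are illustrative stand-ins, not the CAMI algorithm.

```python
# Sketch: Ki-67 labelling index = positive nuclei / all nuclei, in percent.
# The masks are hypothetical segmentations, not CAMI output.
import numpy as np
from skimage.measure import label

def ki67_li(positive_mask, all_nuclei_mask):
    """Masks are boolean images of immunopositive nuclei and of all nuclei."""
    n_pos = label(positive_mask).max()         # number of connected components
    n_all = label(all_nuclei_mask).max()
    return 100.0 * n_pos / max(n_all, 1)       # labelling index in percent

all_nuclei = np.random.rand(256, 256) > 0.995  # stand-in segmentations
positive = all_nuclei & (np.random.rand(256, 256) > 0.5)
print("Ki-67 LI = %.1f%%" % ki67_li(positive, all_nuclei))
```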
Optical Fourier filtering for whole lens assessment of progressive power lenses.
Spiers, T; Hull, C C
2000-07-01
Four binary filter designs for use in an optical Fourier filtering set-up were evaluated when taking quantitative measurements and when qualitatively mapping the power variation of progressive power lenses (PPLs). The binary filters tested were concentric ring, linear grating, grid and "chevron" designs. The chevron filter was considered best for quantitative measurements since it permitted a vernier acuity task to be used for measuring the fringe spacing, significantly reducing errors, and it also gave information on the polarity of the lens power. The linear grating filter was considered best for qualitatively evaluating the power variation. Optical Fourier filtering and a Nidek automatic focimeter were then used to measure the powers in the distance and near portions of five PPLs of differing design. Mean measurement error was 0.04 D with a maximum value of 0.13 D. Good qualitative agreement was found between the iso-cylinder plots provided by the manufacturer and the Fourier filter fringe patterns for the PPLs indicating that optical Fourier filtering provides the ability to map the power distribution across the entire lens aperture without the need for multiple point measurements. Arguments are presented that demonstrate that it should be possible to derive both iso-sphere and iso-cylinder plots from the binary filter patterns.
3D OCT imaging in clinical settings: toward quantitative measurements of retinal structures
NASA Astrophysics Data System (ADS)
Zawadzki, Robert J.; Fuller, Alfred R.; Zhao, Mingtao; Wiley, David F.; Choi, Stacey S.; Bower, Bradley A.; Hamann, Bernd; Izatt, Joseph A.; Werner, John S.
2006-02-01
The acquisition speed of current FD-OCT (Fourier Domain - Optical Coherence Tomography) instruments allows rapid screening of three-dimensional (3D) volumes of human retinas in clinical settings. To take advantage of this ability requires software used by physicians to be capable of displaying and accessing volumetric data as well as supporting post processing in order to access important quantitative information such as thickness maps and segmented volumes. We describe our clinical FD-OCT system used to acquire 3D data from the human retina over the macula and optic nerve head. B-scans are registered to remove motion artifacts and post-processed with customized 3D visualization and analysis software. Our analysis software includes standard 3D visualization techniques along with a machine learning support vector machine (SVM) algorithm that allows a user to semi-automatically segment different retinal structures and layers. Our program makes possible measurements of the retinal layer thickness as well as volumes of structures of interest, despite the presence of noise and structural deformations associated with retinal pathology. Our software has been tested successfully in clinical settings for its efficacy in assessing 3D retinal structures in healthy as well as diseased cases. Our tool facilitates diagnosis and treatment monitoring of retinal diseases.
DWI-based neural fingerprinting technology: a preliminary study on stroke analysis.
Ye, Chenfei; Ma, Heather Ting; Wu, Jun; Yang, Pengfei; Chen, Xuhui; Yang, Zhengyi; Ma, Jingbo
2014-01-01
Stroke is a common neural disorder in neurology clinics. Magnetic resonance imaging (MRI) has become an important tool to assess the neural physiological changes under stroke, such as diffusion weighted imaging (DWI) and diffusion tensor imaging (DTI). Quantitative analysis of MRI images would help medical doctors to localize the stroke area in the diagnosis in terms of structural information and physiological characterization. However, current quantitative approaches can only provide localization of the disorder rather than measure physiological variation of subtypes of ischemic stroke. In the current study, we hypothesized that each kind of neural disorder would have its own unique physiological characteristics, which could be reflected by DWI images at different gradients. Based on this hypothesis, a DWI-based neural fingerprinting technology was proposed to classify subtypes of ischemic stroke. The neural fingerprint was constructed from the signal intensity of the region of interest (ROI) on the DWI images under different gradients. The fingerprint derived from the manually drawn ROI classified the subtypes with 100% accuracy. However, classification accuracy was lower when semiautomatic and automatic ROI segmentation methods were used. These preliminary results show the promising potential of DWI-based neural fingerprinting technology in stroke subtype classification. Further studies will be carried out to enhance the fingerprinting accuracy and to extend its application to other clinical practices.
Howell, Peter; Sackin, Stevie; Glenn, Kazan
2007-01-01
This program of work is intended to develop automatic recognition procedures to locate and assess stuttered dysfluencies. This and the following article together develop and test recognizers for repetitions and prolongations. The automatic recognizers classify the speech in two stages: in the first, the speech is segmented, and in the second, the segments are categorized. The units that are segmented are words. Here, assessments by human judges of the speech of 12 children who stutter are described using a corresponding procedure. The accuracy of word boundary placement across judges, the categorization of words as fluent, repetition, or prolongation, and the duration of the different fluency categories are reported. These measures allow reliable instances of repetitions and prolongations to be selected for training and assessing the recognizers in the subsequent paper. PMID:9328878
Natural language processing of spoken diet records (SDRs).
Lacson, Ronilda; Long, William
2006-01-01
Dietary assessment is a fundamental aspect of nutritional evaluation that is essential for management of obesity as well as for assessing dietary impact on chronic diseases. Various methods have been used for dietary assessment including written records, 24-hour recalls, and food frequency questionnaires. The use of mobile phones to provide real-time dietary records provides potential advantages for accessibility, ease of use and automated documentation. However, understanding even a perfect transcript of spoken dietary records (SDRs) is challenging for people. This work presents a first step towards automatic analysis of SDRs. Our approach consists of four steps - identification of food items, identification of food quantifiers, classification of food quantifiers and temporal annotation. Our method enables automatic extraction of dietary information from SDRs, which in turn allows automated mapping to a Diet History Questionnaire dietary database. Our model has an accuracy of 90%. This work demonstrates the feasibility of automatically processing SDRs.
NASA Technical Reports Server (NTRS)
Vidulich, M. A.; Wickens, C. D.
1985-01-01
Dissociations between subjective workload assessments and performance were investigated. The difficulty of a Sternberg memory search task was manipulated by varying stimulus presentation rate, stimulus discernibility, value of good performance, and automaticity of performance. All Sternberg task conditions were performed both alone and concurrently with a tracking task. Bipolar subjective workload assessments were collected. Dissociations between workload and performance were found related to automaticity, presentation rate, and motivation level. The results were interpreted as supporting the hypothesis that the specific cognitive processes responsible for subjective assessments can differ from those responsible for performance. The potential contamination these dissociations could inflict on operational workload assessments is discussed.
Effects of 99mTc-TRODAT-1 drug template on image quantitative analysis
Yang, Bang-Hung; Chou, Yuan-Hwa; Wang, Shyh-Jen; Chen, Jyh-Cheng
2018-01-01
99mTc-TRODAT-1 is a type of drug that can bind to dopamine transporters in living organisms and is often used in SPECT imaging to observe changes in dopamine activity uptake in the striatum. It is therefore currently widely used in studies on the clinical diagnosis of Parkinson's disease (PD) and movement-related disorders. In conventional 99mTc-TRODAT-1 SPECT image evaluation, visual inspection or manual selection of ROIs for semiquantitative analysis is mainly used to observe and evaluate the degree of striatal defects. However, these methods depend on the subjective opinions of observers, which introduces human error, and they have shortcomings such as long duration, increased effort, and low reproducibility. To solve this problem, this study aimed to establish an automatic semiquantitative analytical method for 99mTc-TRODAT-1. This method combines three drug templates (one built-in SPECT template in the SPM software and two self-generated MRI-based and HMPAO-based TRODAT-1 templates) for the semiquantitative analysis of striatal phantom and clinical images. At the same time, the results of automatic analysis with the three templates were compared with results from a conventional manual analysis to examine the feasibility of automatic analysis and the effects of drug templates on automatic semiquantitative analysis results. The comparison showed that the MRI-based TRODAT-1 template generated from MRI images is the most suitable template for 99mTc-TRODAT-1 automatic semiquantitative analysis. PMID:29543874
Automatic Evaluations and Exercising: Systematic Review and Implications for Future Research
Schinkoeth, Michaela; Antoniewicz, Franziska
2017-01-01
The general purpose of this systematic review was to summarize, structure and evaluate the findings on automatic evaluations of exercising. Studies were eligible for inclusion if they reported measuring automatic evaluations of exercising with an implicit measure and assessed some kind of exercise variable. Fourteen nonexperimental and six experimental studies (out of a total N = 1,928) were identified and rated by two independent reviewers. The main study characteristics were extracted and the grade of evidence for each study evaluated. First, results revealed a large heterogeneity in the applied measures to assess automatic evaluations of exercising and the exercise variables. Generally, small to large-sized significant relations between automatic evaluations of exercising and exercise variables were identified in the vast majority of studies. The review offers a systematization of the various examined exercise variables and prompts a more careful differentiation between actually observed exercise behavior (proximal exercise indicator) and associated physiological or psychological variables (distal exercise indicator). Second, a lack of transparent reported reflections on the differing theoretical basis leading to the use of specific implicit measures was observed. Implicit measures should be applied purposefully, taking into consideration the individual advantages or disadvantages of the measures. Third, 12 studies were rated as providing first-grade evidence (lowest grade of evidence), five represent second-grade and three were rated as third-grade evidence. There is a dramatic lack of experimental studies, which are essential for illustrating the cause-effect relation between automatic evaluations of exercising and exercise and investigating under which conditions automatic evaluations of exercising influence behavior. Conclusions about the necessity of exercise interventions targeted at the alteration of automatic evaluations of exercising should therefore not be drawn too hastily. PMID:29250022
Probst, Yasmine; Nguyen, Duc Thanh; Tran, Minh Khoi; Li, Wanqing
2015-07-27
Dietary assessment, while traditionally based on pen-and-paper, is rapidly moving towards automatic approaches. This study describes an Australian automatic food record method and its prototype for dietary assessment via the use of a mobile phone and techniques of image processing and pattern recognition. Common visual features including scale invariant feature transformation (SIFT), local binary patterns (LBP), and colour are used for describing food images. The popular bag-of-words (BoW) model is employed for recognizing the images taken by a mobile phone for dietary assessment. Technical details are provided together with discussions on the issues and future work.
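A minimal sketch of the BoW encoding described above: a k-means codebook is learned from pooled local descriptors, and each image is represented by its normalized visual-word histogram. Random vectors stand in for SIFT/LBP descriptors, and the codebook size is an illustrative choice.

```python
# Sketch: bag-of-words encoding of local image descriptors; random vectors
# stand in for SIFT/LBP features extracted from food photographs.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptors, k=64):
    """descriptors: (N, D) array pooled from many training images."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors)

def bow_histogram(image_descriptors, codebook):
    words = codebook.predict(image_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()                  # normalized visual-word histogram

train_desc = np.random.rand(5000, 128)        # stand-in local descriptors
codebook = build_codebook(train_desc)
hist = bow_histogram(np.random.rand(300, 128), codebook)
```

The histogram would then be fed to a classifier trained on labelled food categories; that step is omitted here.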
Feng, Jingwen; Lin, Jie; Zhang, Pengquan; Yang, Songnan; Sa, Yu; Feng, Yuanming
2017-08-29
High-content screening is commonly used in studies of the DNA damage response. The double-strand break (DSB) is one of the most harmful types of DNA damage lesion. The conventional method used to quantify DSBs is γH2AX foci counting, which requires manual adjustment and preset parameters and is usually regarded as imprecise, time-consuming, poorly reproducible, and inaccurate. Therefore, a robust automatic alternative method is highly desirable. In this manuscript, we present a new method for quantifying DSBs which involves automatic image cropping, automatic foci segmentation, and fluorescence intensity measurement. Furthermore, an additional function was added for standardizing the measurement of DSB response inhibition based on co-localization analysis. We tested the method with a well-known inhibitor of the DSB response. The new method requires only one preset parameter, which effectively minimizes operator-dependent variations. Compared with conventional methods, the new method detected a higher percentage difference in foci formation between different cells, which can improve measurement accuracy. The effects of the inhibitor on the DSB response were successfully quantified with the new method (p < 0.001). The advantages of this method in terms of reliability, automation, and simplicity show its potential in quantitative fluorescence imaging studies and high-content screening for compounds and factors involved in the DSB response.
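A minimal sketch of the foci segmentation and fluorescence-intensity measurement step, assuming a per-nucleus crop of the focus channel; the Otsu threshold choice and synthetic image are illustrative, not the authors' single-parameter pipeline.

```python
# Sketch: segment foci in a nucleus crop and measure their fluorescence.
# Threshold choice and data are illustrative assumptions.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def quantify_foci(focus_channel):
    """Return focus count and total focus fluorescence for one nucleus crop."""
    mask = focus_channel > threshold_otsu(focus_channel)
    labeled = label(mask)
    props = regionprops(labeled, intensity_image=focus_channel)
    total_intensity = sum(r.intensity_mean * r.area for r in props)
    return labeled.max(), total_intensity

crop = np.random.rand(64, 64)      # stand-in for a gammaH2AX image crop
n_foci, intensity = quantify_foci(crop)
print(n_foci, "foci, total intensity %.1f" % intensity)
```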
NASA Astrophysics Data System (ADS)
Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien
2007-03-01
The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variability. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach was experimentally validated using an MR database of fat-suppressed spoiled gradient-recalled images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis.
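The Dice similarity coefficient used in the validation above is straightforward to compute; a short sketch with stand-in binary volumes:

```python
# Sketch: Dice similarity coefficient between two binary segmentations.
# The masks are illustrative stand-ins.
import numpy as np

def dice(a, b):
    """a, b: boolean segmentation masks of equal shape."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

auto = np.zeros((32, 32, 32), bool); auto[8:24, 8:24, 8:24] = True
manual = np.zeros((32, 32, 32), bool); manual[9:25, 8:24, 8:24] = True
print("Dice = %.3f" % dice(auto, manual))
```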
An algorithm for automatic parameter adjustment for brain extraction in BrainSuite
NASA Astrophysics Data System (ADS)
Rajagopal, Gautham; Joshi, Anand A.; Leahy, Richard M.
2017-02-01
Brain Extraction (classification of brain and non-brain tissue) of MRI brain images is a crucial pre-processing step necessary for imaging-based anatomical studies of the human brain. Several automated methods and software tools are available for performing this task, but differences in MR image parameters (pulse sequence, resolution) and instrument- and subject-dependent noise and artefacts affect the performance of these automated methods. We describe and evaluate a method that automatically adapts the default parameters of the Brain Surface Extraction (BSE) algorithm to optimize a cost function chosen to reflect accurate brain extraction. BSE uses a combination of anisotropic filtering, Marr-Hildreth edge detection, and binary morphology for brain extraction. Our algorithm automatically adapts four parameters associated with these steps to maximize the brain surface area to volume ratio. We evaluate the method on a total of 109 brain volumes with ground truth brain masks generated by an expert user. A quantitative evaluation of the performance of the proposed algorithm showed an improvement in the mean (s.d.) Dice coefficient from 0.8969 (0.0376) for default parameters to 0.9509 (0.0504) for the optimized case. These results indicate that automatic parameter optimization can result in significant improvements in definition of the brain mask.
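The parameter-adaptation loop can be pictured as a search over a small parameter grid that maximizes an extraction-quality score; in the sketch below, bse_run and the surface-to-volume proxy score are hypothetical stand-ins, not the BSE interface.

```python
# Sketch: grid search over extractor parameters maximizing a quality score.
# bse_run and the score function are hypothetical, not the BSE API.
import itertools
import numpy as np

def score(mask):
    """Hypothetical proxy for a surface-area-to-volume criterion."""
    surface = np.logical_xor(mask, np.roll(mask, 1, axis=0)).sum()
    return surface / max(mask.sum(), 1)

def tune(bse_run, grids):
    """bse_run: callable(params dict) -> binary mask; grids: candidate values."""
    best_params, best_score = None, -np.inf
    for values in itertools.product(*grids.values()):
        params = dict(zip(grids.keys(), values))
        s = score(bse_run(params))
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

dummy = lambda p: np.random.rand(32, 32, 32) > p["edge"]   # stand-in extractor
best, s = tune(dummy, {"edge": [0.4, 0.5, 0.6]})
```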
Automatic lumbar spine measurement in CT images
NASA Astrophysics Data System (ADS)
Mao, Yunxiang; Zheng, Dong; Liao, Shu; Peng, Zhigang; Yan, Ruyi; Liu, Junhua; Dong, Zhongxing; Gong, Liyan; Zhou, Xiang Sean; Zhan, Yiqiang; Fei, Jun
2017-03-01
Accurate lumbar spine measurement in CT images provides an essential basis for quantitative analysis of spinal diseases such as spondylolisthesis and scoliosis. In today's clinical workflow, the measurements are manually performed by radiologists and surgeons, which is time consuming and irreproducible. Therefore, an automatic and accurate lumbar spine measurement algorithm is highly desirable. In this study, we propose a method to automatically calculate five different lumbar spine measurements in CT images. There are three main stages in the proposed method: First, a learning-based spine labeling method, which integrates both image appearance and spine geometry information, is used to detect lumbar and sacrum vertebrae in CT images. Then, a multi-atlas-based image segmentation method is used to segment each lumbar vertebra and the sacrum based on the detection result. Finally, measurements are derived from the segmentation result of each vertebra. Our method has been evaluated on 138 spinal CT scans to automatically calculate five widely used clinical spine measurements. Experimental results show that our method can achieve more than 90% success rates across all the measurements. Our method also significantly improves measurement efficiency compared to manual measurements. Besides benefiting the routine clinical diagnosis of spinal diseases, our method also enables large-scale data analytics for scientific and clinical research.
Image analysis for skeletal evaluation of carpal bones
NASA Astrophysics Data System (ADS)
Ko, Chien-Chuan; Mao, Chi-Wu; Lin, Chi-Jen; Sun, Yung-Nien
1995-04-01
Bone age assessment is an important topic in pediatric radiology. It provides very important information for the treatment and prediction of skeletal growth in a developing child. So far, various computerized algorithms for automatically assessing skeletal growth have been reported, most of which attempt to analyze phalangeal growth. The most fundamental step in these automatic measurement methods is the image segmentation that extracts bones from soft tissue and background. Automatic segmentation methods for hand radiographs can be roughly categorized into two main approaches: edge-based and region-based methods. This paper presents a region-based carpal-bone segmentation approach. It is organized into four stages: contrast enhancement, moment-preserving thresholding, morphological processing, and region-growing labeling.
Multani, Jasjit Singh; Oermann, Eric Karl; Titano, Joseph; Mascitelli, Justin; Nicol, Kelly; Feng, Rui; Skovrlj, Branko; Pain, Margaret; Mocco, J D; Bederson, Joshua B; Costa, Anthony; Shrivastava, Raj
2017-08-01
There is no facile quantitative method for monitoring hydrocephalus (HCP). We propose quantitative computed tomography (CT) ventriculography (qCTV) as a novel computer vision tool for empirically assessing HCP in patients with subarachnoid hemorrhage (SAH). Twenty patients with SAH who were evaluated for ventriculoperitoneal shunt (VPS) placement were selected for inclusion. Ten patients with normal head computed tomography (CTH) findings were analyzed as negative controls. CTH scans were segmented both manually and automatically (by qCTV) to generate measures of ventricular volume. The median manually calculated ventricular volume was 36.1 cm³ (interquartile range [IQR], 30-115 cm³), which was similar to the median qCTV measured volume of 37.5 cm³ (IQR, 32-118 cm³) (P = 0.796). Patients undergoing VPS placement demonstrated an increase in median ventricular volume on qCTV from 21 cm³ to 40 cm³ on day T-2 and to 51 cm³ by day 0, a change of 144%. This is in contrast to patients who did not require shunting, in whom median ventricular volume decreased from 16 cm³ to 14 cm³ on day T-2 and to 13 cm³ by day 0, an average overall volume decrease of 19% (P = 0.001). The average change in ventricular volume predicted which patients would require VPS placement, successfully identifying 7 of 10 patients (P = 0.004). Using an optimized cutoff of a change in ventricular volume of 2.5 cm³ identified all patients who went on to require VPS placement (10 of 10; P = 0.011). qCTV is a reliable means of quantifying ventricular volume and hydrocephalus. This technique offers a new tool for monitoring neurosurgical patients for hydrocephalus, and may be beneficial for use in future research studies, as well as in the routine care of patients with hydrocephalus. Copyright © 2017 Elsevier Inc. All rights reserved.
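A minimal sketch of the volumetry arithmetic behind qCTV: ventricular volume from a binary segmentation and voxel spacing, plus the 2.5 cm³ change cutoff reported above. The voxel spacing and masks are illustrative assumptions.

```python
# Sketch: ventricular volume from a labeled CT segmentation and the 2.5 cm^3
# change cutoff reported above; spacing and masks are illustrative.
import numpy as np

def ventricular_volume_cm3(mask, spacing_mm=(0.5, 0.5, 1.0)):
    """mask: boolean ventricle segmentation; spacing_mm: voxel size in mm."""
    voxel_mm3 = np.prod(spacing_mm)
    return mask.sum() * voxel_mm3 / 1000.0     # mm^3 -> cm^3

def needs_shunt(vol_baseline, vol_followup, cutoff_cm3=2.5):
    return (vol_followup - vol_baseline) > cutoff_cm3

baseline = np.random.rand(64, 64, 64) > 0.99   # stand-in segmentations
follow = np.random.rand(64, 64, 64) > 0.985
print(needs_shunt(ventricular_volume_cm3(baseline), ventricular_volume_cm3(follow)))
```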
Johansson, Jarkko; Alakurtti, Kati; Joutsa, Juho; Tohka, Jussi; Ruotsalainen, Ulla; Rinne, Juha O
2016-10-01
The striatum is the primary target in regional 11C-raclopride PET studies, and despite its small volume, it contains several functional and anatomical subregions. The outcome of a quantitative dopamine receptor study using 11C-raclopride PET depends heavily on the quality of the region-of-interest (ROI) definition of these subregions. The aim of this study was to evaluate subregional analysis techniques, because new approaches have emerged but have not yet been compared directly. In this paper, we compared manual ROI delineation with several automatic methods. The automatic methods used either direct clustering of the PET image or individualization of chosen brain atlases on the basis of MRI or PET image normalization. State-of-the-art normalization methods and atlases were applied, including those provided in the FreeSurfer, Statistical Parametric Mapping 8, and FSL software packages. Evaluation of the automatic methods was based on voxel-wise congruity with the manual delineations and on the test-retest variability and reliability of the outcome measures, using data from seven healthy male participants who were scanned twice with 11C-raclopride PET on the same day. The results show that both manual and automatic methods can be used to define striatal subregions. Although most of the methods performed well with respect to the test-retest variability and reliability of binding potential, the smallest average test-retest variability and SEM were obtained using a connectivity-based atlas and PET normalization (test-retest variability=4.5%, SEM=0.17). The current state-of-the-art automatic ROI methods can be considered good alternatives to subjective and laborious manual segmentation in 11C-raclopride PET studies.
NASA Astrophysics Data System (ADS)
Qin, Xulei; Cong, Zhibin; Fei, Baowei
2013-11-01
An automatic segmentation framework is proposed to segment the right ventricle (RV) in echocardiographic images. The method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining sparse matrix transform, a training model, and a localized region-based level set. First, the sparse matrix transform extracts main motion regions of the myocardium as eigen-images by analyzing the statistical information of the images. Second, an RV training model is registered to the eigen-images in order to locate the position of the RV. Third, the training model is adjusted and then serves as an optimized initialization for the segmentation of each image. Finally, based on the initializations, a localized, region-based level set algorithm is applied to segment both epicardial and endocardial boundaries in each echocardiograph. Three evaluation methods were used to validate the performance of the segmentation framework. The Dice coefficient measures the overall agreement between the manual and automatic segmentation. The absolute distance and the Hausdorff distance between the boundaries from manual and automatic segmentation were used to measure the accuracy of the segmentation. Ultrasound images of human subjects were used for validation. For the epicardial and endocardial boundaries, the Dice coefficients were 90.8 ± 1.7% and 87.3 ± 1.9%, the absolute distances were 2.0 ± 0.42 mm and 1.79 ± 0.45 mm, and the Hausdorff distances were 6.86 ± 1.71 mm and 7.02 ± 1.17 mm, respectively. The automatic segmentation method based on a sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.
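The Hausdorff distance reported above can be computed from boundary point sets with SciPy; a short sketch with stand-in contours:

```python
# Sketch: symmetric Hausdorff distance between two boundary point sets.
# The contour coordinates are illustrative stand-ins.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """a, b: (N, 2) arrays of boundary points in mm; symmetric Hausdorff."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

manual = np.random.rand(100, 2) * 50           # stand-in manual contour (mm)
auto = manual + np.random.normal(0, 1.0, manual.shape)
print("Hausdorff distance = %.2f mm" % hausdorff(manual, auto))
```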
NASA Astrophysics Data System (ADS)
Fredouille, Corinne; Pouchoulin, Gilles; Ghio, Alain; Revis, Joana; Bonastre, Jean-François; Giovanni, Antoine
2009-12-01
This paper addresses voice disorder assessment. It proposes an original back-and-forth methodology involving an automatic classification system as well as the knowledge of human experts (machine learning experts, phoneticians, and pathologists). The goal of this methodology is to reach a better understanding of the acoustic phenomena related to dysphonia. The automatic system was validated on a dysphonic corpus (80 female voices) rated according to the GRBAS perceptual scale by an expert jury. Firstly, focusing on the frequency domain, the classification system showed the relevance of the 0-3000 Hz frequency band for the classification task based on the GRBAS scale. Subsequently, an automatic phonemic analysis underlined the significance of consonants, and more surprisingly of unvoiced consonants, for the same classification task. Submitted to the human experts, these observations led to a manual analysis of unvoiced plosives, which highlighted a lengthening of voice onset time (VOT) with dysphonia severity, confirmed by a preliminary statistical analysis.
Ueda, Jun; Yoshimura, Hajime; Shimizu, Keiji; Hino, Megumu; Kohara, Nobuo
2017-07-01
Visual and semi-quantitative assessments of 123I-FP-CIT single-photon emission computed tomography (SPECT) are useful for the diagnosis of dopaminergic neurodegenerative diseases (dNDD), including Parkinson's disease, dementia with Lewy bodies, progressive supranuclear palsy, multiple system atrophy, and corticobasal degeneration. However, the diagnostic value of combined visual and semi-quantitative assessment in dNDD remains unclear. Among 239 consecutive patients with a newly diagnosed possible parkinsonian syndrome who underwent 123I-FP-CIT SPECT in our medical center, 114 patients with a disease duration of less than 7 years were diagnosed as dNDD according to the established criteria or as non-dNDD according to clinical judgment. We retrospectively examined their clinical characteristics and the visual and semi-quantitative assessments of 123I-FP-CIT SPECT. The striatal binding ratio (SBR) was used as a semi-quantitative measure of 123I-FP-CIT SPECT. We calculated the sensitivity and specificity of visual assessment alone, semi-quantitative assessment alone, and combined visual and semi-quantitative assessment for the diagnosis of dNDD. SBR was correlated with visual assessment. Some dNDD patients with a normal visual assessment had an abnormal SBR, and vice versa. There was no statistically significant difference between the sensitivity of diagnosis with visual assessment alone and with semi-quantitative assessment alone (91.2 vs. 86.8%, respectively, p = 0.29). Combined visual and semi-quantitative assessment demonstrated superior sensitivity (96.7%) to visual assessment (p = 0.03) or semi-quantitative assessment (p = 0.003) alone, with equal specificity. Visual and semi-quantitative assessments of 123I-FP-CIT SPECT are helpful for the diagnosis of dNDD, and combined visual and semi-quantitative assessment shows superior sensitivity with equal specificity.
Wang, Kang; Ding, Changhai; Hannon, Michael J; Chen, Zhongshan; Kwoh, C Kent; Hunter, David J
2018-04-12
To determine if infrapatellar fat pad (IPFP) signal intensity (SI) measures are predictive of incident radiographic osteoarthritis (iROA) over 4 years in the Osteoarthritis Initiative (OAI) study. Case knees (n=355) defined by iROA were matched one-to-one by gender, age and radiographic status with control knees. T2-weighted MR images were assessed at P0 (the visit when iROA was found on radiographs), P-1 (1 year prior to P0) and baseline, and utilized to assess IPFP SI semi-automatically using MATLAB. Conditional logistic regression analyses were used to assess the risk of iROA associated with IPFP SI alteration after adjustment for covariates. Participants were on average 60.2 years old, predominantly female (66.7%) and overweight (mean BMI: 28.3). Baseline IPFP measures, including the mean value and standard deviation of IPFP SI [Mean(IPFP), sDev(IPFP)] (HR, 95%CI: 5.2, 1.1 to 23.6 and 5.7, 2.2 to 14.5, respectively), the mean value and standard deviation of IPFP high SI [Mean(H), sDev(H)] (HR, 95%CI: 3.3, 1.7 to 6.4 and 3.1, 1.3 to 7.7, respectively), the median and upper quartile values of IPFP high SI [Median(H), UQ(H)], and the clustering effect of high SI [Clustering factor(H)], were associated with iROA during 4 years. All P-1 IPFP measures were associated with iROA after 12 months. P0 IPFP SI measures were all associated with iROA. The quantitative segmentation of high signal intensity in the IPFP confirms previous work based on semiquantitative assessment, supporting its predictive validity. IPFP high SI alteration could be an important imaging biomarker to predict the occurrence of radiographic OA. This article is protected by copyright. All rights reserved.
Hides, Julie A; Endicott, Timothy; Mendis, M Dilani; Stanton, Warren R
2016-07-01
To investigate whether motor control training alters the automatic contraction of abdominal muscles in elite cricketers with low back pain (LBP) during performance of a simulated unilateral weight-bearing task. Clinical trial. 26 male elite cricketers attended a 13-week cricket training camp. Prior to the camp, participants were allocated to a LBP or an asymptomatic group. Real-time ultrasound imaging was used to assess the automatic abdominal muscle response to axial loading. During the camp, the LBP group performed a staged motor control training program. Following the camp, the automatic response of the abdominal muscles was re-assessed. At the pre-camp assessment, when participants were axially loaded with 25% of their own body weight, the LBP group showed a 15.5% thicker internal oblique (IO) muscle compared with the asymptomatic group (p = 0.009). The post-camp assessment showed that participants in the LBP group demonstrated less contraction of the IO muscle in response to axial loading compared with the asymptomatic group. A trend was found in the automatic recruitment pattern of the transversus abdominis (p = 0.08). Motor control training normalized excessive contraction of the abdominal muscles in response to a low-load task. This may be a useful strategy for the rehabilitation of cricketers with LBP. Copyright © 2016 Elsevier Ltd. All rights reserved.
Automatic detection of spiculation of pulmonary nodules in computed tomography images
NASA Astrophysics Data System (ADS)
Ciompi, F.; Jacobs, C.; Scholten, E. T.; van Riel, S. J.; W. Wille, M. M.; Prokop, M.; van Ginneken, B.
2015-03-01
We present a fully automatic method for the assessment of spiculation of pulmonary nodules in low-dose Computed Tomography (CT) images. Spiculation is considered one of the indicators of nodule malignancy and an important feature to assess in order to decide on a patient-tailored follow-up procedure. For this reason, a lung cancer screening scenario would benefit from a fully automatic system for the assessment of spiculation. The presented framework relies on the fact that spiculated nodules mainly differ from non-spiculated ones in their morphology. In order to discriminate the two categories, information on morphology is captured by sampling intensity profiles along circular patterns on spherical surfaces centered on the nodule, in a multi-scale fashion. Each intensity profile is interpreted as a periodic signal, to which the Fourier transform is applied, yielding a spectrum. A library of spectra is created by clustering data via unsupervised learning. The centroids of the clusters are used to label each spectrum in the sampling pattern. A compact descriptor encoding the nodule morphology is obtained as the histogram of labels over all the spherical surfaces and used to classify spiculated nodules via supervised learning. We tested our approach on a set of nodules from the Danish Lung Cancer Screening Trial (DLCST) dataset. Our results show that the proposed method outperforms other 3-D descriptors of morphology in the automatic assessment of spiculation.
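A sketch of the descriptor pipeline described above under stated assumptions: circular intensity profiles (their extraction from the CT volume is assumed done elsewhere) are treated as periodic signals, converted to Fourier amplitude spectra, quantized against a learned codebook, and pooled into a label histogram. Function names and the codebook size are illustrative, not the paper's.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectra(profiles):
    """profiles: (n, length) array of intensity samples along circles."""
    amp = np.abs(np.fft.rfft(profiles, axis=1))          # periodic signal -> spectrum
    return amp / (np.linalg.norm(amp, axis=1, keepdims=True) + 1e-12)

def build_codebook(training_profiles, n_words=32):
    """Unsupervised 'library of spectra' via k-means clustering."""
    return KMeans(n_clusters=n_words, n_init=10).fit(spectra(training_profiles))

def nodule_descriptor(profiles, codebook):
    """Histogram of spectrum labels = compact morphology descriptor."""
    labels = codebook.predict(spectra(profiles))
    hist = np.bincount(labels, minlength=codebook.n_clusters)
    return hist / hist.sum()
```

The resulting fixed-length histogram can then be fed to any supervised classifier to separate spiculated from non-spiculated nodules.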
Freiman, Moti; Nickisch, Hannes; Prevrhal, Sven; Schmitt, Holger; Vembar, Mani; Maurovich-Horvat, Pál; Donnelly, Patrick; Goshen, Liran
2017-03-01
The goal of this study was to assess the potential added benefit of accounting for partial volume effects (PVE) in an automatic coronary lumen segmentation algorithm that is used to determine the hemodynamic significance of a coronary artery stenosis from coronary computed tomography angiography (CCTA). Two sets of data were used in our work: (a) multivendor CCTA datasets of 18 subjects from the MICCAI 2012 challenge with automatically generated centerlines and 3 reference segmentations of 78 coronary segments and (b) additional CCTA datasets of 97 subjects with 132 coronary lesions that had invasive reference standard FFR measurements. We extracted the coronary artery centerlines for the 97 datasets with an automated software program, followed by manual correction where required. An automatic machine-learning-based algorithm segmented the coronary tree with and without accounting for the PVE. We obtained CCTA-based FFR measurements using a flow simulation in the coronary trees that were generated by the automatic algorithm with and without accounting for PVE. We assessed the potential added value of PVE integration as a part of the automatic coronary lumen segmentation algorithm by means of segmentation accuracy, using the MICCAI 2012 challenge framework, and by means of flow simulation overall accuracy, sensitivity, specificity, negative and positive predictive values, and the receiver operating characteristic (ROC) area under the curve. We also evaluated the potential benefit of accounting for PVE in automatic segmentation for flow simulation for lesions that were diagnosed as obstructive based on CCTA, which could have indicated a need for an invasive exam and revascularization. Our segmentation algorithm improves the maximal surface distance error by ~39% compared to a previously published method on the 18 datasets from the MICCAI 2012 challenge, with comparable Dice and mean surface distance. Results with and without accounting for PVE were comparable. In contrast, integrating PVE analysis into an automatic coronary lumen segmentation algorithm improved the flow simulation specificity from 0.6 to 0.68 with the same sensitivity of 0.83. Also, accounting for PVE improved the area under the ROC curve for detecting hemodynamically significant CAD from 0.76 to 0.8 compared to automatic segmentation without PVE analysis, with an invasive FFR threshold of 0.8 as the reference standard. Accounting for PVE in flow simulation to support the detection of hemodynamically significant disease in CCTA-based obstructive lesions improved specificity from 0.51 to 0.73 with the same sensitivity of 0.83, and the area under the curve from 0.69 to 0.79. The improvement in the AUC was statistically significant (N = 76, DeLong's test, P = 0.012). Accounting for the partial volume effects in automatic coronary lumen segmentation algorithms has the potential to improve the accuracy of CCTA-based hemodynamic assessment of coronary artery lesions. © 2017 American Association of Physicists in Medicine.
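A sketch of the evaluation scheme described above: simulated FFR values are compared against the invasive reference with the 0.8 significance threshold, yielding sensitivity, specificity, and ROC AUC. The data here are synthetic placeholders; the study's statistics (including the DeLong comparison of two AUCs) are not reproduced.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
invasive = rng.uniform(0.5, 1.0, 100)             # invasive FFR (reference standard)
simulated = invasive + rng.normal(0, 0.05, 100)   # CCTA-based FFR estimate

y_true = (invasive <= 0.8).astype(int)            # hemodynamically significant lesion
# Lower FFR means more significant, hence the sign flip for the AUC score
auc = roc_auc_score(y_true, -simulated)

y_pred = simulated <= 0.8
sens = (y_pred & (y_true == 1)).sum() / (y_true == 1).sum()
spec = (~y_pred & (y_true == 0)).sum() / (y_true == 0).sum()
print(f"AUC={auc:.2f}, sensitivity={sens:.2f}, specificity={spec:.2f}")
```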
Self-assessing target with automatic feedback
Larkin, Stephen W.; Kramer, Robert L.
2004-03-02
A self-assessing target with four quadrants, and a method of use thereof. Each quadrant contains possible causes for why shots are landing in that particular quadrant rather than the center mass of the target. Each possible cause is followed by a solution intended to help the marksman correct the problem causing shots to land in that area. In addition, the self-assessing target lists possible causes of general shooting errors together with solutions to those causes. The automatic feedback, with instant suggestions and corrections, enables shooters to improve their marksmanship.
Three-Dimensional Computer Graphics Brain-Mapping Project.
1987-03-15
NEUROQUANT. This package was directed towards quantitative microneuroanatomic data acquisition and analysis. Using this interface, image frames captured ... populations of brains. This would have been a prohibitive task if done manually with a densitometer and film, due to user error and bias. NEUROQUANT functioned ... of cells were of interest. NEUROQUANT is presently being implemented with a more fully automatic method of localizing the cell bodies directly
Norman, Berk; Pedoia, Valentina; Majumdar, Sharmila
2018-03-27
Purpose To analyze how automatic segmentation compares with manual segmentation in the accuracy and precision of the derived morphology and relaxometry, and how it increases the speed and accuracy of the workflow that uses quantitative magnetic resonance (MR) imaging to study knee degenerative diseases such as osteoarthritis (OA). Materials and Methods This retrospective study involved the analysis of 638 MR imaging volumes from two data cohorts acquired at 3.0 T: (a) spoiled gradient-recalled acquisition in the steady state T1ρ-weighted images and (b) three-dimensional (3D) double-echo steady-state (DESS) images. A deep learning model based on the U-Net convolutional network architecture was developed to perform automatic segmentation. Cartilage and meniscus compartments were manually segmented by skilled technicians and radiologists for comparison. Performance of the automatic segmentation was evaluated on Dice coefficient overlap with the manual segmentation, as well as on the automatic segmentations' ability to quantify relaxometry and morphology in a longitudinally repeatable way. Results The models produced strong Dice coefficients, particularly for 3D-DESS images, ranging from 0.770 to 0.878 in the cartilage compartments, and reaching 0.809 and 0.753 for the lateral and medial meniscus, respectively. The models averaged 5 seconds to generate the automatic segmentations. Average correlations between manual and automatic quantification of T1ρ and T2 values were 0.8233 and 0.8603, respectively, and 0.9349 and 0.9384 for volume and thickness, respectively. Longitudinal precision of the automatic method was comparable with that of the manual one. Conclusion U-Net demonstrates efficacy and precision in quickly generating accurate segmentations that can be used to extract relaxation times and morphologic characterizations for use in the monitoring and diagnosis of OA. © RSNA, 2018 Online supplemental material is available for this article.
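A minimal U-Net-style encoder-decoder sketch in PyTorch showing the skip-connection idea the abstract's model builds on; the published network's depth, channel counts, input channels, and training procedure differ, and all names here are illustrative.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    """Two 3x3 conv + ReLU layers, the basic U-Net building block."""
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)            # 16 skip channels + 16 upsampled
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d)                 # per-pixel class logits

# Example: 5 tissue classes, one-channel MR slice with even height/width
logits = TinyUNet(5)(torch.randn(1, 1, 64, 64))
```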
Automatic food detection in egocentric images using artificial intelligence technology
USDA-ARS?s Scientific Manuscript database
Our objective was to develop an artificial intelligence (AI)-based algorithm which can automatically detect food items from images acquired by an egocentric wearable camera for dietary assessment. To study human diet and lifestyle, large sets of egocentric images were acquired using a wearable devic...
NASA Astrophysics Data System (ADS)
Chaisaowong, Kraisorn; Jiang, Mingze; Faltin, Peter; Merhof, Dorit; Eisenhawer, Christian; Gube, Monika; Kraus, Thomas
2016-03-01
Pleural thickenings are caused by asbestos exposure and may evolve into malignant pleural mesothelioma. An early diagnosis plays a key role towards an early treatment and an increased survival rate. Today, pleural thickenings are detected by visual inspection of CT data, which is time-consuming and subject to the physician's judgment. A computer-assisted diagnosis system to automatically assess pleural thickenings has been developed, which provides not only a quantitative assessment with respect to size and location, but also enhances this information with an anatomical description, i.e. lung side (left, right), part of the pleura (pars costalis, mediastinalis, diaphragmatica, spinalis), as well as vertical (upper, middle, lower) and horizontal (ventral, dorsal) position. For this purpose, a 3D anatomical model of the lung surface was manually constructed as a 3D atlas. Three registration sub-steps, comprising rigid, affine, and nonrigid registration, align the input patient lung to the 3D anatomical atlas model of the lung surface. Finally, each detected pleural thickening is assigned a set of labels describing its anatomical properties. Through this added information, an enhancement to the existing computer-assisted diagnosis system is presented in order to ensure a more precise and reproducible assessment of pleural thickenings, aiming at the diagnosis of pleural mesothelioma in its early stage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peña, Jaime A.; Damm, Timo; Bastgen, Jan
Purpose: Accurate noninvasive assessment of the vertebral bone marrow fat fraction is important for the diagnostic assessment of a variety of disorders and therapies known to affect marrow composition. Moreover, it provides a means to correct the fat-induced bias of single-energy quantitative computed tomography (QCT) based bone mineral density (BMD) measurements. The authors developed new segmentation and calibration methods to obtain quantitative surrogate measures of marrow-fat density in the axial skeleton. Methods: The authors developed and tested two high-resolution QCT (HR-QCT) based methods which permit segmentation of bone voids in between trabeculae, hypothesizing that they are representative of the bone marrow space. The methods permit calculation of marrow content in units of mineral-equivalent marrow density (MeMD). The first method is based on global thresholding and peeling (GTP) to define a volume of interest away from the transition between trabecular bone and marrow. The second method, morphological filtering (MF), uses spherical elements of different radii (0.1-1.2 mm) and automatically places them in between trabeculae to identify regions with large trabecular interspace, the bone-void space. To determine their performance, data were compared ex vivo to high-resolution peripheral QCT (HR-pQCT) images as the gold standard. The performance of the methods was tested on a set of excised human vertebrae with intact bone marrow tissue, representative of an elderly population with low BMD. Results: 86% (GTP) and 87% (MF) of the voxels identified as true marrow space on HR-pQCT images were correctly identified on HR-QCT images, and thus these volumes of interest can be considered representative of true marrow space. Within this volume, MeMD was estimated with residual errors of 4.8 mg/cm^3, corresponding to accuracy errors in fat fraction on the order of 5% for both the GTP and MF methods. Conclusions: The GTP and MF methods on HR-QCT images permit noninvasive localization and densitometric assessment of marrow fat with residual accuracy errors sufficient to study disorders and therapies known to affect bone marrow composition. Additionally, the methods can be used to correct BMD for fat-induced bias. Application and testing in vivo and in longitudinal studies are warranted to determine the clinical performance and value of these methods.
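One plausible reading of the morphological-filtering idea, sketched under stated assumptions: opening the non-bone (marrow) mask with a spherical structuring element keeps only regions wide enough to contain a sphere of the given radius, i.e. large inter-trabecular space. The HU threshold and radius are invented for illustration, not the paper's values.

```python
import numpy as np
from scipy import ndimage

def ball(r):
    """Boolean spherical structuring element of radius r voxels."""
    z, y, x = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    return x**2 + y**2 + z**2 <= r**2

def bone_void_space(hu_volume, bone_threshold=250, radius_vox=3):
    """Approximate bone-void (marrow) space inside a trabecular HU volume."""
    marrow = hu_volume < bone_threshold                  # non-bone voxels
    # Opening removes thin gaps near trabeculae, keeping wide marrow regions
    return ndimage.binary_opening(marrow, structure=ball(radius_vox))
```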
NASA Astrophysics Data System (ADS)
Zhenying, Xu; Jiandong, Zhu; Qi, Zhang; Yamba, Philip
2018-06-01
Metallographic microscopy shows that the vast majority of metal materials are composed of many small grains; the grain size of a metal is important for determining its tensile strength, toughness, plasticity, and other mechanical properties. In order to quantitatively evaluate grain size in metals, grain boundaries must be identified in metallographic images. To address the blurring or disconnection of grain boundaries in metallographic images, this study develops a region-separation algorithm that automatically extracts grain boundaries using an improved mean shift method. Experimental observation shows that the grain boundaries obtained by the proposed algorithm are highly complete and accurate. This research has practical value because the proposed algorithm is suitable for grain boundary extraction from most metallographic images.
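A baseline sketch of the general idea, assuming OpenCV: mean-shift filtering flattens grain interiors so that edge detection isolates candidate boundaries. The paper's improved mean shift and region separation differ in detail; the file name and parameter values here are illustrative.

```python
import cv2

img = cv2.imread("metallograph.png")                    # BGR micrograph (assumed file)
# Mean-shift filtering: spatial radius sp, colour radius sr flatten grain interiors
smooth = cv2.pyrMeanShiftFiltering(img, sp=15, sr=20)
gray = cv2.cvtColor(smooth, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 40, 120)                        # candidate grain boundaries
cv2.imwrite("boundaries.png", edges)
```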
An accurate method of extracting fat droplets in liver images for quantitative evaluation
NASA Astrophysics Data System (ADS)
Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie
2015-03-01
The steatosis in liver pathological tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and of the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the existence of many fat droplets is likely to create errors in quantifying the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using feature values of colors, shapes and the arrangement of cell nuclei. We implemented the method and confirmed that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.
Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak
2013-01-01
The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumors in MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images were calculated and used to identify the tumor among the different objects. In level set methods, the calculation of the parameters is a challenging task; here, the different parameters were calculated automatically for different types of images. The basic thresholding value was updated and adjusted automatically for each MR image and used to calculate the different parameters of the proposed algorithm. The proposed algorithm was tested on magnetic resonance images of the brain for tumor segmentation, and its performance was evaluated visually and quantitatively. Numerical experiments on brain tumor images highlighted the efficiency and robustness of the method. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
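A sketch of a signed pressure function in the spirit described above, following the common formulation built from the mean intensities inside and outside the current contour; the paper's SPF, automatic parameter selection, and stopping behaviour differ, so treat this as a generic baseline. All names and constants are illustrative.

```python
import numpy as np

def spf(image, phi):
    """Signed pressure: positive where intensity is above the inside/outside
    mean midpoint, negative below, normalized to [-1, 1]."""
    inside, outside = phi > 0, phi <= 0
    c1, c2 = image[inside].mean(), image[outside].mean()
    s = image - (c1 + c2) / 2.0
    return s / (np.abs(s).max() + 1e-12)

def evolve(image, phi, steps=200, dt=1.0, alpha=20.0):
    """Simple SPF-driven level set evolution (phi: float array, signed)."""
    for _ in range(steps):
        f = spf(image, phi)
        gy, gx = np.gradient(phi)
        phi = phi + dt * alpha * f * np.sqrt(gx**2 + gy**2)
    return phi
```

In practice the level set function is usually re-regularized (e.g. Gaussian-smoothed) between iterations to keep the contour stable.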
Brain tumor segmentation in MR slices using improved GrowCut algorithm
NASA Astrophysics Data System (ADS)
Ji, Chunhong; Yu, Jinhua; Wang, Yuanyuan; Chen, Liang; Shi, Zhifeng; Mao, Ying
2015-12-01
The detection of brain tumors from MR images is very significant for medical diagnosis and treatment. However, existing methods are mostly based on manual or semiautomatic segmentation, which is impractical when dealing with a large number of MR slices. In this paper, a new fully automatic method for the segmentation of brain tumors in MR slices is presented. Based on the hypothesis of a symmetric brain structure, the method improves the interactive GrowCut algorithm by further using a bounding box algorithm in the pre-processing step. More importantly, local reflectional symmetry is used to make up for the deficiency of the bounding box method. After segmentation, a 3D tumor image is reconstructed. We evaluate the accuracy of the proposed method on MR slices with synthetic tumors and on actual clinical MR images. Results of the proposed method are compared qualitatively and quantitatively with the actual positions of simulated 3D tumors. In addition, our automatic method produces performance equivalent to manual segmentation and to interactive GrowCut with manual interference, while providing fully automatic segmentation.
Automatic detection and measurement of viral replication compartments by ellipse adjustment
Garcés, Yasel; Guerrero, Adán; Hidalgo, Paloma; López, Raul Eduardo; Wood, Christopher D.; Gonzalez, Ramón A.; Rendón-Mancha, Juan Manuel
2016-01-01
Viruses employ a variety of strategies to hijack cellular activities through the orchestrated recruitment of macromolecules to specific virus-induced cellular micro-environments. Adenoviruses (Ad) and other DNA viruses induce extensive reorganization of the cell nucleus and formation of nuclear Replication Compartments (RCs), where the viral genome is replicated and expressed. In this work, an automatic algorithm designed for detection and segmentation of RCs using ellipses is presented. Unlike algorithms available in the literature, this approach is deterministic, automatic, and can adjust multiple RCs using ellipses. The proposed algorithm is non-iterative, computationally efficient and invariant to affine transformations. The method was validated on both synthetic images and more than 400 real images of Ad-infected cells at various timepoints of the viral replication cycle, obtaining relevant information about the biogenesis of adenoviral RCs. As proof of concept, the algorithm was then used to quantitatively compare RCs in cells infected with the adenovirus wild type or an adenovirus mutant that is null for expression of a viral protein known to affect activities associated with RCs, resulting in deficient viral progeny production. PMID:27819325
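A baseline sketch of ellipse adjustment to bright compartment-like blobs using OpenCV's deterministic least-squares ellipse fit; the published algorithm fits multiple ellipses with its own non-iterative formulation, so this is an analogous simple pipeline, not the authors' method. The file name and thresholding choice are assumptions.

```python
import cv2

img = cv2.imread("infected_nucleus.png", cv2.IMREAD_GRAYSCALE)
# Otsu threshold separates bright replication-compartment-like blobs
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# fitEllipse needs at least 5 contour points
ellipses = [cv2.fitEllipse(c) for c in contours if len(c) >= 5]
for (cx, cy), (major, minor), angle in ellipses:
    print(f"RC at ({cx:.0f},{cy:.0f}), axes {major:.1f}x{minor:.1f}, angle {angle:.0f}")
```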
Spectral saliency via automatic adaptive amplitude spectrum analysis
NASA Astrophysics Data System (ADS)
Wang, Xiaodong; Dai, Jialun; Zhu, Yafei; Zheng, Haiyong; Qiao, Xiaoyan
2016-03-01
Suppressing nonsalient patterns by smoothing the amplitude spectrum at an appropriate scale has been shown to effectively detect visual saliency in the frequency domain. Different filter scales are required for different types of salient objects. We observe that the optimal scale for smoothing the amplitude spectrum shares a specific relation with the size of the salient region. Based on this observation and on bottom-up saliency detection characterized by spectrum scale-space analysis for natural images, we propose to detect visual saliency, especially for salient objects of different sizes and locations, via automatic adaptive amplitude spectrum analysis. We not only provide a new criterion for automatic optimal scale selection but also retain the saliency maps corresponding to different salient objects, with meaningful saliency information, through adaptive weighted combination. Quantitative and qualitative performance is evaluated with three different kinds of metrics on the four most widely used datasets and one up-to-date large-scale dataset. The experimental results validate that our method outperforms existing state-of-the-art saliency models for predicting human eye fixations in terms of accuracy and robustness.
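A sketch of the underlying frequency-domain operation the abstract builds on: smoothing the amplitude spectrum at one scale and reconstructing with the original phase suppresses repeated, nonsalient patterns. The adaptive scale selection and weighted combination that are the paper's contribution are not reproduced; the fixed scale here is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_saliency(gray, scale=3.0):
    """gray: 2-D float image. Returns a saliency map in [0, 1]."""
    f = np.fft.fftshift(np.fft.fft2(gray))
    amp, phase = np.abs(f), np.angle(f)
    smooth_amp = gaussian_filter(amp, scale)       # suppress amplitude spikes
    rec = np.fft.ifft2(np.fft.ifftshift(smooth_amp * np.exp(1j * phase)))
    sal = np.abs(rec) ** 2                         # squared reconstruction magnitude
    sal = gaussian_filter(sal, 2.0)                # light post-smoothing
    return sal / sal.max()
```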
System design for 3D wound imaging using low-cost mobile devices
NASA Astrophysics Data System (ADS)
Sirazitdinova, Ekaterina; Deserno, Thomas M.
2017-03-01
The state-of-the-art method of wound assessment is a manual, imprecise and time-consuming procedure. Performed by clinicians, it offers limited reproducibility and accuracy, consumes substantial time and incurs high costs. Novel technologies such as laser scanning microscopy, multi-photon microscopy, optical coherence tomography and hyperspectral imaging, as well as devices relying on structured-light sensors, make accurate wound assessment possible. However, such methods have limitations due to high costs and may lack portability and availability. In this paper, we present a low-cost wound assessment system and architecture for fast and accurate cutaneous wound assessment using inexpensive consumer smartphone devices. Computer vision techniques are applied either on the device or on the server to reconstruct wounds in 3D as dense models, which are generated from images taken with the built-in single camera of a smartphone. The system architecture includes imaging (smartphone), processing (smartphone or PACS) and storage (PACS) devices. It supports tracking over time by alignment of 3D models, color correction using a reference color card placed in the scene, and automatic segmentation of wound regions. Using our system, we are able to detect and document quantitative characteristics of chronic wounds, including size, depth, volume and rate of healing, as well as qualitative characteristics such as color, presence of necrosis and type of involved tissue.
NASA Astrophysics Data System (ADS)
Hu, Li; Zhao, Nanjing; Liu, Wenqing; Meng, Deshuo; Fang, Li; Wang, Yin; Yu, Yang; Ma, Mingjun
2015-08-01
Heavy metals in water can be deposited on graphite flakes, which can serve as an enrichment method for laser-induced breakdown spectroscopy (LIBS); this approach is studied in this paper. The graphite samples were prepared with an automatic device composed of a loading and unloading module, a module for quantitatively adding solution, a rapid heating and drying module, and a precise rotating module. The experimental results showed that the sample preparation method had no significant effect on sample distribution, and the LIBS signal accumulated over 20 pulses was stable and repeatable. With an increasing amount of sample solution on the graphite flake, the peak intensity at Cu I 324.75 nm followed an exponential function with a correlation coefficient of 0.9963, while the background intensity remained unchanged. The limit of detection (LOD) was calculated through linear fitting of the peak intensity versus the concentration. The LOD decreased rapidly with an increasing amount of sample solution until the amount exceeded 20 mL, and the correlation coefficient of the exponential fit was 0.991. The LOD of Pb, Ni, Cd, Cr and Zn after evaporating different amounts of sample solution on the graphite flakes was measured, and the variation of their LOD with sample solution amount was similar to that of Cu. The experimental data and conclusions could provide a reference for automatic sample preparation and in situ detection of heavy metals. Supported by the National Natural Science Foundation of China (No. 60908018), the National High Technology Research and Development Program of China (No. 2013AA065502) and the Anhui Province Outstanding Youth Science Fund of China (No. 1108085J19).
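A small numeric sketch of the detection-limit calculation implied above: fit peak intensity versus concentration, then take LOD = 3·σ(blank)/slope, the conventional 3-sigma criterion. All numbers are invented for illustration, not measured values from the paper.

```python
import numpy as np

conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0])               # standards, mg/L
intensity = np.array([120., 410., 702., 1295., 3080.])    # peak counts at Cu I 324.75 nm

slope, intercept = np.polyfit(conc, intensity, 1)          # linear calibration
sigma_blank = 15.0                                         # std. dev. of blank signal
lod = 3 * sigma_blank / slope                              # 3-sigma detection limit
print(f"slope = {slope:.1f} counts per mg/L, LOD = {lod:.3f} mg/L")
```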
Fully automatic and precise data analysis developed for time-of-flight mass spectrometry.
Meyer, Stefan; Riedo, Andreas; Neuland, Maike B; Tulej, Marek; Wurz, Peter
2017-09-01
Scientific objectives of current and future space missions are focused on the investigation of the origin and evolution of the solar system, with particular emphasis on habitability and signatures of past and present life. For in situ measurements of the chemical composition of solid samples on planetary surfaces, of the neutral atmospheric gas and of the thermal plasma of planetary atmospheres, mass spectrometers making use of time-of-flight mass analysers are a widely used technique. However, such investigations imply measurements with good statistics and, thus, a large amount of data to be analysed. Therefore, faster and especially robust automated data analysis with enhanced accuracy is required. In this contribution, an automatic data analysis software package, which allows fast and precise quantitative analysis of time-of-flight mass spectrometric data, is presented and discussed in detail. A crucial part of this software is a robust and fast peak finding algorithm with a consecutive numerical integration method allowing precise data analysis. We tested our analysis software with data from different time-of-flight mass spectrometers and different measurement campaigns thereof. The quantitative analysis of isotopes, using automatic data analysis, yields results with an accuracy of isotope ratios up to 100 ppm for a signal-to-noise ratio (SNR) of 10^4. We show that the accuracy of isotope ratios is in fact proportional to SNR^-1. Furthermore, we observe that the accuracy of isotope ratios is inversely proportional to the mass resolution. Additionally, we show that the accuracy of isotope ratios depends on the sample width T_s as T_s^0.5. Copyright © 2017 John Wiley & Sons, Ltd.
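A sketch of the two core steps named above, peak finding on a time-of-flight spectrum followed by numerical integration of each peak to obtain areas for an isotope ratio; the spectrum is synthetic and the window width, thresholds, and peak positions are illustrative assumptions, not the software's actual parameters.

```python
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0, 10, 20000)                       # flight time axis (us)
rng = np.random.default_rng(1)
spectrum = (100.0 * np.exp(-((t - 4.0) / 0.01) ** 2)    # major isotope peak
            + 1.1 * np.exp(-((t - 4.2) / 0.01) ** 2)    # minor isotope peak
            + rng.normal(0, 0.05, t.size))              # baseline noise

peaks, _ = find_peaks(spectrum, height=0.5, distance=50)
areas = []
for p in peaks:
    lo, hi = max(p - 40, 0), min(p + 40, t.size - 1)    # fixed window per peak
    areas.append(np.trapz(spectrum[lo:hi], t[lo:hi]))   # numerical integration

ratio = areas[1] / areas[0]                              # isotope ratio estimate
print(f"{len(peaks)} peaks found, isotope ratio = {ratio:.4f}")
```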
Criteria for the assessment of analyser practicability
Biosca, C.; Galimany, R.
1993-01-01
This article lists the theoretical criteria that need to be considered to assess the practicability of an automatic analyser. Two essential sets of criteria should be taken into account when selecting an automatic analyser: ‘reliability’ and ‘practicability’. Practicability covers the features that provide information about the suitability of an analyser for specific working conditions. These practicability criteria are classified in this article and include the environment; work organization; versatility and flexibility; safety controls; staff training; and maintenance and operational costs. PMID:18924972
Probst, Yasmine; Nguyen, Duc Thanh; Tran, Minh Khoi; Li, Wanqing
2015-01-01
Dietary assessment, while traditionally based on pen-and-paper, is rapidly moving towards automatic approaches. This study describes an Australian automatic food record method and its prototype for dietary assessment via the use of a mobile phone and techniques of image processing and pattern recognition. Common visual features including scale invariant feature transformation (SIFT), local binary patterns (LBP), and colour are used for describing food images. The popular bag-of-words (BoW) model is employed for recognizing the images taken by a mobile phone for dietary assessment. Technical details are provided together with discussions on the issues and future work. PMID:26225994
Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R.
2016-01-01
Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM®) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard. PMID:27257542
ERIC Educational Resources Information Center
Beale, Ivan L.
2005-01-01
Computer assisted learning (CAL) can involve a computerised intelligent learning environment, defined as an environment capable of automatically, dynamically and continuously adapting to the learning context. One aspect of this adaptive capability involves automatic adjustment of instructional procedures in response to each learner's performance,…
Graphonomics, Automaticity and Handwriting Assessment
ERIC Educational Resources Information Center
Tucha, Oliver; Tucha, Lara; Lange, Klaus W.
2008-01-01
A recent review of handwriting research in "Literacy" concluded that current curricula of handwriting education focus too much on writing style and neatness and neglect the aspect of handwriting automaticity. This conclusion is supported by evidence in the field of graphonomic research, where a range of experiments have been used to investigate…
Automatic Conceptual Encoding of Printed Verbal Material: Assessment of Population Differences.
ERIC Educational Resources Information Center
Kee, Daniel W.; And Others
1984-01-01
The release from proactive interference task was used to investigate the categorical encoding of items. Low socioeconomic status Black and middle socioeconomic status White children were compared. Conceptual encoding differences between these populations were not detected in automatic conceptual encoding but were detected when the free recall method…
ERIC Educational Resources Information Center
Salton, G.
1980-01-01
Summarizes studies of pseudoclassification, a process of utilizing user relevance assessments of certain documents with respect to certain queries to build term classes designed to retrieve relevant documents. Conclusions are reached concerning the effectiveness and feasibility of constructing term classifications based on human relevance…
Automatically Assessing Lexical Sophistication: Indices, Tools, Findings, and Application
ERIC Educational Resources Information Center
Kyle, Kristopher; Crossley, Scott A.
2015-01-01
This study explores the construct of lexical sophistication and its applications for measuring second language lexical and speaking proficiency. In doing so, the study introduces the Tool for the Automatic Analysis of LExical Sophistication (TAALES), which calculates text scores for 135 classic and newly developed lexical indices related to word…
AN ASSESSMENT OF AUTOMATIC SEWER FLOW SAMPLERS (EPA/600/2-75/065)
A brief review of the characteristics of storm and combined sewer flows is given followed by a general discussion of the purposes for and requirements of a sampling program. The desirable characteristics of automatic sampling equipment are set forth and problem areas are outlined...
Feedback Improvement in Automatic Program Evaluation Systems
ERIC Educational Resources Information Center
Skupas, Bronius
2010-01-01
Automatic program evaluation is a way to assess source program files. These techniques are used in learning management environments, programming exams and contest systems. However, use of automated program evaluation encounters problems: some evaluations are not clear for the students and the system messages do not show reasons for lost points.…
Acquisition of Automatic Imitation Is Sensitive to Sensorimotor Contingency
ERIC Educational Resources Information Center
Cook, Richard; Press, Clare; Dickinson, Anthony; Heyes, Cecilia
2010-01-01
The associative sequence learning model proposes that the development of the mirror system depends on the same mechanisms of associative learning that mediate Pavlovian and instrumental conditioning. To test this model, two experiments used the reduction of automatic imitation through incompatible sensorimotor training to assess whether mirror…
Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Oliva, Doretta; Campodonico, Francesca; Groeneweg, Jop
2003-01-01
We assessed the effects of automatically delivered stimulation on the activity level and mood (indices of happiness) of three students with multiple disabilities during their use of a stepper and a stationary bicycle. The stimulation involved a pool of favorite stimulus events that were delivered automatically, through an electronic control system, while the students were active in using the aforementioned exercise tools. Data showed that stimulation had an overall positive impact, but this was not evident on both measures (i.e., level of activity and indices of happiness) or with both exercise tools across students. These findings are discussed in relation to the outcome of an earlier study in the area by the same authors and in terms of practical implications for daily contexts.
NASA Astrophysics Data System (ADS)
Chaganti, Shikha; Nelson, Katrina; Mundy, Kevin; Luo, Yifu; Harrigan, Robert L.; Damon, Steve; Fabbri, Daniel; Mawn, Louise; Landman, Bennett
2016-03-01
Pathologies of the optic nerve and orbit impact millions of Americans, and quantitative assessment of the orbital structures on 3-D imaging would provide objective markers to enhance diagnostic accuracy, improve timely intervention, and eventually preserve visual function. Recent studies have shown that the multi-atlas methodology is suitable for identifying orbital structures, but challenges arise in the identification of the individual extraocular rectus muscles that control eye movement. This is increasingly problematic in diseased eyes, where these muscles often appear to fuse at the back of the orbit (at the resolution of clinical computed tomography imaging) due to inflammation or crowding. We propose the use of Kalman filters to track the muscles in three dimensions to refine multi-atlas segmentation and resolve ambiguity due to imaging resolution, noise, and artifacts. The purpose of our study is to investigate a method of automatically generating orbital metrics from CT imaging and to demonstrate the utility of the approach by correlating structural metrics of the eye orbit with clinical data and visual function measures in subjects with thyroid eye disease. The pilot study demonstrates that automatically calculated orbital metrics are strongly correlated with several clinical characteristics. Moreover, it is shown that the superior, inferior, medial and lateral rectus muscles obtained using Kalman filters are each correlated with different categories of functional deficit. These findings serve as a foundation for further investigation into the use of CT imaging in the study, analysis and diagnosis of ocular diseases, specifically thyroid eye disease.
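A sketch of the kind of tracker the abstract describes: a constant-velocity Kalman filter following a muscle centroid from slice to slice in 3-D. The state layout, noise levels, and step size are illustrative assumptions, not the authors' values.

```python
import numpy as np

dt = 1.0                                    # one slice step
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)   # state: [x y z vx vy vz], constant velocity
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe position only
Q, R = 0.01 * np.eye(6), 0.5 * np.eye(3)    # process / measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict-update cycle; z is the measured 3-D centroid."""
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - H @ x)                       # update with measurement
    P = (np.eye(6) - K @ H) @ P
    return x, P
```

When two muscles appear fused in a slice, the predicted position can disambiguate which blob belongs to which muscle before the update step.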
Lu, Chao; Chelikani, Sudhakar; Papademetris, Xenophon; Knisely, Jonathan P.; Milosevic, Michael F.; Chen, Zhe; Jaffray, David A.; Staib, Lawrence H.; Duncan, James S.
2011-01-01
External beam radiotherapy (EBRT) has become the preferred option for non-surgical treatment of prostate cancer and cervix cancer. In order to deliver higher doses to cancerous regions within these pelvic structures (i.e. prostate or cervix) while maintaining or lowering the doses to surrounding non-cancerous regions, it is critical to account for setup variation, organ motion, anatomical changes due to treatment and intra-fraction motion. In previous work, manual segmentation of the soft tissues was performed and images were then registered based on the manual segmentation. In this paper, we present an integrated automatic approach to multiple organ segmentation and nonrigid constrained registration, which can achieve these two aims simultaneously. The segmentation and registration steps are both formulated using a Bayesian framework, and they constrain each other using an iterative conditional model strategy. We also propose a new strategy to assess the cumulative actual dose for this novel integrated algorithm, in order both to determine whether the intended treatment is being delivered and, potentially, whether or not a plan should be adjusted for future treatment fractions. Quantitative results show that the automatic segmentation produced results with an accuracy comparable to manual segmentation, while the registration part significantly outperforms both rigid and non-rigid registration. Clinical application and evaluation of dose delivery show the superiority of the proposed method over the procedure currently used in clinical practice, i.e. manual segmentation followed by rigid registration. PMID:21646038
Estimating psycho-physiological state of a human by speech analysis
NASA Astrophysics Data System (ADS)
Ronzhin, A. L.
2005-05-01
Adverse effects of intoxication, fatigue and boredom can degrade the performance of highly trained operators of complex technical systems, with potentially catastrophic consequences. Existing physiological fitness-for-duty tests are time consuming, costly, invasive, and highly unpopular. Known non-physiological tests constitute a secondary task and interfere with the busy workload of the tested operator. Various attempts to assess the current status of the operator by processing "normal operational data" often lead to an excessive amount of computation, poorly justified metrics, and ambiguous results. At the same time, speech analysis presents a natural, non-invasive approach based upon well-established, efficient data processing. In addition, it supports both behavioral and physiological biometrics. This paper presents an approach facilitating a robust speech analysis/understanding process in spite of natural speech variability and background noise. Automatic speech recognition is suggested as a technique for detecting changes in the psycho-physiological state of a human that typically manifest themselves as changes in the characteristics of the vocal tract and in the semantic-syntactic connectivity of conversation. Preliminary tests have confirmed that a statistically significant correlation between the error rate of automatic speech recognition and the extent of alcohol intoxication does exist. In addition, the obtained data allowed exploring some interesting correlations and establishing some quantitative models. It is proposed to utilize this approach as part of a fitness-for-duty test and to compare its efficiency with analyses of iris, face geometry, thermography and other popular non-invasive biometric techniques.
Design of a monitoring network over France in case of a radiological accidental release
NASA Astrophysics Data System (ADS)
Abida, Rachid; Bocquet, Marc; Vercauteren, Nikki; Isnard, Olivier
The Institute of Radiation Protection and Nuclear Safety (France) is planning the set-up of an automatic nuclear aerosol monitoring network over the French territory. Each of the stations will be able to automatically sample the air aerosol content and provide activity concentration measurements for several radionuclides. This should help monitor the set of French and neighbouring countries' nuclear power plants and help evaluate the impact of a radiological incident occurring at one of these nuclear facilities. This paper is devoted to the spatial design of such a network. Here, any potential network is judged on its ability to extrapolate activity concentrations measured at the network stations over the whole domain. The performance of a network is quantitatively assessed through a cost function that measures the discrepancy between the extrapolation and the true concentration fields. These true fields are obtained through the computation of a database of dispersion accidents over one year of meteorology, originating from 20 French nuclear sites. A close-to-optimal network is then sought using simulated annealing optimisation. The results emphasise the importance of the cost function in the design of a network aimed at monitoring an accidental dispersion. Several choices of the norm used in the cost function are studied and lead to different designs. The influence of the number of stations is discussed. A comparison is performed with a purely geometric approach that does not involve simulations with a chemistry-transport model.
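A sketch of the simulated-annealing placement loop described above. The cost function, which in the paper measures extrapolation error against the dispersion-accident database, is abstracted here as an assumed callable `cost(network)`; cooling schedule and move rule are generic choices, not the study's.

```python
import math
import random

def anneal(candidates, n_stations, cost, t0=1.0, alpha=0.999, iters=20000):
    """Pick n_stations sites from `candidates` approximately minimizing cost."""
    network = random.sample(candidates, n_stations)
    c = cost(network)
    best, best_c = list(network), c
    t = t0
    for _ in range(iters):
        trial = list(network)
        # Elementary move: relocate one randomly chosen station
        trial[random.randrange(n_stations)] = random.choice(candidates)
        tc = cost(trial)
        # Metropolis rule: accept improvements, sometimes accept worse moves
        if tc < c or random.random() < math.exp((c - tc) / t):
            network, c = trial, tc
            if c < best_c:
                best, best_c = list(network), c
        t *= alpha                                  # geometric cooling
    return best, best_c
```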
Motion-aware stroke volume quantification in 4D PC-MRI data of the human aorta.
Köhler, Benjamin; Preim, Uta; Grothoff, Matthias; Gutberlet, Matthias; Fischbach, Katharina; Preim, Bernhard
2016-02-01
4D PC-MRI enables the noninvasive measurement of time-resolved, three-dimensional blood flow data that allow quantification of the hemodynamics. Stroke volumes are essential to assess the cardiac function and evolution of different cardiovascular diseases. The calculation depends on the wall position and vessel orientation, which both change during the cardiac cycle due to the heart muscle contraction and the pumped blood. However, current systems for the quantitative 4D PC-MRI data analysis neglect the dynamic character and instead employ a static 3D vessel approximation. We quantify differences between stroke volumes in the aorta obtained with and without consideration of its dynamics. We describe a method that uses the approximating 3D segmentation to automatically initialize segmentation algorithms that require regions inside and outside the vessel for each temporal position. This enables the use of graph cuts to obtain 4D segmentations, extract vessel surfaces including centerlines for each temporal position and derive motion information. The stroke volume quantification is compared using measuring planes in static (3D) vessels, planes with fixed angulation inside dynamic vessels (this corresponds to the common 2D PC-MRI) and moving planes inside dynamic vessels. Seven datasets with different pathologies such as aneurysms and coarctations were evaluated in close collaboration with radiologists. Compared to the experts' manual stroke volume estimations, motion-aware quantification performs, on average, 1.57% better than calculations without motion consideration. The mean difference between stroke volumes obtained with the different methods is 7.82%. Automatically obtained 4D segmentations overlap by 85.75% with manually generated ones. Incorporating motion information in the stroke volume quantification yields slight but not statistically significant improvements. The presented method is feasible for the clinical routine, since computation times are low and essential parts run fully automatically. The 4D segmentations can be used for other algorithms as well. The simultaneous visualization and quantification may support the understanding and interpretation of cardiac blood flow.
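A sketch of the quantification step this comparison rests on: through-plane velocities over the lumen cross-section are integrated to a flow rate per time point, and over the cardiac cycle to a stroke volume. For a motion-aware analysis, the mask and plane orientation would vary per time point; names, units, and the antegrade-only convention are illustrative assumptions.

```python
import numpy as np

def stroke_volume(velocity, lumen_mask, pixel_area_mm2, dt_s):
    """velocity: (T, H, W) through-plane speed in mm/s;
    lumen_mask: (T, H, W) boolean lumen per time point (motion-aware)."""
    flow = (velocity * lumen_mask).sum(axis=(1, 2)) * pixel_area_mm2  # mm^3/s
    forward = np.clip(flow, 0, None)          # keep antegrade flow only
    return forward.sum() * dt_s / 1000.0      # mL per cardiac cycle
```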
Creating an automated tool for measuring software cohesion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tutton, J.M.; Zucconi, L.
1994-05-06
Program modules with high complexity tend to be more error prone and more difficult to understand. These factors increase maintenance and enhancement costs. Hence, a tool that can help programmers determine a key factor in module complexity should be very useful. Our goal is to create a software tool that will automatically give a quantitative measure of the cohesiveness of a given module, and hence give us an estimate of the "maintainability" of that module. The tool will use a metric developed by Professors Linda M. Ott and James M. Bieman. The Ott/Bieman metric gives quantitative measures that indicate the degree of functional cohesion using abstract data slices.
Some selected quantitative methods of thermal image analysis in Matlab.
Koprowski, Robert
2016-05-01
The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images, and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for the skin of a human foot and of a face. The full source code of the developed application is provided as an attachment. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Cheng, Wing-Chi; Yau, Tsan-Sang; Wong, Ming-Kei; Chan, Lai-Ping; Mok, Vincent King-Kuen
2006-10-16
A rapid urinalysis system based on SPE-LC-MS/MS with an in-house post-analysis data management system has been developed for the simultaneous identification and semi-quantitation of opiates (morphine, codeine), methadone, amphetamines (amphetamine, methylamphetamine (MA), 3,4-methylenedioxyamphetamine (MDA) and 3,4-methylenedioxymethamphetamine (MDMA)), 11 benzodiazepines or their metabolites, and ketamine. The urine samples are subjected to automated solid phase extraction prior to analysis by LC-MS (Finnigan Surveyor LC connected to a Finnigan LCQ Advantage) fitted with an Alltech Rocket Platinum EPS C-18 column. With a single-point calibration at the cut-off concentration for each analyte, simultaneous identification and semi-quantitation of the above-mentioned drugs can be achieved in a 10 min run per urine sample. A computer macro-program package was developed to automatically retrieve appropriate data from the analytical data files, compare results with preset values (such as cut-off concentrations and MS matching scores) for each drug being analyzed, and generate user-defined Excel reports flagging all positive and negative results batch-wise for ease of checking. The final analytical results are automatically copied into an Access database for report generation purposes. Through the use of automation in sample preparation, simultaneous identification and semi-quantitation by LC-MS/MS, and a tailor-made post-analysis data management system, this new urinalysis system significantly improves the quality of results, reduces post-analysis data treatment time and data-transfer errors, and is suitable for high-throughput laboratories operating batch-wise.
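A minimal sketch of the single-point semi-quantitation arithmetic described above: the analyte response in the sample is scaled by the response of a calibrator prepared at the cut-off concentration. All numeric values are illustrative, not from the paper.

```python
def semi_quant(sample_area, calibrator_area, cutoff):
    """Single-point calibration at the cut-off concentration."""
    return sample_area / calibrator_area * cutoff

# Example: peak areas from LC-MS/MS, cut-off 300 ng/mL (illustrative values)
conc = semi_quant(sample_area=5.2e5, calibrator_area=3.9e5, cutoff=300.0)
verdict = "positive" if conc >= 300.0 else "negative"
print(f"estimated ~{conc:.0f} ng/mL -> {verdict}")
```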
Vasconcelos, Maria J M; Ventura, Sandra M R; Freitas, Diamantino R S; Tavares, João Manuel R S
2012-03-01
The morphological and dynamic characterisation of the vocal tract during speech production has been gaining greater attention, motivated by the latest improvements in magnetic resonance (MR) imaging, namely the use of higher magnetic fields such as 3.0 Tesla. In this work, the automatic study of the vocal tract from 3.0 Tesla MR images was assessed through the application of statistical deformable models. The primary goal focused on the analysis of the shape of the vocal tract during the articulation of European Portuguese sounds, followed by the evaluation of the results concerning automatic segmentation, i.e. identification of the vocal tract in new MR images. As far as speech production is concerned, this is the first attempt to automatically characterise and reconstruct the vocal tract shape from 3.0 Tesla MR images by using deformable models, in particular active shape and appearance models. The achieved results clearly evidence the adequacy and advantage of this automatic analysis of 3.0 Tesla MR images with deformable models for extracting the vocal tract shape and assessing the involved articulatory movements. Such capabilities are much needed, for example, for a better understanding of speech production, particularly in patients suffering from articulatory disorders, and for building enhanced speech synthesizer models.
Automated Detection of Actinic Keratoses in Clinical Photographs
Hames, Samuel C.; Sinnya, Sudipta; Tan, Jean-Marie; Morze, Conrad; Sahebian, Azadeh; Soyer, H. Peter; Prow, Tarl W.
2015-01-01
Background Clinical diagnosis of actinic keratosis is known to have intra- and inter-observer variability, and there is currently no non-invasive and objective measure to diagnose these lesions. Objective The aim of this pilot study was to determine whether automatically detecting and circumscribing actinic keratoses in clinical photographs is feasible. Methods Photographs of the face and dorsal forearms were acquired in 20 volunteers from two groups: the first with at least one actinic keratosis present on the face and each arm, the second with no actinic keratoses. The photographs were automatically analysed using colour space transforms and morphological features to detect erythema. The automated output was compared with a senior consultant dermatologist's assessment of the photographs, including the intra-observer variability. Performance was assessed by the correlation between the total lesion counts of the automated method and the dermatologist, and by whether the individual lesions detected were in the same locations as the dermatologist-identified lesions. Additionally, the ability to limit false positives was assessed by automatic analysis of the photographs from the no actinic keratosis group in comparison to the high actinic keratosis group. Results The correlation between the automatic and dermatologist counts was 0.62 on the face and 0.51 on the arms, compared to the dermatologist's intra-observer variation of 0.83 and 0.93 for the same. Sensitivity of automatic detection was 39.5% on the face and 53.1% on the arms. Positive predictive values were 13.9% on the face and 39.8% on the arms. Significantly more lesions (p<0.0001) were detected in the high actinic keratosis group than in the no actinic keratosis group. Conclusions The proposed method was inferior to assessment by the dermatologist in terms of sensitivity and positive predictive value. However, this pilot study used only a single simple feature and was still able to achieve a detection sensitivity of 53.1% on the arms. This suggests that image analysis is a feasible avenue of investigation for overcoming variability in clinical assessment. Future studies should focus on more sophisticated features to improve sensitivity for actinic keratoses without erythema and to limit false positives associated with the anatomical structures of the face. PMID:25615930
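A minimal sketch of erythema detection of the kind described above, assuming a CIELAB transform with thresholding of the a* (redness) channel followed by morphological clean-up; the threshold and size parameters are assumptions, not the study's values.

    # Sketch of erythema detection via a colour-space transform plus morphology.
    # The a* channel of CIELAB increases with redness; the threshold and minimum
    # lesion size below are illustrative assumptions.
    import numpy as np
    from skimage import color, measure, morphology

    def detect_erythema(rgb_image, a_threshold=15.0, min_area=50):
        lab = color.rgb2lab(rgb_image)        # convert to CIELAB
        redness = lab[..., 1]                 # a* channel: green (-) to red (+)
        mask = redness > a_threshold          # candidate erythematous pixels
        mask = morphology.binary_opening(mask, morphology.disk(2))  # remove speckle
        mask = morphology.remove_small_objects(mask, min_size=min_area)
        labels = measure.label(mask)          # one label per candidate lesion
        return labels, labels.max()           # label map and lesion count

    rgb = np.random.rand(128, 128, 3)         # stand-in for a clinical photograph
    labels, n_lesions = detect_erythema(rgb)
    print("candidate lesions:", n_lesions)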
Wang, Jinke; Cheng, Yuanzhi; Guo, Changyong; Wang, Yadong; Tamura, Shinichi
2016-05-01
We propose a fully automatic 3D segmentation framework to segment the liver in challenging cases featuring low contrast with adjacent organs and the presence of pathologies, using abdominal CT images. First, all atlases in the selected training datasets are weighted by calculating the similarities between the atlases and the test image, to dynamically generate a subject-specific probabilistic atlas for the test image. The most likely liver region of the test image is then determined based on the generated atlas. A rough segmentation is obtained by a maximum a posteriori classification of the probability map, and the final liver segmentation is produced by a shape-intensity prior level set within the most likely liver region. Our method is evaluated and demonstrated on 25 test CT datasets from our partner site, and its results are compared with those of two state-of-the-art liver segmentation methods. Moreover, our performance results on 10 MICCAI test datasets were submitted to the organizers for comparison with the other automatic algorithms. Using the 25 test CT datasets, the average symmetric surface distance is [Formula: see text] mm (range 0.62-2.12 mm), the root mean square symmetric surface distance error is [Formula: see text] mm (range 0.97-3.01 mm), and the maximum symmetric surface distance error is [Formula: see text] mm (range 12.73-26.67 mm) by our method. On the 10 MICCAI test datasets, our method ranks 10th among all 47 automatic algorithms on the site as of July 2015. Quantitative results, as well as qualitative comparisons of segmentations, indicate that our method is a promising tool to improve segmentation efficiency. The applicability of the proposed method to some challenging clinical problems in liver segmentation is demonstrated, with good results in both quantitative and qualitative experiments. This study suggests that the proposed framework may be good enough to replace the time-consuming and tedious slice-by-slice manual segmentation approach.
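The atlas-weighting step lends itself to a short sketch. The following Python fragment illustrates one plausible reading, with normalised cross-correlation as the similarity measure and a softmax weighting, both assumptions; registration of the atlases to the test image is taken as already done upstream.

    # Sketch of subject-specific probabilistic atlas generation: each training
    # atlas is weighted by its similarity to the test image and the binary
    # liver label maps are averaged with those weights.
    import numpy as np

    def ncc(a, b):
        # normalised cross-correlation (assumed similarity measure)
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        return float((a * b).mean())

    def probabilistic_atlas(test_img, atlas_imgs, atlas_labels):
        sims = np.array([ncc(test_img, img) for img in atlas_imgs])
        w = np.exp(sims) / np.exp(sims).sum()        # softmax weighting (assumption)
        return sum(wi * lab for wi, lab in zip(w, atlas_labels))  # per-voxel probability

    rng = np.random.default_rng(0)
    test = rng.random((16, 16, 16))
    imgs = [rng.random((16, 16, 16)) for _ in range(5)]
    labs = [(rng.random((16, 16, 16)) > 0.5).astype(float) for _ in range(5)]
    prob_map = probabilistic_atlas(test, imgs, labs)
    likely_liver = prob_map > 0.5                     # rough MAP-style threshold
    print(prob_map.shape, likely_liver.mean())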
ASPECTS: an automation-assisted SPE method development system.
Li, Ming; Chou, Judy; King, Kristopher W; Yang, Liyu
2013-07-01
A typical conventional SPE method development (MD) process usually involves deciding the chemistry of the sorbent and eluent based on information about the analyte; experimentally preparing and trying out various combinations of adsorption chemistry and elution conditions; quantitatively evaluating the various conditions; and comparing quantitative results from all combinations of conditions to select the best condition for method qualification. The second and fourth steps have mostly been performed manually until now. We developed an automation-assisted system that expedites the conventional SPE MD process by automating 99% of the second step, and expedites the fourth step by automatically processing the results data and presenting it to the analyst in a user-friendly format. The automation-assisted SPE MD system greatly reduces the manual labor in SPE MD work, prevents analyst errors from causing misinterpretation of quantitative results, and shortens data analysis and interpretation time.
Validation of Computerized Automatic Calculation of the Sequential Organ Failure Assessment Score
Harrison, Andrew M.; Pickering, Brian W.; Herasevich, Vitaly
2013-01-01
Purpose. To validate the use of a computer program for the automatic calculation of the sequential organ failure assessment (SOFA) score, as compared to the gold standard of manual chart review. Materials and Methods. Adult admissions (age > 18 years) to the medical ICU with a length of stay greater than 24 hours were studied in the setting of an academic tertiary referral center. A retrospective cross-sectional analysis was performed using a derivation cohort to compare automatic calculation of the SOFA score to the gold standard of manual chart review. After critical appraisal of sources of disagreement, another analysis was performed using an independent validation cohort. Then, a prospective observational analysis was performed using an implementation of this computer program in AWARE Dashboard, which is an existing real-time patient EMR system for use in the ICU. Results. Good agreement between the manual and automatic SOFA calculations was observed for both the derivation (N=94) and validation (N=268) cohorts: 0.02 ± 2.33 and 0.29 ± 1.75 points, respectively. These results were validated in AWARE (N=60). Conclusion. This EMR-based automatic tool accurately calculates SOFA scores and can facilitate ICU decisions without the need for manual data collection. This tool can also be employed in a real-time electronic environment. PMID:23936639
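To illustrate the kind of rule-based logic such a calculator encodes, here is a sketch of two SOFA components (coagulation and liver) in Python, using the commonly published thresholds; the input format is hypothetical, and this is not the validated program described above.

    # Sketch of rule-based SOFA subscore extraction from EMR values. Only the
    # coagulation and liver components are shown; thresholds follow the
    # commonly published SOFA definitions.
    def sofa_coagulation(platelets):           # platelets, x10^3 per microlitre
        for score, limit in ((4, 20), (3, 50), (2, 100), (1, 150)):
            if platelets < limit:
                return score
        return 0

    def sofa_liver(bilirubin_mg_dl):
        for score, limit in ((4, 12.0), (3, 6.0), (2, 2.0), (1, 1.2)):
            if bilirubin_mg_dl >= limit:
                return score
        return 0

    patient = {"platelets": 85, "bilirubin": 2.4}   # hypothetical worst values in 24 h
    partial = sofa_coagulation(patient["platelets"]) + sofa_liver(patient["bilirubin"])
    print("partial SOFA (coagulation + liver):", partial)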
Locating articular cartilage in MR images
NASA Astrophysics Data System (ADS)
Folkesson, Jenny; Dam, Erik; Pettersen, Paola; Olsen, Ole F.; Nielsen, Mads; Christiansen, Claus
2005-04-01
Accurate computation of the thickness of the articular cartilage is of great importance when diagnosing and monitoring the progress of joint diseases such as osteoarthritis. A fully automated cartilage assessment method is preferable to methods using manual interaction, in order to avoid inter- and intra-observer variability. As a first step in the cartilage assessment, we present an automatic method for locating articular cartilage in knee MRI using supervised learning. The next step will be to fit a variable shape model to the cartilage, initiated at the location found using the method presented in this paper. From the model, disease markers will be extracted for the quantitative evaluation of the cartilage. The cartilage is located using an ANN classifier, where every voxel is classified as cartilage or non-cartilage based on prior knowledge of the cartilage structure. The classifier is tested using leave-one-out evaluation, and we found the average sensitivity and specificity to be 91.0% and 99.4%, respectively. The center of mass calculated from voxels classified as cartilage is similar to the corresponding value calculated from manual segmentations, which confirms that this method can find a good initial position for a shape model.
Song, Lei; Gao, Jungang; Wang, Sheng; Hu, Huasi; Guo, Youmin
2017-01-01
Estimation of pleural effusion volume is an important clinical issue. Existing methods cannot assess it accurately when there is a large volume of liquid in the pleural cavity and/or the patient has another disease (e.g. pneumonia). To help address this issue, the objective of this study was to develop and test a novel algorithm jointly using a B-spline and a local clustering level set method, namely BLL. The BLL algorithm was applied to a dataset of 27 pleural effusions detected on chest CT examinations of 18 adult patients with free pleural effusion. Study results showed that the average volumes of pleural effusion computed using the BLL algorithm and assessed manually by the physicians were 586 ± 339 ml and 604 ± 352 ml, respectively. For the same patient, the volume of the pleural effusion segmented semi-automatically was 101.8 ± 4.6% of that segmented manually. Dice similarity was found to be 0.917 ± 0.031. The study demonstrated the feasibility of applying the new BLL algorithm to accurately measure the volume of pleural effusion.
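The reported agreement measures are easy to reproduce. The sketch below computes effusion volume from a binary mask with known voxel size, and the Dice similarity coefficient between two segmentations; the masks are synthetic stand-ins.

    # Volume from a binary mask and voxel spacing, plus the Dice similarity
    # coefficient between automatic and manual segmentations.
    import numpy as np

    def volume_ml(mask, voxel_mm3):
        return mask.sum() * voxel_mm3 / 1000.0       # mm^3 -> ml

    def dice(a, b):
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    rng = np.random.default_rng(1)
    auto = rng.random((64, 64, 32)) > 0.6
    manual = auto.copy()
    manual[:2] = ~manual[:2]                          # simulate small disagreement
    print("auto volume (ml):", round(volume_ml(auto, voxel_mm3=0.8), 1))
    print("Dice:", round(dice(auto, manual), 3))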
Revisiting Gaussian Process Regression Modeling for Localization in Wireless Sensor Networks
Richter, Philipp; Toledano-Ayala, Manuel
2015-01-01
Signal strength-based positioning in wireless sensor networks is a key technology for seamless, ubiquitous localization, especially in areas where Global Navigation Satellite System (GNSS) signals propagate poorly. To enable wireless local area network (WLAN) location fingerprinting in larger areas while maintaining accuracy, methods to reduce the effort of radio map creation must be consolidated and automated. Gaussian process regression has been applied to overcome this issue, with promising results, but the fit of the model was never thoroughly assessed. Instead, most studies trained a readily available model, relying on the zero mean and squared exponential covariance function, without further scrutiny. This paper studies Gaussian process regression model selection for WLAN fingerprinting in indoor and outdoor environments. We train several models for indoor, outdoor and combined areas; we evaluate them quantitatively and compare them by means of adequate model measures, hence assessing the fit of these models directly. To illuminate the quality of the model fit, the residuals of the proposed model are investigated as well. Comparative experiments on positioning performance verify and conclude the model selection. In this way, we show that the standard model is not the most appropriate, discuss alternatives and present our best candidate. PMID:26370996
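The model-selection exercise can be sketched with off-the-shelf tools. The fragment below fits Gaussian process regressors with a squared exponential and two Matern covariance functions to synthetic received-signal-strength data and compares fitted log marginal likelihoods; the log-distance data model is an assumption, not the paper's measurements.

    # Compare GP covariance functions for RSS fingerprinting by fitted
    # log marginal likelihood (higher is better).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, Matern, WhiteKernel

    rng = np.random.default_rng(2)
    X = rng.uniform(0, 50, size=(120, 2))                 # positions (m)
    y = -40 - 20 * np.log10(np.linalg.norm(X - 25, axis=1) + 1) \
        + rng.normal(0, 2, 120)                           # log-distance RSS + noise

    candidates = {
        "squared exponential": RBF() + WhiteKernel(),
        "Matern nu=1.5": Matern(nu=1.5) + WhiteKernel(),
        "Matern nu=2.5": Matern(nu=2.5) + WhiteKernel(),
    }
    for name, kernel in candidates.items():
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
        print(f"{name:20s} log marginal likelihood = {gp.log_marginal_likelihood_value_:.1f}")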
Gennaro, G; Ballaminut, A; Contento, G
2017-09-01
This study aims to illustrate a multiparametric automatic method for monitoring the long-term reproducibility of digital mammography systems, and its application on a large scale. Twenty-five digital mammography systems employed within a regional screening programme were controlled weekly using the same type of phantom, whose images were analysed by an automatic software tool. To assess system reproducibility levels, 15 image quality indices (IQIs) were extracted and compared with the corresponding indices previously determined by a baseline procedure. The coefficients of variation (COVs) of the IQIs were used to assess the overall variability. A total of 2553 phantom images were collected from the 25 digital mammography systems from March 2013 to December 2014. Most of the systems showed excellent image quality reproducibility over the surveillance interval, with mean variability below 5%. The variability of each IQI was below 5%, with the exception of one index associated with the smallest phantom objects (0.25 mm), which was below 10%. The method applied for the reproducibility tests - multi-detail phantoms, a cloud-based automatic software tool measuring multiple image quality indices, and statistical process control - was proven to be effective, applicable on a large scale and suitable for any type of digital mammography system. Key points:
• Reproducibility of mammography image quality should be monitored by appropriate quality controls.
• Use of automatic software tools allows image quality evaluation by multiple indices.
• System reproducibility can be assessed by comparing current index values with baseline data.
• Overall system reproducibility of modern digital mammography systems is excellent.
• The method proposed and applied is cost-effective and easily scalable.
Evaluation of automatic image quality assessment in chest CT - A human cadaver study.
Franck, Caro; De Crop, An; De Roo, Bieke; Smeets, Peter; Vergauwen, Merel; Dewaele, Tom; Van Borsel, Mathias; Achten, Eric; Van Hoof, Tom; Bacher, Klaus
2017-04-01
The evaluation of clinical image quality (IQ) is important to optimize CT protocols and to keep patient doses as low as reasonably achievable. Considering the significant amount of effort needed for human observer studies, automatic IQ tools are a promising alternative. The purpose of this study was to evaluate automatic IQ assessment in chest CT using Thiel-embalmed cadavers. Chest CTs of Thiel-embalmed cadavers were acquired at different exposures. Clinical IQ was determined by performing a visual grading analysis. Physical-technical IQ (noise, contrast-to-noise and contrast-detail) was assessed in a Catphan phantom. Soft and sharp reconstructions were made with filtered back projection and two strengths of iterative reconstruction. In addition to the classical IQ metrics, an automatic algorithm was used to calculate image quality scores (IQs). To be able to compare datasets reconstructed with different kernels, the IQs values were normalized. Good correlations were found between IQs and the measured physical-technical image quality: noise (ρ=-1.00), contrast-to-noise (ρ=1.00) and contrast-detail (ρ=0.96). The correlation coefficients between IQs and the observed clinical image quality of soft and sharp reconstructions were 0.88 and 0.93, respectively. The automatic scoring algorithm is a promising tool for the evaluation of thoracic CT scans in daily clinical practice. It allows monitoring of the image quality of a chest protocol over time, without human intervention. Different reconstruction kernels can be compared after normalization of the IQs. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Automatic analysis of the micronucleus test in primary human lymphocytes using image analysis.
Frieauff, W; Martus, H J; Suter, W; Elhajouji, A
2013-01-01
The in vitro micronucleus test (MNT) is a well-established test for early screening of new chemical entities in industrial toxicology. For assessing the clastogenic or aneugenic potential of a test compound, micronucleus induction in cells has been shown repeatedly to be a sensitive and specific parameter. Various automated systems to replace the tedious and time-consuming visual slide analysis procedure, as well as flow cytometric approaches, have been discussed. The ROBIAS (Robotic Image Analysis System) for both automatic cytotoxicity assessment and micronucleus detection in human lymphocytes was developed at Novartis, where the assay has been used to validate positive results obtained in the MNT in TK6 cells, which serves as the primary screening system for genotoxicity profiling in early drug development. In addition, the in vitro MNT has become an accepted alternative to support clinical studies and will be used for regulatory purposes as well. The comparison of visual with automatic analysis results showed a high degree of concordance for 25 independent experiments conducted for the profiling of 12 compounds. For concentration series of cyclophosphamide and carbendazim, a very good correlation between automatic and visual analysis by two examiners could be established, both for the relative division index used as a cytotoxicity parameter and for micronuclei scoring in mono- and binucleated cells. Generally, false-positive micronucleus decisions could be controlled by fast and simple relocation of the automatically detected patterns. The possibility of analysing 24 slides within 65 h by automatic analysis over the weekend and the high reproducibility of the results make automatic image processing a powerful tool for micronucleus analysis in primary human lymphocytes. The automated slide analysis for the MNT in human lymphocytes complements the portfolio of image analysis applications on ROBIAS, which supports various assays at Novartis.
Haderlein, Tino; Döllinger, Michael; Matoušek, Václav; Nöth, Elmar
2016-10-01
Automatic voice assessment is often performed using sustained vowels. In contrast, speech analysis of read-out texts can be applied to voice and speech assessment. Automatic speech recognition and prosodic analysis were used to find regression formulae between automatic and perceptual assessment of four voice and four speech criteria. The regression was trained with 21 men and 62 women (average age 49.2 years) and tested with another set of 24 men and 49 women (48.3 years), all suffering from chronic hoarseness. They read the text 'Der Nordwind und die Sonne' ('The North Wind and the Sun'). Five voice and speech therapists evaluated the data on 5-point Likert scales. Ten prosodic and recognition accuracy measures (features) were identified which describe all the examined criteria. Inter-rater correlation within the expert group was between r = 0.63 for the criterion 'match of breath and sense units' and r = 0.87 for the overall voice quality. Human-machine correlation was between r = 0.40 for the match of breath and sense units and r = 0.82 for intelligibility. The perceptual ratings of different criteria were highly correlated with each other. Likewise, the feature sets modeling the criteria were very similar. The automatic method is suitable for assessing chronic hoarseness in general and for subgroups of functional and organic dysphonia. In its current version, it is almost as reliable as a randomly picked rater from a group of voice and speech therapists.
ERIC Educational Resources Information Center
Arendasy, Martin E.; Sommer, Markus
2010-01-01
In complex three-dimensional mental rotation tasks males have been reported to score up to one standard deviation higher than females. However, this effect size estimate could be compromised by the presence of gender bias at the item level, which calls the validity of purely quantitative performance comparisons into question. We hypothesized that…
NASA Astrophysics Data System (ADS)
Liu, Xin; Lu, Hongbing; Chen, Hanyong; Zhao, Li; Shi, Zhengxing; Liang, Zhengrong
2009-02-01
Developmental dysplasia of the hip is a congenital hip joint malformation in which the proximal femur and acetabulum are subluxatable, dislocatable, or dislocated. Conventionally, physicians have made diagnoses and planned treatments based only on findings from two-dimensional (2D) images, manually calculating clinical parameters. However, the anatomical complexity of the disease and the limitations of current standard procedures make accurate diagnosis quite difficult. In this study, we developed a system that provides quantitative measurement of 3D clinical indexes based on computed tomography (CT) images. To extract bone structure from surrounding tissues more accurately, the system first segments the bone using a knowledge-based fuzzy clustering method, formulated by modifying the objective function of the standard fuzzy c-means algorithm with an additive adaptation penalty. The second part of the system automatically calculates the clinical indexes, which are extended from 2D to 3D for accurate description of the spatial relationship between the femurs and the acetabulum. To evaluate system performance, an experimental study based on 22 patients with unilaterally or bilaterally affected hips was performed. The 3D acetabulum index (AI) results automatically provided by the system were validated by comparison with 2D results measured manually by surgeons. The correlation between the two results was found to be 0.622 (p<0.01).
Automated Video Based Facial Expression Analysis of Neuropsychiatric Disorders
Wang, Peng; Barrett, Frederick; Martin, Elizabeth; Milanova, Marina; Gur, Raquel E.; Gur, Ruben C.; Kohler, Christian; Verma, Ragini
2008-01-01
Deficits in emotional expression are prominent in several neuropsychiatric disorders, including schizophrenia. Available clinical facial expression evaluations provide subjective and qualitative measurements, which are based on static 2D images that do not capture the temporal dynamics and subtleties of expression changes. Therefore, there is a need for automated, objective and quantitative measurements of facial expressions captured using videos. This paper presents a computational framework that creates probabilistic expression profiles for video data and can potentially help to automatically quantify emotional expression differences between patients with neuropsychiatric disorders and healthy controls. Our method automatically detects and tracks facial landmarks in videos, and then extracts geometric features to characterize facial expression changes. To analyze temporal facial expression changes, we employ probabilistic classifiers that analyze facial expressions in individual frames, and then propagate the probabilities throughout the video to capture the temporal characteristics of facial expressions. The applications of our method to healthy controls and case studies of patients with schizophrenia and Asperger’s syndrome demonstrate the capability of the video-based expression analysis method in capturing subtleties of facial expression. Such results can pave the way for a video based method for quantitative analysis of facial expressions in clinical research of disorders that cause affective deficits. PMID:18045693
Kim, Jinsuh; Leira, Enrique C; Callison, Richard C; Ludwig, Bryan; Moritani, Toshio; Magnotta, Vincent A; Madsen, Mark T
2010-05-01
We developed fully automated software for dynamic susceptibility contrast (DSC) MR perfusion-weighted imaging (PWI) to efficiently and reliably derive critical hemodynamic information for acute stroke treatment decisions. Brain MR PWI was performed in 80 consecutive patients with acute nonlacunar ischemic stroke within 24 h after symptom onset, from January 2008 to August 2009. These studies were automatically processed to generate hemodynamic parameters that included cerebral blood flow, cerebral blood volume, and the mean transit time (MTT). To develop reliable software for PWI analysis, we used computationally robust algorithms, including the piecewise continuous regression method to determine bolus arrival time (BAT), log-linear curve fitting, an arrival-time-independent deconvolution method and sophisticated motion correction methods. An optimal arterial input function (AIF) search algorithm using a new artery-likelihood metric was also developed. Anatomical locations of the automatically determined AIF were reviewed and validated. The automatically computed BAT values were statistically compared with BAT estimated by a single observer. In addition, gamma-variate curve-fitting errors of the AIF and inter-subject variability of AIFs were analyzed. Lastly, two observers independently assessed the quality and area of hypoperfusion mismatched with the restricted diffusion area on motion-corrected MTT maps, and compared these with time-to-peak (TTP) maps using the standard approach. The AIF was identified within an arterial branch and enhanced areas of perfusion deficit were visualized in all evaluated cases. Total processing time was 10.9 ± 2.5 s (mean ± s.d.) without motion correction and 267 ± 80 s (mean ± s.d.) with motion correction on a standard personal computer. The MTT map produced with our software adequately estimated brain areas with perfusion deficit and was significantly less affected by random noise of the PWI when compared with the TTP map. Results of image quality assessment by two observers revealed that the MTT maps exhibited superior quality over the TTP maps (88% good rating for MTT as compared to 68% for TTP). Our software allowed fully automated deconvolution analysis of DSC PWI using proven efficient algorithms that can be applied to acute stroke treatment decisions. Our streamlined method also offers promise for further development of automated quantitative analysis of the ischemic penumbra. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.
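One well-defined piece of this pipeline, gamma-variate fitting of the bolus curve, can be sketched briefly; the functional form below is the standard gamma-variate model, while the data and initial guesses are synthetic assumptions.

    # Gamma-variate fitting for an arterial input function, one step of the
    # DSC-PWI pipeline described above.
    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_variate(t, k, t0, alpha, beta):
        """C(t) = k * (t - t0)^alpha * exp(-(t - t0) / beta) for t > t0, else 0."""
        dt = np.clip(t - t0, 0, None)
        return k * dt**alpha * np.exp(-dt / beta)

    t = np.arange(0, 60, 1.5)                           # seconds
    true = gamma_variate(t, 8.0, 10.0, 2.0, 3.5)
    noisy = true + np.random.default_rng(3).normal(0, 0.3, t.size)

    p0 = (5.0, 8.0, 1.5, 3.0)                           # rough initial guess
    params, _ = curve_fit(gamma_variate, t, noisy, p0=p0, maxfev=5000)
    print("fitted (k, t0, alpha, beta):", np.round(params, 2))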
Toward Dietary Assessment via Mobile Phone Video Cameras.
Chen, Nicholas; Lee, Yun Young; Rabb, Maurice; Schatz, Bruce
2010-11-13
Reliable dietary assessment is a challenging yet essential task for determining general health. Existing efforts are manual, require considerable effort, and are prone to underestimation and misrepresentation of food intake. We propose leveraging mobile phones to make this process faster, easier and automatic. Using mobile phones with built-in video cameras, individuals capture short videos of their meals; our software then automatically analyzes the videos to recognize dishes and estimate calories. Preliminary experiments on 20 typical dishes from a local cafeteria show promising results. Our approach complements existing dietary assessment methods to help individuals better manage their diet to prevent obesity and other diet-related diseases.
Larrabide, Ignacio; Cruz Villa-Uriol, Maria; Cárdenes, Rubén; Pozo, Jose Maria; Macho, Juan; San Roman, Luis; Blasco, Jordi; Vivas, Elio; Marzo, Alberto; Hose, D Rod; Frangi, Alejandro F
2011-05-01
Morphological descriptors are practical and essential biomarkers for diagnosis and treatment selection in intracranial aneurysm management according to the current guidelines in use. Nevertheless, relatively little work has been dedicated to improving the three-dimensional quantification of aneurysmal morphology, automating the analysis, and hence reducing the inherent intra- and interobserver variability of manual analysis. In this paper we propose a methodology for the automated isolation and morphological quantification of saccular intracranial aneurysms based on a 3D representation of the vascular anatomy. This methodology is based on the analysis of the topology of the vasculature's skeleton and the subsequent application of concepts from deformable cylinders. These are expanded inside the parent vessel to identify different regions and discriminate the aneurysm sac from the parent vessel wall. The method renders as output the surface representation of the isolated aneurysm sac, which can then be quantified automatically. The proposed method provides the means for identifying the aneurysm neck in a deterministic way. The results obtained by the method were assessed in two ways: they were compared to manual measurements obtained by three independent clinicians, as normally done during diagnosis, and to automated measurements from manually isolated aneurysms by three independent operators, non-clinicians, expert in vascular image analysis. All measurements were obtained using in-house tools. The results were qualitatively and quantitatively compared for a set of saccular intracranial aneurysms (n = 26). Measurements performed on a synthetic phantom showed that the automated measurements obtained from manually isolated aneurysms were the most accurate. The differences between the clinicians' measurements and those from the manually isolated sacs were statistically significant (neck width: p < 0.001, sac height: p = 0.002). When comparing the clinicians' measurements to automatically isolated sacs, only the differences for the neck width were significant (neck width: p < 0.001, sac height: p = 0.95). However, correlation and agreement were found between the measurements obtained from manually and automatically isolated aneurysms (neck width: p = 0.43, sac height: p = 0.95). The proposed method allows the automated isolation of intracranial aneurysms, eliminating interobserver variability. On average, the computational cost of the automated method (2 min 36 s) was similar to the time required by a manual operator (measurement by clinicians: 2 min 51 s, manual isolation: 2 min 21 s), but without human interaction. The automated measurements are independent of the viewing angle, eliminating any bias or difference between observers' criteria. Finally, the qualitative assessment of the results showed acceptable agreement between manually and automatically isolated aneurysms.
Automatic segmentation and supervised learning-based selection of nuclei in cancer tissue images.
Nandy, Kaustav; Gudla, Prabhakar R; Amundsen, Ryan; Meaburn, Karen J; Misteli, Tom; Lockett, Stephen J
2012-09-01
Analysis of preferential localization of certain genes within the cell nuclei is emerging as a new technique for the diagnosis of breast cancer. Quantitation requires accurate segmentation of 100-200 cell nuclei in each tissue section to draw a statistically significant result. Thus, for large-scale analysis, manual processing is too time consuming and subjective. Fortuitously, acquired images generally contain many more nuclei than are needed for analysis. Therefore, we developed an integrated workflow that selects, following automatic segmentation, a subpopulation of accurately delineated nuclei for positioning of fluorescence in situ hybridization-labeled genes of interest. Segmentation was performed by a multistage watershed-based algorithm and screening by an artificial neural network-based pattern recognition engine. The performance of the workflow was quantified in terms of the fraction of automatically selected nuclei that were visually confirmed as well segmented and by the boundary accuracy of the well-segmented nuclei relative to a 2D dynamic programming-based reference segmentation method. Application of the method was demonstrated for discriminating normal and cancerous breast tissue sections based on the differential positioning of the HES5 gene. Automatic results agreed with manual analysis in 11 out of 14 cancers, all four normal cases, and all five noncancerous breast disease cases, thus showing the accuracy and robustness of the proposed approach. Published 2012 Wiley Periodicals, Inc.
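As a rough illustration of the segmentation stage, the sketch below runs a marker-controlled watershed on a synthetic blob image to split touching nuclei; it omits the paper's multistage algorithm and the neural-network screening step, and all parameters are illustrative.

    # Marker-controlled watershed segmentation of synthetic "nuclei".
    import numpy as np
    from scipy import ndimage
    from skimage.feature import peak_local_max
    from skimage.measure import label
    from skimage.segmentation import watershed

    rng = np.random.default_rng(10)
    img = np.zeros((128, 128))
    for cy, cx in rng.integers(16, 112, size=(12, 2)):   # 12 synthetic nuclei
        yy, xx = np.ogrid[:128, :128]
        img = np.maximum(img, np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 60.0))

    mask = img > 0.3                                     # foreground nuclei
    distance = ndimage.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=8, labels=label(mask))
    markers = np.zeros(img.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    segments = watershed(-distance, markers, mask=mask)  # split touching nuclei
    print("segmented nuclei:", segments.max())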
Urschler, Martin; Grassegger, Sabine; Štern, Darko
2015-01-01
Age estimation of individuals is important in human biology and has various medical and forensic applications. Recent interest in MR-based methods aims to investigate alternatives to established methods involving ionising radiation. Automatic, software-based methods additionally promise improved estimation objectivity. Our aim was to investigate how informative automatically selected image features are regarding their ability to discriminate age, by exploring a recently proposed software-based age estimation method for MR images of the left hand and wrist. One hundred and two MR datasets of left hand images were used to evaluate age estimation performance; the method consists of bone and epiphyseal gap volume localisation, computation of one age regression model per bone mapping image features to age, and fusion of the individual bone age predictions into a final age estimate. Quantitative results of the software-based method show an age estimation performance with a mean absolute difference of 0.85 years (SD = 0.58 years) from chronological age, as determined by a cross-validation experiment. Qualitatively, we demonstrate how feature selection works and which image features of skeletal maturation are automatically chosen to model the non-linear regression function. The feasibility of automatic age estimation based on MRI data is shown, and the selected image features are found to be informative for describing anatomical changes during physical maturation in male adolescents.
The Eras and Trends of Automatic Short Answer Grading
ERIC Educational Resources Information Center
Burrows, Steven; Gurevych, Iryna; Stein, Benno
2015-01-01
Automatic short answer grading (ASAG) is the task of assessing short natural language responses to objective questions using computational methods. The active research in this field has increased enormously of late with over 80 papers fitting a definition of ASAG. However, the past efforts have generally been ad-hoc and non-comparable until…
9 Is Always on Top: Assessing the Automaticity of Synaesthetic Number-Forms
ERIC Educational Resources Information Center
Jarick, Michelle; Dixon, Michael J.; Smilek, Daniel
2011-01-01
For number-form synaesthetes, digits occupy idiosyncratic spatial locations. Atypical to the mental number line that extends horizontally, the synaesthete (L) experiences the numbers 1-10 vertically. We used a spatial cueing task to demonstrate that L's attention could be automatically directed to locations within her number-space--being faster to…
The Influence of Inattention on Rapid Automatized Naming and Reading Skills
ERIC Educational Resources Information Center
Pham, Andy V.
2010-01-01
The purpose of this study is to determine how behavioral symptoms of inattention predict rapid automatized naming (RAN) performance and reading skills in typically developing children. Participants included 104 third- and fourth-grade children from different elementary schools in mid-Michigan. RAN performance was assessed using the four Rapid…
A boosted optimal linear learner for retinal vessel segmentation
NASA Astrophysics Data System (ADS)
Poletti, E.; Grisan, E.
2014-03-01
Ocular fundus images provide important information about retinal degeneration, which may be related to acute pathologies or to early signs of systemic diseases. An automatic and quantitative assessment of vessel morphological features, such as diameters and tortuosity, can improve clinical diagnosis and the evaluation of retinopathy. At variance with available methods, we propose a data-driven approach, in which the system learns a set of optimal discriminative convolution kernels (linear learner). The set is progressively built based on an AdaBoost sample weighting scheme, providing seamless integration between linear learner estimation and classification. In order to capture the changes in vessel appearance at different scales, the kernels are estimated on a pyramidal decomposition of the training samples. The set is employed as a rotating bank of matched filters, whose response is used by the boosted linear classifier to classify each image pixel into the two classes of interest (vessel/background). We tested the approach on fundus images from the DRIVE dataset, and show that the segmentation performance yields an accuracy of 0.94.
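A rotating bank of matched filters can be sketched as follows, with simple zero-mean oriented line detectors standing in for the learned optimal kernels; the pyramidal decomposition and boosted classifier are omitted, and the threshold is a crude placeholder.

    # Rotating bank of matched filters for vessel enhancement; the kernels
    # here are plain oriented line detectors, not the paper's learned kernels.
    import numpy as np
    from scipy import ndimage

    def oriented_kernel(length=9, angle_deg=0.0):
        k = np.zeros((length, length))
        k[length // 2, :] = 1.0                       # horizontal line detector
        k -= k.mean()                                 # zero-mean, matched-filter style
        return ndimage.rotate(k, angle_deg, reshape=False, order=1)

    def vessel_response(image, n_angles=12):
        responses = [ndimage.convolve(image, oriented_kernel(angle_deg=a))
                     for a in np.linspace(0, 180, n_angles, endpoint=False)]
        return np.max(responses, axis=0)              # best orientation per pixel

    img = np.random.default_rng(4).random((64, 64))   # stand-in for a fundus image
    resp = vessel_response(img)
    vessel_mask = resp > np.percentile(resp, 95)      # crude threshold for the sketch
    print(vessel_mask.mean())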
Raina, A; Hennessy, R; Rains, M; Allred, J; Hirshburg, J M; Diven, D G; Markey, M K
2016-08-01
Traditional metrics for evaluating the severity of psoriasis are subjective, which complicates efforts to measure treatment efficacy in clinical trials. We collected images of psoriasis plaques and calibrated the coloration of the images according to an included color card. Features were extracted from the images and used to train a linear discriminant analysis classifier, with cross-validation, to automatically classify the degree of erythema. The results were tested against numerical scores obtained by a panel of dermatologists using a standard rating system. Quantitative measures of erythema based on the digital color images showed good agreement with subjective assessment of erythema severity (κ = 0.4203). The color calibration process improved the agreement from κ = 0.2364 to κ = 0.4203. We propose a method for the objective measurement of the psoriasis severity parameter of erythema and show that the calibration process improved the results. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
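The classification-and-agreement pipeline maps onto standard tooling; the sketch below trains a linear discriminant analysis classifier with cross-validation and scores agreement against panel grades with Cohen's kappa, using synthetic features in place of the calibrated colour measurements.

    # LDA with cross-validated predictions, scored by Cohen's kappa against
    # panel grades; features and labels are synthetic stand-ins.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(5)
    X = rng.normal(size=(120, 6))                     # e.g. calibrated colour stats
    grades = rng.integers(0, 4, 120)                  # erythema grade 0-3 (hypothetical)
    X[np.arange(120), 0] += grades                    # inject a weak class signal

    pred = cross_val_predict(LinearDiscriminantAnalysis(), X, grades, cv=5)
    print("kappa vs. panel grades:", round(cohen_kappa_score(grades, pred), 3))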
Mechanisms of masked priming: a meta-analysis.
Van den Bussche, Eva; Van den Noortgate, Wim; Reynvoet, Bert
2009-05-01
The extent to which unconscious information can influence behavior has been a topic of considerable debate throughout the history of psychology. A frequently used method for studying subliminal processing is the masked priming paradigm. The authors focused on studies in which this paradigm was used. Their aim was twofold: first, to assess the magnitude of subliminal priming across the literature and to determine whether subliminal primes are processed semantically, and second, to examine potential moderators of priming effects. The authors found significant priming in their analyses, indicating that unconsciously presented information can influence behavior. Furthermore, priming was observed under circumstances in which a nonsemantic interpretation could not fully explain the effects, suggesting that subliminally presented information can be processed semantically. Nonetheless, the nonsemantic processing of primes is enhanced and priming effects are boosted when the experimental context allows the formation of automatic stimulus-response mappings. This quantitative review also revealed several moderators that influence the strength of priming. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
The psychology of martyrdom: making the ultimate sacrifice in the name of a cause.
Bélanger, Jocelyn J; Caouette, Julie; Sharvit, Keren; Dugas, Michelle
2014-09-01
Martyrdom is defined as the psychological readiness to suffer and sacrifice one's life for a cause. An integrative set of 8 studies investigated the concept of martyrdom by creating a new tool to quantitatively assess individuals' propensity toward self-sacrifice. Studies 1A-1C consisted of psychometric work attesting to the scale's unidimensionality, internal consistency, and temporal stability while examining its nomological network. Studies 2A-2B focused on the scale's predictive validity, especially as it relates to extreme behaviors and suicidal terrorism. Studies 3-5 focused on the influence of self-sacrifice on automatic decision making, costly and altruistic behaviors, and morality judgments. Results involving more than 2,900 participants from different populations, including a terrorist sample, supported the proposed conceptualization of martyrdom and demonstrated its importance for a vast repertoire of cognitive, emotional, and behavioral phenomena. Implications and future directions for the psychology of terrorism are discussed. 2014 APA, all rights reserved
Automated peroperative assessment of stents apposition from OCT pullbacks.
Dubuisson, Florian; Péry, Emilie; Ouchchane, Lemlih; Combaret, Nicolas; Kauffmann, Claude; Souteyrand, Géraud; Motreff, Pascal; Sarry, Laurent
2015-04-01
This study's aim was to assess stent apposition by automatically analyzing endovascular optical coherence tomography (OCT) sequences. The lumen is detected using threshold, morphological and gradient operators to run a Dijkstra algorithm. Wrong detections tagged by the user, caused by bifurcations, the presence of struts, thrombotic lesions or dissections, can be corrected using a morphing algorithm. Struts are also segmented by computing symmetrical and morphological operators. The Euclidean distance between the detected struts and the artery wall initializes a complete distance map of the stent, and missing data are interpolated with thin-plate spline functions. Rejection of detected outliers, regularization of parameters by generalized cross-validation, and use of the one-sided cyclic property of the map further optimize accuracy. Several indices computed from the map provide quantitative values of malapposition. The algorithm was run on four in-vivo OCT sequences including different cases of incomplete stent apposition. Comparison with manual expert measurements validates the segmentation's accuracy and shows an almost perfect concordance of the automated results. Copyright © 2014 Elsevier Ltd. All rights reserved.
A Smart Unconscious? Procedural Origins of Automatic Partner Attitudes in Marriage
Murray, Sandra L.; Holmes, John G.; Pinkus, Rebecca T.
2010-01-01
The paper examines potential origins of automatic (i.e., unconscious) attitudes toward one’s marital partner. It tests the hypothesis that early experiences in conflict-of-interest situations predict one’s later automatic inclination to approach (or avoid) the partner. A longitudinal study linked daily experiences in conflict-of-interest situations in the initial months of new marriages to automatic evaluations of the partner assessed four years later using the Implicit Associations Test. The results revealed that partners who were initially (1) treated less responsively and (2) evidenced more self-protective and less connectedness-promoting “if-then” contingencies in their thoughts and behavior later evidenced less positive automatic partner attitudes. However, these factors did not predict changes in love, satisfaction, or explicit beliefs about the partner. The findings hint at the existence of a “smart” relationship unconscious that captures behavioral realities conscious reflection can miss. PMID:20526450
Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Green, Vanessa A; Alberti, Gloria; Boccasini, Adele; Smaldone, Angela; Oliva, Doretta; Bosco, Andrea
2014-08-01
These two single-case studies assessed automatic feedback technologies to promote safe travel and speech loudness control, respectively, in two men with multiple disabilities. In Study I, the technology involved a microprocessor, two photocells, and a verbal feedback device. The man received verbal alerting/feedback when the photocells spotted an obstacle in front of him. In Study II, the technology involved a sound-detecting unit connected to a throat microphone and an airborne microphone, and to a vibration device. Vibration occurred when the man's speech loudness exceeded a preset level. The man included in Study I succeeded in using the automatic feedback as a substitute for caregivers' alerting/feedback for safe travel. The man in Study II used the automatic feedback to successfully reduce his speech loudness. Automatic feedback can be highly effective in helping persons with multiple disabilities improve their travel and speech performance.
Ozaki, Yu-ichi; Uda, Shinsuke; Saito, Takeshi H; Chung, Jaehoon; Kubota, Hiroyuki; Kuroda, Shinya
2010-04-01
Modeling of cellular functions on the basis of experimental observation is increasingly common in the field of cellular signaling. However, such modeling requires a large amount of quantitative data on signaling events with high spatio-temporal resolution, and a novel technique for obtaining such data is needed for the systems biology of cellular signaling. We developed a fully automatable assay technique, termed quantitative image cytometry (QIC), which integrates a quantitative immunostaining technique and a high-precision image-processing algorithm for cell identification. With the aid of an automated sample preparation system, this device can quantify protein expression, phosphorylation and localization with subcellular resolution at one-minute intervals. The signaling activities quantified by the assay system showed good correlation with, as well as reproducibility comparable to, western blot analysis. Taking advantage of the high spatio-temporal resolution, we investigated the signaling dynamics of the ERK pathway in PC12 cells. The QIC technique thus appears to be a highly quantitative and versatile technique, which can be a convenient replacement for the most common conventional techniques, including western blot, flow cytometry and live cell imaging. Thus, the QIC technique can be a powerful tool for investigating the systems biology of cellular signaling.
Implementation of an interactive liver surgery planning system
NASA Astrophysics Data System (ADS)
Wang, Luyao; Liu, Jingjing; Yuan, Rong; Gu, Shuguo; Yu, Long; Li, Zhitao; Li, Yanzhao; Li, Zhen; Xie, Qingguo; Hu, Daoyu
2011-03-01
Liver tumor, one of the most widespread diseases, has a very high mortality in China. To improve the success rates of liver surgeries and the quality of life of such patients, we implemented an interactive liver surgery planning system based on contrast-enhanced liver CT images. The system consists of five modules: pre-processing, segmentation, modeling, quantitative analysis and surgery simulation. The graph cuts method is utilized to automatically segment the liver, based on the anatomical prior knowledge that the liver is the largest organ and has an almost homogeneous gray value. The system supports users in building patient-specific liver segment and sub-segment models using interactive portal vein branch labeling, and in performing anatomical resection simulation. It also provides several tools to simulate atypical resection, including a resection plane, sphere and curved surface. To match actual surgical resections well and simulate the process flexibly, we extended our work to develop a virtual scalpel model and simulate the scalpel movement in the hepatic tissue using multi-plane continuous resection. In addition, the quantitative analysis module makes it possible to assess the risk of a liver surgery. The preliminary results show that the system has the potential to offer an accurate 3D delineation of the liver anatomy, as well as of the tumors' location in relation to vessels, and to facilitate liver resection surgeries. Furthermore, we are testing the system in a full-scale clinical trial.
A novel ultrasonic method for measuring breast density and breast cancer risk
NASA Astrophysics Data System (ADS)
Glide-Hurst, Carri K.; Duric, Neb; Littrup, Peter J.
2008-03-01
Women with high mammographic breast density are at 4- to 6-fold increased risk of developing breast cancer compared to women with fatty breasts. However, current breast density estimations rely on mammography, which cannot provide accurate volumetric breast representation. Therefore, we explored two techniques of breast density evaluation via ultrasound tomography. A sample of 93 patients was imaged with our clinical prototype; each dataset contained 45-75 tomograms ranging from near the chest wall through the nipple. Whole breast acoustic velocity was determined by creating image stacks and evaluating the sound speed frequency distribution. Ultrasound percent density (USPD) was determined by segmenting high sound speed areas from each tomogram using k-means clustering, integrating over the entire breast, and dividing by total breast area. Both techniques were independently evaluated using two mammographic density measures: (1) qualitative, determined by a radiologist's visual assessment using BI-RADS Categories, and (2) quantitative, via semi-automatic segmentation to calculate mammographic percent density (MPD) for craniocaudal and medio-lateral oblique mammograms. ~140 m/s difference in acoustic velocity was observed between fatty and dense BI-RADS Categories. Increased sound speed was found with increased BI-RADS Category and quantitative MPD. Furthermore, strong positive associations between USPD, BI-RADS Category, and calculated MPD were observed. These results confirm that utilizing sound speed, both for whole-breast evaluation and segmenting locally, can be implemented to evaluate breast density.
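The ultrasound percent density computation can be sketched with a two-cluster k-means per tomogram, the higher sound-speed cluster taken as dense tissue; the slice data and the two-cluster choice are assumptions for illustration.

    # USPD sketch: k-means separates high from low sound-speed pixels in each
    # tomogram; the dense fraction is integrated over the whole breast stack.
    import numpy as np
    from sklearn.cluster import KMeans

    def uspd(tomograms):
        dense, total = 0, 0
        for sound_speed in tomograms:                       # one 2-D map per slice
            v = sound_speed.reshape(-1, 1)
            km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(v)
            dense_cluster = int(np.argmax(km.cluster_centers_.ravel()))
            dense += (km.labels_ == dense_cluster).sum()    # high-speed pixels
            total += v.size
        return 100.0 * dense / total

    rng = np.random.default_rng(6)
    slices = [1450 + 60 * (rng.random((32, 32)) > 0.7) + rng.normal(0, 5, (32, 32))
              for _ in range(10)]                           # m/s, fatty vs dense mix
    print(f"USPD: {uspd(slices):.1f}%")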
NASA Astrophysics Data System (ADS)
Wei, David Wei; Deegan, Anthony J.; Wang, Ruikang K.
2017-06-01
When using optical coherence tomography angiography (OCTA), the development of artifacts due to involuntary movements can severely compromise the visualization and subsequent quantitation of tissue microvasculatures. To correct such an occurrence, we propose a motion compensation method to eliminate artifacts from human skin OCTA by means of step-by-step rigid affine registration, rigid subpixel registration, and nonrigid B-spline registration. To accommodate this remedial process, OCTA is conducted using two matching all-depth volume scans. Affine transformation is first performed on the large vessels of the deep reticular dermis, and then the resulting affine parameters are applied to all-depth vasculatures with a further subpixel registration to refine the alignment between superficial smaller vessels. Finally, the coregistration of both volumes is carried out to result in the final artifact-free composite image via an algorithm based upon cubic B-spline free-form deformation. We demonstrate that the proposed method can provide a considerable improvement to the final en face OCTA images with substantial artifact removal. In addition, the correlation coefficients and peak signal-to-noise ratios of the corrected images are evaluated and compared with those of the original images, further validating the effectiveness of the proposed method. We expect that the proposed method can be useful in improving qualitative and quantitative assessment of the OCTA images of scanned tissue beds.
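The rigid subpixel stage of such a registration cascade can be sketched with phase cross-correlation; the affine and B-spline stages are assumed to run before and after it, and the frames below are synthetic.

    # Subpixel rigid registration of two matched frames via phase
    # cross-correlation, then correction of the estimated shift.
    import numpy as np
    from scipy import ndimage
    from skimage.registration import phase_cross_correlation

    rng = np.random.default_rng(7)
    reference = ndimage.gaussian_filter(rng.random((128, 128)), 2)
    moving = ndimage.shift(reference, (3.4, -1.7))        # simulated motion artifact

    shift, error, _ = phase_cross_correlation(reference, moving, upsample_factor=20)
    corrected = ndimage.shift(moving, shift)              # apply the estimated shift
    print("estimated shift:", np.round(shift, 2))         # ~ (3.4, -1.7)
    print("residual RMS:", round(float(np.sqrt(((reference - corrected) ** 2).mean())), 4))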
van Harmelen, Anne-Laura; de Jong, Peter J; Glashouwer, Klaske A; Spinhoven, Philip; Penninx, Brenda W J H; Elzinga, Bernet M
2010-06-01
Individuals reporting Childhood Abuse (CA) (i.e., emotional neglect, emotional-, physical- and sexual-abuse) are marked by increased explicit (i.e. self-reported) negative self-associations, and an increased risk to develop depression or anxiety disorders. Automatic self-associations might play an important role in the development and maintenance of affective disorders after exposure to CA, since automatic associations are assumed to be involved in uncontrolled (spontaneous) affective behavior. This study examined whether individuals reporting a history of CA show stronger automatic (and explicit) self-depression and/or self-anxiety associations than individuals who report no CA in a large cohort study (Netherlands Study of Depression and Anxiety (NESDA), n = 2981). The Implicit Association Test (IAT) was utilized to assess automatic self-depression and self-anxiety associations. We found that CA was associated with enhanced automatic (and explicit) self-depression and self-anxiety associations. Additionally, when compared to physical- and sexual-abuse, Childhood Emotional Maltreatment (CEM; emotional abuse and emotional neglect) had the strongest link with enhanced automatic (and explicit) self-depression and self-anxiety associations. In addition, automatic and explicit negative self-associations partially mediated the association between CEM and depressive or anxious symptomatology. Implications regarding the importance of CA, and CEM in particular will be discussed. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Miró, Manuel; Jimoh, Modupe; Frenzel, Wolfgang
2005-05-01
In this paper, a novel concept is presented for automatic microsampling and continuous monitoring of metal ions in soils with minimum disturbance of the sampling site. It involves a hollow-fiber microdialyser implanted in the soil body as a miniaturized sensing device. The idea behind microdialysis in this application is to mimic the function of a passive sampler to predict the actual, rather than potential, mobility and bioavailability of metal traces. Although almost quantitative dialysis recoveries were obtained for lead (≥98%) from aqueous model solutions with sufficiently long capillaries (l ≥ 30 mm, 200 µm i.d.) at perfusion rates of 2.0 µL min⁻¹, the resistance of an inert soil matrix was found to reduce metal uptake by 30%. Preliminary investigations of the potential of the microdialysis analyser for risk assessment of soil pollution, and for metal partitioning studies, were performed by implanting the dedicated probe in a laboratory-made soil column and hyphenating it with electrothermal atomic absorption spectrometry (ETAAS), so that minute, well-defined volumes of clean microdialysates were injected on-line into the graphite furnace. A noteworthy feature of the implanted microdialysis-based device is its capability to follow the kinetics of metal release under simulated natural scenarios or anthropogenic actions. An ancillary flow set-up was arranged in such a way that a continuous flow of leaching solution - mild extractant (10⁻² mol L⁻¹ CaCl₂), acidic solution (10⁻³ mol L⁻¹ HNO₃), or chelating agent (10⁻⁴ or 10⁻² mol L⁻¹ EDTA) - was maintained through the soil body, while the concentration trends of inorganic (unbound) metal species at the soil-liquid interface could be monitored in near real-time. Hence, relevant qualitative and quantitative information about the various mobile fractions is obtained, and metal-soil phase associations can also be elucidated. Finally, stimulus-response schemes adapted from neurochemical applications and pharmacokinetic studies are to be extended to soil research as an alternative means of local monitoring of extraction processes after induction of a chemical change at the outer boundary of the permselective dialysis membrane.
Oghli, Mostafa Ghelich; Dehlaghi, Vahab; Zadeh, Ali Mohammad; Fallahi, Alireza; Pooyan, Mohammad
2014-07-01
Assessment of cardiac right-ventricle function plays an essential role in the diagnosis of arrhythmogenic right ventricular dysplasia (ARVD). Among clinical tests, cardiac magnetic resonance imaging (MRI) is now becoming the most valid imaging technique for diagnosing ARVD. Fatty infiltration of the right ventricular free wall can be visible on cardiac MRI. Deriving right-ventricle functional parameters from cardiac MRI images involves segmentation of the right ventricle in each slice of the end-diastole and end-systole phases of the cardiac cycle, and calculation of the end-diastolic and end-systolic volumes as well as other functional parameters. The main challenge in this task is the segmentation step. We used a robust method based on a deformable model that uses shape information for segmentation of the right ventricle in short-axis MRI images. After segmentation of the right ventricle from base to apex in the end-diastole and end-systole phases of the cardiac cycle, the volume of the right ventricle in these phases is calculated and the ejection fraction derived. We performed a quantitative evaluation of clinical cardiac parameters derived from the automatic segmentation by comparison against a manual delineation of the ventricles. The manually and automatically determined quantitative clinical parameters were statistically compared by means of linear regression, which fits a line to the data such that the root-mean-square error (RMSE) of the residuals is minimized. The results show a low RMSE for right ventricle ejection fraction and volume (≤ 0.06 for RV EF, and ≤ 10 mL for RV volume). Evaluation of the segmentation results is also done by means of four statistical measures: sensitivity, specificity, similarity index and Jaccard index. The average value of the similarity index is 86.87%. The mean value of the Jaccard index is 83.85%, which shows good segmentation accuracy. The average sensitivity is 93.9% and the mean specificity is 89.45%. These results show the reliability of the proposed method in cases where manual segmentation is inapplicable. The huge shape variability of the right ventricle led us to use a shape-prior-based method, and this work can be developed further with four-dimensional processing for determining the first ventricular slices.
NASA Astrophysics Data System (ADS)
Keller, Brad M.; Gastounioti, Aimilia; Batiste, Rebecca C.; Kontos, Despina; Feldman, Michael D.
2016-03-01
Visual characterization of histologic specimens is known to suffer from intra- and inter-observer variability. To help address this, we developed an automated framework for characterizing digitized histology specimens based on a novel application of color histogram and color texture analysis. We perform a preliminary evaluation of this framework using a set of 73 trichrome-stained, digitized slides of normal breast tissue which were visually assessed by an expert pathologist in terms of the percentage of collagenous stroma, stromal collagen density, duct-lobular unit density and the presence of elastosis. For each slide, our algorithm automatically segments the tissue region based on the lightness channel in CIELAB colorspace. Within each tissue region, a color histogram feature vector is extracted using a common color palette for trichrome images generated with a previously described method. Then, using a whole-slide, lattice-based methodology, color texture maps are generated using a set of color co-occurrence matrix statistics: contrast, correlation, energy and homogeneity. The extracted feature sets are compared to the visually assessed tissue characteristics. Overall, the extracted texture features have high correlations to both the percentage of collagenous stroma (r=0.95, p<0.001) and duct-lobular unit density (r=0.71, p<0.001) seen in the tissue samples, and several individual features were associated with either collagen density and/or the presence of elastosis (p ≤ 0.05). This suggests that the proposed framework has promise as a means to quantitatively extract descriptors reflecting tissue-level characteristics and thus could be useful in detecting and characterizing histological processes in digitized histology specimens.
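The four co-occurrence statistics listed can be computed per color channel with standard tools. A minimal sketch using scikit-image, with a random tile standing in for one lattice cell of a digitized slide:

```python
# Hedged sketch: gray-level co-occurrence statistics computed per color
# channel as a stand-in for the paper's color co-occurrence analysis.
# The random image and lattice-cell size are placeholders.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
tile = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)  # one lattice cell

features = {}
for c, name in enumerate(("R", "G", "B")):
    glcm = graycomatrix(tile[:, :, c], distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    for prop in ("contrast", "correlation", "energy", "homogeneity"):
        # Average over the two offsets to get one value per statistic.
        features[f"{name}_{prop}"] = graycoprops(glcm, prop).mean()

print(features)
```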
NASA Astrophysics Data System (ADS)
Jose, Abin; Haak, Daniel; Jonas, Stephan; Brandenburg, Vincent; Deserno, Thomas M.
2015-03-01
Photographic documentation and image-based wound assessment are frequently performed in medical diagnostics, patient care, and clinical research. To support quantitative assessment, photographic imaging is based on expensive and high-quality hardware and still needs appropriate registration and calibration. Using inexpensive consumer hardware such as smartphone-integrated cameras, calibration of geometry, color, and contrast is challenging. Some methods involve color calibration using a reference pattern such as a standard color card, which is located manually in the photographs. In this paper, we adapt the lattice detection algorithm of Park et al., developed for real-world scenes, to medical photographs. First, the algorithm extracts and clusters feature points according to their local intensity patterns. Groups of similar points are fed into a selection process, which tests for suitability as a lattice grid. The group most probably forming the meshes of a lattice is selected, and from it a template for an initial lattice cell is extracted. Then, a Markov random field is modeled. Using mean-shift belief propagation, the detection of the 2D lattice is solved iteratively as a spatial tracking problem. Least-squares geometric calibration of projective distortions and non-linear color calibration in RGB space are supported by the 35 corner points and the 24 color patches, respectively. The method was tested on 37 photographs taken from the German Calciphylaxis registry, where non-standardized photographic documentation is collected nationwide from all contributing trial sites. In all images, the reference card location is correctly identified, and at least 28 of the 35 lattice points were detected, outperforming the previously applied SIFT-based approach. Based on these coordinates, robust geometry and color registration is performed, making the photographs comparable for quantitative analysis.
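The least-squares projective calibration step can be illustrated with a homography fit to the detected corner points. A sketch with synthetic detections (the warp, jitter, and grid geometry are invented):

```python
# Hedged sketch: least-squares projective (homography) calibration from
# detected lattice corner points, in the spirit of the geometric step
# described above. Detected/reference coordinates here are synthetic.
import numpy as np
import cv2

# 35 reference corner coordinates of an ideal color-card grid (7 x 5).
xs, ys = np.meshgrid(np.arange(7) * 50.0, np.arange(5) * 50.0)
ref = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)

# Simulated detections: a known projective warp plus small jitter.
H_true = np.array([[0.9, 0.05, 20.0], [-0.03, 1.1, 10.0], [1e-4, 2e-4, 1.0]])
homog = np.hstack([ref, np.ones((len(ref), 1), dtype=np.float32)])
proj = homog @ H_true.T
detected = (proj[:, :2] / proj[:, 2:]).astype(np.float32)
detected += np.random.default_rng(1).normal(0, 0.3, detected.shape).astype(np.float32)

# Least-squares homography; cv2.warpPerspective would then rectify the photo.
H, _ = cv2.findHomography(detected, ref, method=0)
print(np.round(H / H[2, 2], 3))
```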
NASA Astrophysics Data System (ADS)
Lederman, Dror; Leader, Joseph K.; Zheng, Bin; Sciurba, Frank C.; Tan, Jun; Gur, David
2011-03-01
Quantitative computed tomography (CT) has been widely used to detect and evaluate the presence (or absence) of emphysema by applying density masks at specific thresholds, e.g., -910 or -950 Hounsfield units (HU). However, it has also been observed that subjects with similar density-mask-based emphysema scores can have varying lung function, possibly indicating differences in disease severity. To assess this possible discrepancy, we investigated whether the density distribution of "viable" lung parenchyma regions with pixel values > -910 HU correlates with lung function. A dataset of 38 subjects, who underwent both pulmonary function testing and CT examinations in a COPD SCCOR study, was assembled. After the lung regions depicted on CT images were automatically segmented by a computerized scheme, we systematically divided the lung parenchyma into different density groups (bins) and computed a number of statistical features (i.e., mean, standard deviation (STD), and skewness of the pixel value distributions) in these density bins. We then analyzed the correlations between each feature and lung function. The correlation between the diffusing capacity of the lung for carbon monoxide (DLCO) and the STD of pixel values in the bin -910 HU ≤ PV < -750 HU was -0.43, compared with a correlation of -0.49 between the post-bronchodilator ratio of forced expiratory volume in 1 second (FEV1) to forced vital capacity (FVC) and the STD of pixel values in the bin -1024 HU ≤ PV < -910 HU. The results showed an association between the distribution of pixel values in "viable" lung parenchyma and lung function, which indicates that, similar to the conventional density mask method, pixel value distribution features in "viable" lung parenchyma areas may also provide clinically useful information to improve assessments of lung disease severity as measured by lung function tests.
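The per-bin feature computation and correlation analysis reduce to a few lines. A sketch with synthetic HU distributions and placeholder DLCO values (the data and any resulting correlation are illustrative only):

```python
# Hedged sketch: per-bin density statistics of "viable" lung parenchyma and
# their correlation with a lung-function measure, following the abstract's
# scheme. HU values and DLCO numbers below are synthetic placeholders.
import numpy as np
from scipy.stats import skew, pearsonr

rng = np.random.default_rng(42)

def bin_features(lung_hu, lo, hi):
    """Mean, STD and skewness of pixel values inside one density bin."""
    pv = lung_hu[(lung_hu >= lo) & (lung_hu < hi)]
    return pv.mean(), pv.std(), skew(pv)

n_subjects = 38
stds = []
dlco = rng.normal(70, 15, n_subjects)                 # placeholder DLCO values
for i in range(n_subjects):
    # One toy HU distribution per segmented lung, loosely tied to DLCO.
    lung = rng.normal(-860 + dlco[i] * 0.5, 90, 20000)
    _, std_910_750, _ = bin_features(lung, -910, -750)
    stds.append(std_910_750)

r, p = pearsonr(stds, dlco)
print(f"r = {r:.2f}, p = {p:.3g}")
```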
Quantitative evaluation of muscle synergy models: a single-trial task decoding approach
Delis, Ioannis; Berret, Bastien; Pozzo, Thierry; Panzeri, Stefano
2013-01-01
Muscle synergies, i.e., invariant coordinated activations of groups of muscles, have been proposed as building blocks that the central nervous system (CNS) uses to construct the patterns of muscle activity utilized for executing movements. Several efficient dimensionality reduction algorithms that extract putative synergies from electromyographic (EMG) signals have been developed. Typically, the quality of synergy decompositions is assessed by computing the Variance Accounted For (VAF). Yet, little is known about the extent to which the combination of those synergies encodes task-discriminating variations of muscle activity in individual trials. To address this question, here we conceive and develop a novel computational framework to evaluate muscle synergy decompositions in task space. Unlike previous methods considering the total variance of muscle patterns (VAF-based metrics), our approach focuses on the variance that discriminates execution of different tasks. The procedure is based on single-trial task decoding from muscle synergy activation features. The task-decoding metric quantitatively evaluates the mapping between synergy recruitment and task identification and automatically determines the minimal number of synergies that captures all the task-discriminating variability in the synergy activations. In this paper, we first validate the method on plausibly simulated EMG datasets. We then show that it can be applied to different types of muscle synergy decomposition and illustrate its applicability to real data by using it for the analysis of EMG recordings during an arm pointing task. We find that time-varying and synchronous synergies with a similar number of parameters are equally efficient in task decoding, suggesting that in this experimental paradigm they are equally valid representations of muscle synergies. Overall, these findings stress the effectiveness of the decoding metric in systematically assessing muscle synergy decompositions in task space. PMID:23471195
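The two-stage idea, synergy extraction followed by single-trial decoding, can be sketched with off-the-shelf components; NMF and LDA here are illustrative stand-ins, since the paper evaluates several decomposition models and its own decoding metric:

```python
# Hedged sketch of the two-stage scheme: extract synergies by non-negative
# matrix factorization, then decode task identity from single-trial synergy
# activations. Data, task labels, and the NMF/LDA choices are illustrative.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_muscles, n_tasks, k = 120, 12, 4, 3
labels = rng.integers(0, n_tasks, n_trials)

# Toy EMG: each task recruits k latent synergies with task-specific gains.
W_true = rng.random((n_muscles, k))
gains = rng.random((n_tasks, k)) + 0.2
emg = np.abs(gains[labels] @ W_true.T + rng.normal(0, 0.05, (n_trials, n_muscles)))

# Stage 1: synergy extraction; rows of `activations` are per-trial coefficients.
nmf = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
activations = nmf.fit_transform(emg)

# Stage 2: single-trial task decoding from the activations.
acc = cross_val_score(LinearDiscriminantAnalysis(), activations, labels, cv=5)
print(f"decoding accuracy: {acc.mean():.2f}")
```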
McCabe, David P; Roediger, Henry L; Karpicke, Jeffrey D
2011-04-01
Dual-process theories of retrieval suggest that controlled and automatic processing contribute to memory performance. Free recall tests are often considered pure measures of recollection, assessing only the controlled process. We report two experiments demonstrating that automatic processes also influence free recall. Experiment 1 used inclusion and exclusion tasks to estimate recollection and automaticity in free recall, adopting a new variant of the process dissociation procedure. Dividing attention during study selectively reduced the recollection estimate but did not affect the automatic component. In Experiment 2, we replicated the results of Experiment 1, and subjects additionally reported remember-know-guess judgments during recall in the inclusion condition. In the latter task, dividing attention during study reduced remember judgments for studied items, but know responses were unaffected. Results from both methods indicated that free recall is partly driven by automatic processes. Thus, we conclude that retrieval in free recall tests is not driven solely by conscious recollection (or remembering) but also by automatic influences of the same sort believed to drive priming on implicit memory tests. Sometimes items come to mind without volition in free recall.
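The process-dissociation estimates referred to above have a standard closed form under the usual independence assumption (the abstract does not spell out the equations):

```latex
% Standard process-dissociation estimates (independence assumption):
% recall probabilities for studied items under inclusion and exclusion.
\[
P(\text{inclusion}) = R + A(1 - R), \qquad
P(\text{exclusion}) = A(1 - R)
\]
\[
\hat{R} = P(\text{inclusion}) - P(\text{exclusion}), \qquad
\hat{A} = \frac{P(\text{exclusion})}{1 - \hat{R}}
\]
% R: controlled recollection; A: automatic influence.
```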
Automatic speech recognition technology development at ITT Defense Communications Division
NASA Technical Reports Server (NTRS)
White, George M.
1977-01-01
An assessment of the applications of automatic speech recognition to defense communication systems is presented. Future research efforts include investigations into the following areas: (1) dynamic programming; (2) recognition of speech degraded by noise; (3) speaker independent recognition; (4) large vocabulary recognition; (5) word spotting and continuous speech recognition; and (6) isolated word recognition.
Why Is Rapid Automatized Naming Related to Reading?
ERIC Educational Resources Information Center
Georgiou, George K.; Parrila, Rauno; Cui, Ying; Papadopoulos, Timothy C.
2013-01-01
The objective of this study was to examine why rapid automatized naming (RAN) is related to reading by manipulating processes involved at the input, processing, and output stages of its production. In total, 65 children in Grade 2 and 65 in Grade 6 were assessed on serial and discrete RAN (Digits and Objects), Cancellation, RAN Yes/No, and oral…
Automatic Evaluation of Practices in Moodle for Self Learning in Engineering
ERIC Educational Resources Information Center
Sánchez, Carles; Ramos, Oriol; Márquez, Patricia; Marti, Enric; Rocarias, Jaume; Gil, Debora
2015-01-01
The first years in engineering degree courses are usually made of large groups with a low teacher-student ratio. Overcrowding in classrooms hinders continuous assessment much needed to promote independent learning. Therefore, there is a need to apply some kind of automatic evaluation to facilitate the correction of exercises outside the classroom.…
Using Automatic Speech Recognition Technology with Elicited Oral Response Testing
ERIC Educational Resources Information Center
Cox, Troy L.; Davies, Randall S.
2012-01-01
This study examined the use of automatic speech recognition (ASR) scored elicited oral response (EOR) tests to assess the speaking ability of English language learners. It also examined the relationship between ASR-scored EOR and other language proficiency measures and the ability of the ASR to rate speakers without bias to gender or native…
ERIC Educational Resources Information Center
Abu-Hamour, Bashir
2013-01-01
This study examined the acceptability, reliability, and validity of the Arabic translated version of the Rapid Automatized Naming and Rapid Alternating Stimulus Tests (RAN/RAS; Wolf & Denckla, 2005) for Jordanian students. RAN/RAS tests are a vital assessment tool to distinguish good readers from poor readers. These tests have been…
ERIC Educational Resources Information Center
Okurut, Jeje Moses
2018-01-01
The impact of automatic promotion practice on students dropping out of Uganda's primary education was assessed using propensity score in difference in differences analysis technique. The analysis strategy was instrumental in addressing the selection bias problem, as well as biases arising from common trends over time, and permanent latent…
Fromberger, Peter; Jordan, Kirsten; Steinkrauss, Henrike; von Herder, Jakob; Stolpmann, Georg; Kröner-Herwig, Birgit; Müller, Jürgen Leo
2013-05-01
Recent theories of sexuality highlight the importance of automatic and controlled attentional processes in viewing sexually relevant stimuli. The model of Spiering and Everaerd (2007) assumes that sexually relevant features of a stimulus are preattentively selected and automatically induce focal attention to these sexually relevant aspects. Whether this assumption holds for pedophiles is unknown. The aim of this study was to test this assumption empirically in people with pedophilic interests. Twenty-two pedophiles, 8 nonpedophilic forensic controls, and 52 healthy controls simultaneously viewed the picture of a child and the picture of an adult while eye movements were measured. Entry time was assessed as a measure of automatic attentional processes, and relative fixation time as a measure of controlled attentional processes. Pedophiles demonstrated significantly shorter entry time to child stimuli than to adult stimuli; the opposite was the case for nonpedophiles. Nonpedophiles showed longer relative fixation time for adult stimuli and, against all expectations, pedophiles also demonstrated longer relative fixation time for adult stimuli. The results confirmed the hypothesis that pedophiles automatically select sexually relevant stimuli (children). Contrary to expectations, this automatic selection did not trigger focal attention to these sexually relevant pictures. Furthermore, pedophiles were first and longest attracted by the faces and pubic regions of children; nonpedophiles were first and longest attracted by the faces and breasts of adults. The results demonstrate, for the first time, that the face and pubic region are the most attracting regions of children for pedophiles.
Cordella, Claire; Dickerson, Bradford C.; Quimby, Megan; Yunusova, Yana; Green, Jordan R.
2016-01-01
Background Primary progressive aphasia (PPA) is a neurodegenerative aphasic syndrome with three distinct clinical variants: non-fluent (nfvPPA), logopenic (lvPPA), and semantic (svPPA). Speech (non-) fluency is a key diagnostic marker used to aid identification of the clinical variants, and researchers have been actively developing diagnostic tools to assess speech fluency. Current approaches reveal coarse differences in fluency between subgroups, but often fail to clearly differentiate nfvPPA from the variably fluent lvPPA. More robust subtype differentiation may be possible with finer-grained measures of fluency. Aims We sought to identify the quantitative measures of speech rate—including articulation rate and pausing measures—that best differentiated PPA subtypes, specifically the non-fluent group (nfvPPA) from the more fluent groups (lvPPA, svPPA). The diagnostic accuracy of the quantitative speech rate variables was compared to that of a speech fluency impairment rating made by clinicians. Methods and Procedures Automatic estimates of pause and speech segment durations and rate measures were derived from connected speech samples of participants with PPA (N=38; 11 nfvPPA, 14 lvPPA, 13 svPPA) and healthy age-matched controls (N=8). Clinician ratings of fluency impairment were made using a previously validated clinician rating scale developed specifically for use in PPA. Receiver operating characteristic (ROC) analyses enabled a quantification of diagnostic accuracy. Outcomes and Results Among the quantitative measures, articulation rate was the most effective for differentiating between nfvPPA and the more fluent lvPPA and svPPA groups. The diagnostic accuracy of both speech and articulation rate measures was markedly better than that of the clinician rating scale, and articulation rate was the best classifier overall. Area under the curve (AUC) values for articulation rate were good to excellent for identifying nfvPPA from both svPPA (AUC=.96) and lvPPA (AUC=.86). Cross-validation of accuracy results for articulation rate showed good generalizability outside the training dataset. Conclusions Results provide empirical support for (1) the efficacy of quantitative assessments of speech fluency and (2) a distinct non-fluent PPA subtype characterized, at least in part, by an underlying disturbance in speech motor control. The trend toward improved classifier performance for quantitative rate measures demonstrates the potential for a more accurate and reliable approach to subtyping in the fluency domain, and suggests that articulation rate may be a useful input variable as part of a multi-dimensional clinical subtyping approach. PMID:28757671
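The ROC analysis used to compare classifiers reduces to scoring each participant by (negated) articulation rate. A sketch with synthetic rates (group sizes match the abstract, but the values are invented):

```python
# Hedged sketch: ROC analysis of articulation rate as a classifier separating
# nfvPPA from a more fluent group, as in the abstract. Rates are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(7)
# Label 1 = nfvPPA, 0 = svPPA; slower articulation expected for nfvPPA.
rates = np.concatenate([rng.normal(3.1, 0.6, 11), rng.normal(5.2, 0.7, 13)])
labels = np.concatenate([np.ones(11), np.zeros(13)])

# A lower rate should indicate nfvPPA, so score with the negated rate.
auc = roc_auc_score(labels, -rates)
fpr, tpr, thresholds = roc_curve(labels, -rates)
print(f"AUC = {auc:.2f}")
```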
Zhu, Ying; Zhang, Yun-Xia; Liu, Wen-Wen; Ma, Yan; Fang, Qun; Yao, Bo
2015-04-01
This paper describes a nanoliter droplet array-based single-cell reverse transcription quantitative PCR (RT-qPCR) assay method for quantifying gene expression in individual cells. By sequentially printing nanoliter-scale droplets on a microchip using a microfluidic robot, all liquid-handling operations, including cell encapsulation, lysis, reverse transcription, and quantitative PCR with real-time fluorescence detection, can be performed automatically. The inhibition effect of the cell suspension buffer on the RT-PCR assay was comprehensively studied to achieve high-sensitivity gene quantification. The present system was applied to the quantitative measurement of the expression level of mir-122 in single Huh-7 cells. A wide distribution of mir-122 expression in single cells, from 3061 copies/cell to 79998 copies/cell, was observed, showing a high level of cell heterogeneity. With the advantages of fully automated liquid handling, a simple system structure, and flexibility in achieving multi-step operations, the present method provides a novel liquid-handling mode for single-cell gene expression analysis and has significant potential in transcriptional identification and rare cell analysis.
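Absolute copy numbers per cell are conventionally read off a dilution-series standard curve; a sketch of that calculation with invented slope, intercept, and Ct values (the paper's calibration details are not given here):

```python
# Hedged sketch: absolute copy-number estimation from RT-qPCR Ct values via a
# dilution-series standard curve. All values below are illustrative.
import numpy as np

# Standard curve from a synthetic mir-122 dilution series.
log10_copies = np.array([2, 3, 4, 5, 6], dtype=float)
ct_standards = np.array([33.1, 29.7, 26.4, 23.0, 19.6])
m, b = np.polyfit(log10_copies, ct_standards, 1)   # Ct = m*log10(N) + b
efficiency = 10 ** (-1.0 / m) - 1.0                # ~1.0 means 100% efficient

# Single-cell Ct readings -> estimated copies per cell.
ct_cells = np.array([25.8, 21.3, 28.9])
copies = 10 ** ((ct_cells - b) / m)
print(f"slope={m:.2f}, efficiency={efficiency:.0%}, copies={copies.round(0)}")
```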
1990-05-30
phase HPLC using an IBM Instruments Inc. model LC 9533 ternary liquid chromatograph attached to a model F9522 fixed UV module and a model F9523 ... acid analyses are done by separation and quantitation of phenylthiocarbamyl amino acid derivatives using a second IBM model LC 9533 ternary liquid ... computer which controls the HPLC and an IBM Instruments Inc. model LC 9505 automatic sampler. The hemoglobin present in the effluent from large
JPRS Report, Science & Technology, Japan, 4th Intelligent Robots Symposium, Volume 2
1989-03-16
accidents caused by strikes by robots, a quantitative model for safety evaluation, and evaluations of actual systems in order to contribute to ... Mobile Robot Position Referencing Using Map-Based Vision Systems ... Safety Evaluation of Man-Robot System ... Fuzzy Path Pattern of Automatic ... camera are made after the robot stops to prevent damage from occurring through obstacle interference. The position of the camera is indicated on the
Measuring social interaction in music ensembles
Volpe, Gualtiero; D'Ausilio, Alessandro; Badino, Leonardo; Camurri, Antonio; Fadiga, Luciano
2016-01-01
Music ensembles are an ideal test-bed for quantitative analysis of social interaction. Music is an inherently social activity, and music ensembles offer a broad variety of scenarios which are particularly suitable for investigation. Small ensembles, such as string quartets, are deemed a significant example of self-managed teams, where all musicians contribute equally to a task. In bigger ensembles, such as orchestras, the relationship between a leader (the conductor) and a group of followers (the musicians) clearly emerges. This paper presents an overview of recent research on social interaction in music ensembles with a particular focus on (i) studies from cognitive neuroscience; and (ii) studies adopting a computational approach for carrying out automatic quantitative analysis of ensemble music performances. PMID:27069054
Espresso coffee foam delays cooling of the liquid phase.
Arii, Yasuhiro; Nishizawa, Kaho
2017-04-01
Espresso coffee foam, called crema, is known to be a marker of the quality of espresso coffee extraction. However, the role of foam in coffee temperature has not been quantitatively clarified. In this study, we used an automatic machine for espresso coffee extraction. We evaluated whether the foam prepared using the machine was suitable for foam analysis. After extraction, the percentage and consistency of the foam were measured using various techniques, and changes in the foam volume were tracked over time. Our extraction method, therefore, allowed consistent preparation of high-quality foam. We also quantitatively determined that the foam phase slowed cooling of the liquid phase after extraction. High-quality foam plays an important role in delaying the cooling of espresso coffee.
NASA Technical Reports Server (NTRS)
Rey, P. A.; Gourinard, Y.; Cambou, F. (Principal Investigator); Guyader, J. C.; Gouaux, P.; Letoan, T.; Monchant, M.; Donville, B.; Loubet, D.
1973-01-01
The author has identified the following significant results of the ARNICA program (February-December 1973): (1) The quantitative processing of ERTS-1 data was developed along two lines: the study of geological structures and lineaments of Spanish Catalonia, and the phytogeographical study of the forest region of the Landes of Gascony (France). In both cases it is shown that the ERTS-1 imagery can be used to establish zonings of equal quantitative interpretation value. (2) In keeping with the operational transfer program proposed in previous reports, linking exploration of the imagery to charting of the object, a precise data processing method was developed, concerning in particular the selection of digital equidensity samples, computer display, and rigorous referencing.
Dynamic Quantitative Trait Locus Analysis of Plant Phenomic Data.
Li, Zitong; Sillanpää, Mikko J
2015-12-01
Advanced platforms have recently become available for automatic and systematic quantification of plant growth and development. These new techniques can efficiently produce multiple measurements of phenotypes over time, and introduce time as an extra dimension to quantitative trait locus (QTL) studies. Functional mapping utilizes a class of statistical models for identifying QTLs associated with the growth characteristics of interest. A major benefit of functional mapping is that it integrates information over multiple timepoints, and therefore could increase the statistical power for QTL detection. We review the current development of computationally efficient functional mapping methods which provide invaluable tools for analyzing large-scale timecourse data that are readily available in our post-genome era.
The application of high-speed cinematography for the quantitative analysis of equine locomotion.
Fredricson, I; Drevemo, S; Dalin, G; Hjertën, G; Björne, K
1980-04-01
Locomotor disorders constitute a serious problem in horse racing which will only be rectified by a better understanding of the causative factors associated with disturbances of gait. This study describes a system for the quantitative analysis of the locomotion of horses at speed. The method is based on high-speed cinematography with a semi-automatic system for analysis of the films. The recordings are made with a 16 mm high-speed camera run at 500 frames per second (fps) and the films are analysed using special film-reading equipment and a mini-computer. The time and linear gait variables are presented in tabular form, and the angles and trajectories of the joints and body segments are presented graphically.
Segmenting Bone Parts for Bone Age Assessment using Point Distribution Model and Contour Modelling
NASA Astrophysics Data System (ADS)
Kaur, Amandeep; Singh Mann, Kulwinder, Dr.
2018-01-01
Bone age assessment (BAA) is a task performed on radiographs by pediatricians in hospitals to predict final adult height and to diagnose growth disorders by monitoring skeletal development. In building an automatic bone age assessment system, the routine first step is pre-processing of the bone X-rays so that a feature row can be constructed. In this research paper, an enhanced point distribution algorithm using contours has been implemented for segmenting bone parts according to the well-established bone age assessment procedure; this supports construction of the feature row and, later, of the automatic bone age assessment system. Implementation of the segmentation algorithm shows a high degree of accuracy, in terms of recall and precision, in segmenting bone parts from left-hand X-rays.
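A point distribution model of the kind named in the title can be sketched as Procrustes alignment followed by PCA over landmark coordinates. A minimal illustration with placeholder landmarks (not the paper's data or exact algorithm):

```python
# Hedged sketch of a point distribution model: Procrustes-align hand-bone
# landmark sets, then PCA over the aligned coordinates to get shape modes.
# Landmarks are random placeholders; a real PDM uses annotated X-rays.
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(3)
n_shapes, n_landmarks = 40, 30
base = rng.random((n_landmarks, 2))
shapes = [base + rng.normal(0, 0.02, base.shape) for _ in range(n_shapes)]

# Align every shape to the first one and flatten to coordinate vectors.
aligned = []
for s in shapes:
    _, s_aligned, _ = procrustes(shapes[0], s)
    aligned.append(s_aligned.ravel())
X = np.array(aligned)

# PCA: shape modes are the leading right singular vectors.
mean_shape = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
explained = (S ** 2) / np.sum(S ** 2)
print(f"first 3 modes explain {explained[:3].sum():.0%} of shape variance")
```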
Automatic quality assessment of planetary images
NASA Astrophysics Data System (ADS)
Sidiropoulos, P.; Muller, J.-P.
2015-10-01
A significant fraction of planetary images are corrupted beyond the point that much scientific meaning can be extracted. For example, transmission errors result in missing data which is unrecoverable. The available planetary image datasets include many such "bad data", which both occupy valuable scientific storage resources and create false impressions about planetary image availability for specific planetary objects or target areas. In this work, we demonstrate a pipeline that we have developed to automatically assess the quality of planetary images. Additionally, this method discriminates between different types of image degradation, such as low quality originating from camera flaws or low quality triggered by atmospheric conditions, etc. Examples of quality assessment results for Viking Orbiter imagery will also be presented.
Engineering studies related to Skylab program. [assessment of automatic gain control data
NASA Technical Reports Server (NTRS)
Hayne, G. S.
1973-01-01
The relationship between the S-193 automatic gain control data and the magnitude of received signal power was studied in order to characterize performance parameters for Skylab equipment. The r-factor, used for the assessment, is defined to be less than unity and a function of off-nadir angle, ocean surface roughness, and receiver signal-to-noise ratio. A digital computer simulation was also used to assess the effect of additive receiver (white) noise. The system model for the digital simulation is described, along with the intermediate frequency and video impulse response functions used, details of the input waveforms, and results to date. A specific discussion of the digital computer programs used is also provided.
Peripheral Quantitative CT (pQCT) Using a Dedicated Extremity Cone-Beam CT Scanner
Muhit, A. A.; Arora, S.; Ogawa, M.; Ding, Y.; Zbijewski, W.; Stayman, J. W.; Thawait, G.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Bingham, C.O.; Means, K.; Carrino, J. A.; Siewerdsen, J. H.
2014-01-01
Purpose We describe the initial assessment of the peripheral quantitative CT (pQCT) imaging capabilities of a cone-beam CT (CBCT) scanner dedicated to musculoskeletal extremity imaging. The aim is to accurately measure and quantify bone and joint morphology using information automatically acquired with each CBCT scan, thereby reducing the need for a separate pQCT exam. Methods A prototype CBCT scanner providing isotropic, sub-millimeter spatial resolution and soft-tissue contrast resolution comparable or superior to standard multi-detector CT (MDCT) has been developed for extremity imaging, including the capability for weight-bearing exams and multi-mode (radiography, fluoroscopy, and volumetric) imaging. Assessment of pQCT performance included measurement of bone mineral density (BMD), morphometric parameters of subchondral bone architecture, and joint space analysis. Measurements employed phantoms, cadavers, and patients from an ongoing pilot study imaged with the CBCT prototype (at various acquisition, calibration, and reconstruction techniques) in comparison to MDCT (using pQCT protocols for analysis of BMD) and micro-CT (for analysis of subchondral morphometry). Results The CBCT extremity scanner yielded BMD measurement within ±2–3% error in both phantom studies and cadaver extremity specimens. Subchondral bone architecture (bone volume fraction, trabecular thickness, degree of anisotropy, and structure model index) exhibited good correlation with gold standard micro-CT (error ~5%), surpassing the conventional limitations of spatial resolution in clinical MDCT scanners. Joint space analysis demonstrated the potential for sensitive 3D joint space mapping beyond that of qualitative radiographic scores in application to non-weight-bearing versus weight-bearing lower extremities and assessment of phalangeal joint space integrity in the upper extremities. Conclusion The CBCT extremity scanner demonstrated promising initial results in accurate pQCT analysis from images acquired with each CBCT scan. Future studies will include improved x-ray scatter correction and image reconstruction techniques to further improve accuracy and to correlate pQCT metrics with known pathology. PMID:25076823
Shao, Yuhao; Yin, Xiaoxi; Kang, Dian; Shen, Boyu; Zhu, Zhangpei; Li, Xinuo; Li, Haofeng; Xie, Lin; Wang, Guangji; Liang, Yan
2017-08-01
Liquid chromatography mass spectrometry-based methods provide powerful tools for protein analysis. Cytochromes P450 (CYPs), the most important drug-metabolizing enzymes, exhibit sex-dependent expression patterns and metabolic activities. To date, mass spectrometry-based analysis of CYPs still faces critical technical challenges due to the complexity and diversity of CYP isoforms and the lack of corresponding standards. The aim of the present work was to develop a label-free qualitative and quantitative strategy for endogenous proteins and to apply it to the study of gender differences in CYPs in rat liver microsomes (RLMs). Initially, trypsin-digested RLM specimens were analyzed by nanoLC-LTQ-Orbitrap MS/MS. Skyline, an open-source and freely available software package for targeted proteomics research, was then used to screen the main CYP isoforms in RLMs automatically under a series of criteria, and a total of 40 and 39 CYP isoforms were identified in male and female RLMs, respectively. More importantly, a robust quantitative method in tandem mass spectrometry-multiple reaction monitoring mode (MS/MS-MRM) was built and optimized with the help of Skyline, and successfully applied to the study of CYP gender differences in RLMs. In this process, a simple and accurate approach named 'Standard Curve Slope' (SCS) was established, based on the difference between the standard-curve slopes of CYPs in female and male RLMs, in order to assess the gender difference of CYPs in RLMs. The methodology and approach developed here could be widely used in studies of protein regulation in drug pharmacology research.
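A minimal sketch of the slope-ratio idea behind SCS, assuming synthetic peak areas and illustrative loading amounts (the paper's actual calibration design is not given here):

```python
# Hedged sketch of the 'Standard Curve Slope' (SCS) idea: fit a calibration
# line per group and take the slope ratio as the relative abundance between
# female and male microsomes. Peak-area values below are synthetic.
import numpy as np

amount = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # microsomal protein loaded
area_f = np.array([1.1, 2.1, 4.3, 8.2, 16.5])      # female RLM peak areas
area_m = np.array([0.4, 0.9, 1.7, 3.6, 7.1])       # male RLM peak areas

slope_f, _ = np.polyfit(amount, area_f, 1)
slope_m, _ = np.polyfit(amount, area_m, 1)
print(f"female/male expression ratio ~ {slope_f / slope_m:.2f}")
```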
Richards, Lisa M.; Towle, Erica L.; Fox, Douglas J.; Dunn, Andrew K.
2014-01-01
Although multiple intraoperative cerebral blood flow (CBF) monitoring techniques are currently available, a quantitative method that allows for continuous monitoring and that can be easily integrated into the surgical workflow is still needed. Laser speckle contrast imaging (LSCI) is an optical imaging technique with a high spatiotemporal resolution that has recently been demonstrated as feasible and effective for intraoperative monitoring of CBF during neurosurgical procedures. This study demonstrates the impact of retrospective motion correction on the quantitative analysis of intraoperatively acquired LSCI images. LSCI images were acquired through a surgical microscope during brain tumor resection procedures from 10 patients under baseline conditions and after a cortical stimulation in three of those patients. The patient's electrocardiogram (ECG) was recorded during acquisition for post-process correction of pulsatile artifacts. Automatic image registration was retrospectively performed to correct for tissue motion artifacts, and the performance of rigid and nonrigid transformations was compared. In baseline cases, the original images had 25%±27% noise across 16 regions of interest (ROIs). ECG filtering moderately reduced the noise to 20%±21%, while image registration further reduced it to 15%±4%. Combined ECG filtering and image registration significantly reduced the noise to 6.2%±2.6% (p<0.05). Using the combined motion correction, accuracy and sensitivity to small changes in CBF were improved in the cortical stimulation cases. There was also excellent agreement between rigid and nonrigid registration methods (15/16 ROIs with <3% difference). Results from this study demonstrate the importance of motion correction for improved visualization of CBF changes in clinical LSCI images. PMID:26157974
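Translation-only registration plus the ROI noise metric (temporal standard deviation over mean) can be sketched with scikit-image; the drifting image stack is synthetic, and the paper's registration method may differ:

```python
# Hedged sketch: rigid (translation-only) registration of a speckle-contrast
# image stack and an ROI noise metric (temporal std/mean) used to compare
# corrections. The stack is synthetic, not real LSCI data.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
ref = rng.random((128, 128))
stack = [nd_shift(ref, (dy, dx), mode="nearest") + rng.normal(0, 0.01, ref.shape)
         for dy, dx in rng.normal(0, 2.0, (30, 2))]   # drifting frames

def roi_noise(frames, sl=(slice(40, 60), slice(40, 60))):
    roi = np.stack([f[sl] for f in frames])
    trace = roi.mean(axis=(1, 2))
    return trace.std() / trace.mean()

registered = []
for frame in stack:
    dyx, _, _ = phase_cross_correlation(ref, frame, upsample_factor=10)
    registered.append(nd_shift(frame, dyx, mode="nearest"))

print(f"noise before: {roi_noise(stack):.1%}, after: {roi_noise(registered):.1%}")
```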
Lin, Kun-Ju; Huang, Jia-Yann; Chen, Yung-Sheng
2011-12-01
Glomerular filtration rate (GFR) is a commonly accepted standard estimate of renal function. Gamma camera-based methods for estimating renal uptake of (99m)Tc-diethylenetriaminepentaacetic acid (DTPA) without blood or urine sampling have been widely used; of these, the method introduced by Gates is the most common. Currently, most gamma cameras are equipped with a commercial program for GFR determination, a semi-quantitative analysis based on manually drawing a region of interest (ROI) over each kidney. The GFR value can then be computed automatically from the scintigraphic determination of (99m)Tc-DTPA uptake within the kidney. Delineating the kidney area is difficult when applying a fixed threshold value; moreover, hand-drawn ROIs are tedious, time-consuming, and highly dependent on operator skill. Thus, we developed a fully automatic renal ROI estimation system, based on the temporal changes in intensity counts, the intensity-pair distribution image contrast enhancement method, adaptive thresholding, and morphological operations, that can locate the kidney area and obtain the GFR value from a (99m)Tc-DTPA renogram. To evaluate the performance of the proposed approach, 30 clinical dynamic renograms were introduced. The fully automatic approach failed in one patient with very poor renal function. Four patients had a unilateral kidney, and the others had bilateral kidneys. The automatic contours from the remaining 54 kidneys were compared with manually drawn contours, and the 54 kidneys were included in area-error and boundary-error analyses. There was high correlation between two physicians' manual contours and the contours obtained by our approach. For the area-error analysis, the mean true-positive area overlap is 91%, the mean false negative is 13.4%, and the mean false positive is 9.3%. The boundary error is 1.6 pixels. The GFR calculated using this automatic computer-aided approach is reproducible and may be applied to help nuclear medicine physicians in clinical practice.
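The thresholding and morphology stages of such a pipeline can be illustrated compactly. A sketch under simplifying assumptions (a synthetic summed frame; the paper's intensity-pair enhancement is approximated here by a percentile contrast stretch):

```python
# Hedged sketch: contrast stretch, adaptive (Otsu) threshold, morphological
# opening, and retention of the two largest blobs as kidney ROI candidates.
# The input image is synthetic, not a real renogram frame.
import numpy as np
from skimage import exposure, filters, measure, morphology

rng = np.random.default_rng(5)
img = rng.normal(10, 2, (128, 128))
img[30:70, 20:50] += 25    # left "kidney"
img[35:75, 80:110] += 22   # right "kidney"

# Contrast enhancement, then adaptive global threshold.
p2, p98 = np.percentile(img, (2, 98))
stretched = exposure.rescale_intensity(img, in_range=(p2, p98))
mask = stretched > filters.threshold_otsu(stretched)
mask = morphology.binary_opening(mask, morphology.disk(3))

# Keep the two largest connected components as kidney ROIs.
labels = measure.label(mask)
regions = sorted(measure.regionprops(labels), key=lambda r: r.area)[-2:]
print([(r.label, r.area) for r in regions])
```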
NASA Astrophysics Data System (ADS)
Chaisaowong, Kraisorn; Kraus, Thomas
2014-03-01
Pleural thickenings can be caused by asbestos exposure and may evolve into malignant pleural mesothelioma. While early diagnosis is key to early treatment, and therefore to reducing morbidity, the growth rate of a pleural thickening can in turn be essential evidence for an early diagnosis of pleural mesothelioma. The detection of pleural thickenings is today done by visual inspection of CT data, which is time-consuming and subject to the physician's judgment. Computer-assisted diagnosis systems to automatically assess pleural mesothelioma have been reported worldwide. In this paper, an image analysis pipeline to automatically detect pleural thickenings and measure their volume is described. We first delineate automatically the pleural contour in the CT images. An adaptive surface-based smoothing technique is then applied to the pleural contours to identify all potential thickenings. A subsequent tissue-specific, topology-oriented detection step, based on a probabilistic Hounsfield-unit model of pleural plaques, then identifies the genuine pleural thickenings among them. The assessment of the detected pleural thickenings is based on the volumetry of a 3D model created by a mesh construction algorithm followed by a Laplace-Beltrami eigenfunction-expansion surface smoothing technique. Finally, the spatiotemporal matching of pleural thickenings from consecutive CT data is carried out based on semi-automatic lung registration, enabling assessment of their growth rate. With these methods, a new computer-assisted diagnosis system is presented to ensure a precise and reproducible assessment of pleural thickenings towards the diagnosis of pleural mesothelioma in its early stage.
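For the volumetry and growth-rate step, a voxel-counting stand-in conveys the idea; the paper's volumes come from a smoothed 3D mesh, so the sketch below is purely illustrative:

```python
# Hedged sketch: thickening volumetry and growth rate from binary masks at
# two time points. Voxel counting is a minimal stand-in for the paper's
# mesh-based volumetry; spacing and masks are invented.
import numpy as np

def mask_volume_mm3(mask, spacing_mm=(0.7, 0.7, 1.0)):
    """Volume of a binary 3D mask given voxel spacing (x, y, z) in mm."""
    return mask.sum() * np.prod(spacing_mm)

t0 = np.zeros((64, 64, 40), dtype=bool); t0[20:30, 20:28, 10:14] = True
t1 = np.zeros_like(t0);                  t1[19:31, 19:29, 10:15] = True

v0, v1 = mask_volume_mm3(t0), mask_volume_mm3(t1)
months_between_scans = 6.0
growth_rate = (v1 - v0) / months_between_scans
print(f"V0={v0:.0f} mm^3, V1={v1:.0f} mm^3, growth={growth_rate:.0f} mm^3/month")
```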
NASA Astrophysics Data System (ADS)
Glazkov, Yury; Artjuchin, Yury; Astakhov, Alexander; Vas'kov, Alexander; Malyshev, Veniamin; Mitroshin, Edward; Glinsky, Valery; Moiseenko, Vasily; Makovlev, Vyacheslav
The development of aircraft-type reusable space vehicles (RSVs) raises the problem of full compatibility among automatic, director, and manual control. Solving this task is complicated, in particular, by considerable quantitative and qualitative changes in the vehicle's dynamic characteristics, the small stability margins (or even instability) of the RSV, and stringent control-accuracy requirements in some flight phases. Moreover, during control the pilot is subjected to g-loads, which hamper motor activity and degrade its accuracy, alter the functional status of the visual analyser, and affect higher nervous activity. Studying g-load effects on control efficiency, especially in the manual and director modes, is therefore of primary importance. The main tools for rationally selecting manual and director control systems, and for formulating recommendations on optimal crew interaction with the automatic control system, are dedicated complex and functional flight-simulator test stands. The proposed simulator stand includes a powerful digital computer complex combined with the control system of the centrifuge. The interior of the pilot's cabin is reproduced. A situation image system, a psycho-physical monitoring system, and physician, centrifuge operator, and instructor stations are linked with the test stand.
Automatic vasculature identification in coronary angiograms by adaptive geometrical tracking.
Xiao, Ruoxiu; Yang, Jian; Goyal, Mahima; Liu, Yue; Wang, Yongtian
2013-01-01
Owing to the uneven distribution of contrast agent and the perspective projection of X-ray imaging, the vasculature in angiographic images has low contrast and is generally superimposed on other tissues; it is therefore very difficult to identify the vasculature and quantitatively estimate blood flow directly from angiographic images. In this paper, we propose a fully automatic algorithm named adaptive geometrical vessel tracking (AGVT) for coronary artery identification in X-ray angiograms. Initially, a ridge enhancement (RE) image is obtained utilizing multiscale Hessian information. Then, automatic initialization procedures, including seed point detection and initial direction determination, are performed on the RE image. The extracted ridge points can be adjusted to the geometrical centerline points adaptively through diameter estimation. Bifurcations are identified by discriminating the connectivity of the tracked ridge points. Finally, all the tracked centerlines are merged and smoothed by classifying the connecting components on the vascular structures. Synthetic angiographic images and clinical angiograms are used to evaluate the performance of the proposed algorithm. The proposed algorithm is compared with two other vascular tracking techniques in terms of efficiency and accuracy, demonstrating successful application of the proposed segmentation and extraction scheme to vasculature identification.
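One concrete way to build a ridge-enhanced image from multiscale Hessian information is a Frangi-type vesselness filter; the paper's own RE formulation may differ. A minimal sketch on a synthetic angiogram:

```python
# Hedged sketch: a multiscale Hessian vesselness filter (Frangi) as one
# possible realization of a ridge-enhanced (RE) image like the one AGVT
# starts from. The "angiogram" below is synthetic.
import numpy as np
from skimage.filters import frangi

rng = np.random.default_rng(2)
angiogram = np.full((128, 128), 0.8) + rng.normal(0, 0.02, (128, 128))
rr = np.arange(128)
angiogram[rr, rr] -= 0.3            # a dark diagonal "vessel"
angiogram[rr[:-1], rr[:-1] + 1] -= 0.25

# Vessels appear dark in X-ray angiograms, hence black_ridges=True.
re_image = frangi(angiogram, sigmas=range(1, 6), black_ridges=True)
print(f"max vesselness on the vessel: {re_image[rr, rr].max():.3f}")
```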
Bayır, Şafak
2016-01-01
With advances in computing, methods and techniques in automatic image processing and analysis provide the opportunity to detect change and degeneration in retinal images automatically. Localization of the optic disc is extremely important in computer-aided eye-disease diagnosis systems for determining hard exudate lesions or neovascularization, which characterizes the later phase of diabetic retinopathy. Whereas optic disc detection is a fairly easy process in normal retinal images, detecting this region in retinal images affected by diabetic retinopathy may be difficult. Sometimes, in machine-learning terms, information related to the optic disc and hard exudates can be very similar. We present a novel approach for efficient and accurate localization of the optic disc in retinal images containing noise and other lesions. This approach comprises five main steps: image processing, keypoint extraction, texture analysis, a visual dictionary, and classifier techniques. We tested our proposed technique on 3 public datasets and obtained quantitative results. Experimental results show that an average optic disc detection accuracy of 94.38%, 95.00%, and 90.00% is achieved, respectively, on the following public datasets: DIARETDB1, DRIVE, and ROC. PMID:27110272
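The visual-dictionary step can be sketched as a bag-of-visual-words pipeline: descriptors are clustered into words and each image becomes a word histogram fed to a classifier. The descriptors (ORB), cluster count, and classifier below are assumptions for illustration, not the paper's exact choices:

```python
# Hedged sketch of a visual-dictionary pipeline: keypoint descriptors are
# clustered into "visual words", each image becomes a word histogram, and a
# classifier separates disc-containing from non-disc patches. Images and
# labels are random placeholders.
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (96, 96), dtype=np.uint8) for _ in range(20)]
labels = rng.integers(0, 2, 20)                    # 1 = contains optic disc

orb = cv2.ORB_create(nfeatures=100)
descs = [orb.detectAndCompute(im, None)[1] for im in images]
descs = [d for d in descs if d is not None]
kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack(descs))

def word_histogram(im):
    d = orb.detectAndCompute(im, None)[1]
    if d is None:
        return np.zeros(16)
    words = kmeans.predict(d.astype(float))
    return np.bincount(words, minlength=16) / len(words)

X = np.array([word_histogram(im) for im in images])
clf = SVC().fit(X, labels)
print(f"training accuracy: {clf.score(X, labels):.2f}")
```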
NASA Astrophysics Data System (ADS)
Soliz, P.; Davis, B.; Murray, V.; Pattichis, M.; Barriga, S.; Russell, S.
2010-03-01
This paper presents an image processing technique for automatically categorizing age-related macular degeneration (AMD) phenotypes from retinal images. Ultimately, an automated approach will be much more precise and consistent in phenotyping retinal diseases such as AMD. We have applied the automated phenotyping to retinal images from a cohort of mono- and dizygotic twins. The application of this technology will allow one to perform more quantitative studies that will lead to a better understanding of the genetic and environmental factors associated with diseases such as AMD. A method for classifying retinal images based on features derived from the application of amplitude-modulation frequency-modulation (AM-FM) methods is presented. Retinal images from identical and fraternal twins who presented with AMD were processed to determine whether AM-FM could be used to differentiate between the two types of twins. Results of the automatic classifier agreed with the findings of other researchers in explaining the variation of the disease between the related twins. AM-FM features classified 72% of the twins correctly. Visual grading found that genetics could explain between 46% and 71% of the variance.
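AM-FM demodulation can be illustrated in one dimension via the analytic signal; this row-wise sketch is a simplified stand-in for the 2-D AM-FM features used in the paper:

```python
# Hedged sketch: 1-D AM-FM demodulation via the analytic signal, applied
# row-wise as a crude stand-in for 2-D AM-FM image features. The "image
# row" is a synthetic chirp with drifting amplitude.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(1)
n = 256
x = np.linspace(0, 1, n)
row = (1 + 0.5 * np.sin(2 * np.pi * 2 * x)) * np.sin(2 * np.pi * (20 * x + 8 * x**2))
row += rng.normal(0, 0.05, n)

analytic = hilbert(row)
amplitude = np.abs(analytic)                          # AM component
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi) * n          # FM component

# Histogram summaries of AM/FM estimates would serve as classifier features.
print(f"mean |a(x)| = {amplitude.mean():.2f}, median f(x) = {np.median(inst_freq):.1f}")
```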
Automated retinal vessel type classification in color fundus images
NASA Astrophysics Data System (ADS)
Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.
2013-02-01
Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and identifying vessel abnormalities and alterations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. This method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and a partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted on each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins. We tested the proposed method on a previously unseen test data set consisting of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of the AVR measurement and an AUC of 91.5% in the ROI of the tortuosity measurement. The proposed AV classification method has the potential to assist automatic early detection and risk analysis of cardiovascular disease.
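A PLS classifier of the kind described (PLS-DA: regress class labels on features, then rank or threshold the continuous output) can be sketched as follows; the feature values are synthetic placeholders:

```python
# Hedged sketch: PLS used as a two-class artery/vein discriminator (PLS-DA)
# on per-segment color and morphology features. Features below are synthetic
# stand-ins, not the paper's actual descriptors.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
n = 400
y = rng.integers(0, 2, n)                      # 1 = artery, 0 = vein
# Toy features with a class-dependent shift (e.g., arteries brighter in red).
X = rng.normal(0, 1, (n, 6)) + np.outer(y, [0.8, -0.4, 0.3, 0.0, 0.2, -0.3])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=3).fit(X_tr, y_tr.astype(float))
scores = pls.predict(X_te).ravel()             # continuous class score
print(f"AUC = {roc_auc_score(y_te, scores):.2f}")
```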
Hashimoto, Shinichi; Ogihara, Hiroyuki; Suenaga, Masato; Fujita, Yusuke; Terai, Shuji; Hamamoto, Yoshihiko; Sakaida, Isao
2017-08-01
Visibility in capsule endoscopic images is presently evaluated through intermittent analysis of frames selected by a physician; it is thus subjective and not quantitative. A method to automatically quantify the visibility of capsule endoscopic images has not been reported. Generally, when designing automated image recognition programs, physicians must provide training images; this process is called supervised learning. We aimed to develop a novel automated self-learning quantification system to identify visible areas on capsule endoscopic images. The technique was developed using 200 capsule endoscopic images retrospectively selected from each of three patients. The rate of detection of visible areas on capsule endoscopic images was compared between a supervised learning program, using training images labeled by a physician, and our novel automated self-learning program, using unlabeled training images without intervention by a physician. The rate of detection of visible areas was equivalent for the supervised learning program and for our automatic self-learning program. The visible areas automatically identified by the self-learning program correlated with the areas identified by an experienced physician. We developed a novel self-learning automated program to identify visible areas in capsule endoscopic images.
Automatic detection and severity measurement of eczema using image processing.
Alam, Md Nafiul; Munia, Tamanna Tabassum Khan; Tavakolian, Kouhyar; Vasefi, Fartash; MacKinnon, Nick; Fazel-Rezai, Reza
2016-08-01
Chronic skin diseases like eczema may lead to severe health and financial consequences for patients if not detected and controlled early. Early measurement of disease severity, combined with a recommendation for skin protection and use of appropriate medication, can prevent the disease from worsening. Current diagnosis can be costly and time-consuming. In this paper, an automatic eczema detection and severity measurement model is presented using modern image processing and computer algorithms. The system can successfully detect regions of eczema and classify the identified region as mild or severe based on image color and texture features. The model then automatically measures the skin parameters used in the most common assessment tool, the Eczema Area and Severity Index (EASI), by computing an eczema area score, an eczema intensity score, and a body-region score, allowing both patients and physicians to assess the affected skin accurately.
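The EASI composition itself is a fixed weighted sum. A sketch using the standard adult region multipliers, with illustrative per-region inputs of the kind the image pipeline would produce:

```python
# Hedged sketch: composing an EASI score from per-region area and intensity
# scores. The adult region multipliers are the standard published ones; the
# example inputs are invented placeholders.
REGION_WEIGHTS = {"head_neck": 0.1, "upper_limbs": 0.2,
                  "trunk": 0.3, "lower_limbs": 0.4}

def easi(region_scores):
    """region_scores: {region: (area score 0-6, [4 sign intensities 0-3])}"""
    total = 0.0
    for region, (area, signs) in region_scores.items():
        assert 0 <= area <= 6 and len(signs) == 4
        total += REGION_WEIGHTS[region] * sum(signs) * area
    return total  # ranges from 0 (clear) to 72 (maximal severity)

example = {"head_neck": (2, [1, 1, 0, 0]), "upper_limbs": (3, [2, 1, 1, 0]),
           "trunk": (1, [1, 0, 0, 0]), "lower_limbs": (4, [2, 2, 1, 1])}
print(f"EASI = {easi(example):.1f}")
```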
Ahlers, C; Simader, C; Geitzenauer, W; Stock, G; Stetson, P; Dastmalchi, S; Schmidt-Erfurth, U
2008-02-01
A limited number of scans compromises the ability of conventional optical coherence tomography (OCT) to track chorioretinal disease in its full extension. Failures in edge-detection algorithms falsify the results of retinal mapping even further. High-definition OCT (HD-OCT) is based on raster scanning and was used to visualise the localisation and volume of intra- and sub-retinal-pigment-epithelium (RPE) changes in fibrovascular pigment epithelial detachments (fPED). Two different scanning patterns were evaluated. 22 eyes with fPED were imaged using a frequency-domain, high-speed prototype of the Cirrus HD-OCT. The axial resolution was 6 µm, and the scanning speed was 25,000 A-scans/s. Two different scanning patterns covering an area of 6 × 6 mm in the macular retina were compared. Three-dimensional topographic reconstructions and volume calculations were performed using MATLAB-based automatic segmentation software. Detailed information about the layer-specific distribution of fluid accumulation and volumetric measurements can be obtained for retinal and sub-RPE volumes. Both raster scans show a high correlation (p < 0.01; R² > 0.89) of measured values, that is, PED volume/area, retinal volume, and mean retinal thickness. Quality control of the automatic segmentation revealed reasonable results in over 90% of the examinations. Automatic segmentation allows detailed quantitative and topographic analysis of the RPE and the overlying retina. In fPED, the 128 × 512 scanning pattern shows mild advantages compared with the 256 × 256 scan. Together with the ability for automatic segmentation, HD-OCT clearly improves the clinical monitoring of chorioretinal disease by adding relevant new parameters. HD-OCT is likely capable of enhancing the understanding of pathophysiology and the benefits of treatment for current anti-CNV strategies in the future.
Nara, Osamu
2011-01-24
I describe an interchangeable twin-vessel (J, N) automatic glass recrystallizer that eliminates the time-consuming recovery and recycling of crystals during repeated recrystallization. The sample goes into the dissolution vessel J containing a magnetic stir-bar K; J is clamped to the upper joint H of the recrystallizer body D. The empty crystallization vessel N is clamped to the lower joint M. Pure solvent is delivered to the dissolution vessel and the crystallization vessel via the head of the condenser A. The crystallization vessel is heated (P). The dissolution reservoir is stirred and heated by the solvent vapor (F). Continuous outflow of filtrate E out of J keeps N at a stable boiling temperature. This results in efficient dissolution, evaporation, and separation of pure crystals Q. Pure solvent in the dissolution reservoir is recovered by suction. The empty dissolution and crystallization vessels are detached, the stirrer magnet is transferred to the crystallization vessel, and the roles of the vessels are then reversed. After evacuating the mother liquor out of the upper twin vessel, the apparatus is ready for the next automatic recrystallization once the twin vessels are refilled with pure solvent. I show successive automatic recrystallization of acetaminophen from diethyl ether, obtaining, after eight automatic recrystallizations at 96% yield per stage, acetaminophen with higher melting temperatures than the USP and JP reference standards. I also demonstrate a novel approach to the determination of absolute purity, requiring no reference standards, by combining successive automatic recrystallization with differential scanning calorimetry (DSC). This involves measuring the criterial melting temperature T(0) corresponding to the 100% pure material and the quantitative ΔT in DSC based on the van't Hoff law of melting-point depression. The purities of six commercial acetaminophen samples and reference standards, and of the eight-times-recrystallized product, were 98.8, 97.9, 99.1, 98.3, 98.4, and 98.5 mol%, and 99.3 mol%, respectively.
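The van't Hoff melting-point-depression relation underlying the DSC purity estimate can be written in its standard textbook form (the paper's exact notation may differ):

```latex
% Van't Hoff melting-point depression as used in DSC purity analysis
% (standard form, not necessarily the paper's own notation):
\[
T_s = T_0 - \frac{R\,T_0^{2}\,x_2}{\Delta H_f}\cdot\frac{1}{F}
\]
% T_s: sample temperature at melted fraction F; T_0: melting point of the
% 100% pure material; x_2: mole fraction of impurity; \Delta H_f: molar
% enthalpy of fusion. Plotting T_s against 1/F gives T_0 as the intercept
% and x_2 from the slope, so purity = 1 - x_2 without a reference standard.
```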
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehmann, J; The University of Sydney, Sydney; The University of Newcastle, Newcastle
Purpose: As prior work has shown that current DIBH monitoring approaches using surrogate measures (a marker block on the chest) do not always correspond with the clinical quantity of interest (lung depth, LD), a software tool and workflow are introduced that use MV fluoroscopy during treatment for real-time Live EPID-based Inspiration Level Assessment (LEILA). Methods: A prototype software tool calculates and displays the LD during the treatment of left-sided breast cancer. Calculations are based on MV cine images, which are acquired with the treatment beam, thereby not incurring any additional imaging dose. Image capture and processing are implemented using a dedicated frame-grabber computer. The calculation engine automatically detects image orientation and includes provisions for large treatment fields that exceed the size of the EPID panel. LD is measured along a line profile in the middle of the field. LEILA's interface displays the current MV image, a reference image (DRR), the current LD, and a trace of LD over treatment time. The display includes patient-specific LD tolerances. Tolerances are specified for each field and loaded before the treatment. A visual warning is generated when a tolerance is exceeded. LEILA is initially run in parallel with current DIBH techniques. When later run by itself, DIBH setup will be done using skin marks and the room lasers. Results: Offline tests of LEILA confirmed accurate automatic LD measurement for a variety of patient geometries. Deployment of the EPID during all left-sided breast treatments was well tolerated by patients and staff during a multi-month pilot. The frame grabber provides 11 frames per second; the MATLAB-based LEILA prototype software can analyze five frames per second standalone on standard desktop hardware. Conclusion: LEILA provides an automated approach to quantitatively monitor LD on MV images during DIBH treatment. Future improvements include a database and further speed optimization.
Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET.
Hatt, M; Lamare, F; Boussion, N; Turzo, A; Collet, C; Salzenstein, F; Roux, C; Jarritt, P; Carson, K; Cheze-Le Rest, C; Visvikis, D
2007-06-21
Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response-to-therapy evaluation and radiotherapy treatment planning. The objective of our study was to compare the performance of the proposed algorithm for automatic lesion volume delineation, namely fuzzy hidden Markov chains (FHMC), with that of threshold-based techniques representing the current state of the art in clinical practice. Like the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity, and spatial correlation in order to classify a voxel as background or functional VOI. However, the novelty of the fuzzy model consists of the inclusion of an estimation of imprecision, which should subsequently lead to a better modelling of the 'fuzzy' nature of the object-of-interest boundaries in emission tomography data. The performance of the algorithms was assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1), and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm³ and 64 mm³). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination or activity concentration recovery considering a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, the differences between the classification and volume estimation errors evaluated were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels in comparison to the threshold-based techniques. The analysis of both simulated and acquired datasets led to similar results and conclusions as far as the performance of the segmentation algorithms under evaluation is concerned.
Meteorological disaster management and assessment system design and implementation
NASA Astrophysics Data System (ADS)
Tang, Wei; Luo, Bin; Wu, Huanping
2009-09-01
With national economic development in recent years, disaster prevention and mitigation have received increasing attention from the Chinese government. Traditional disaster management exhibits problems such as chaotic data management, a low level of informatization, and poor data sharing. To improve information capabilities in disaster management, the Meteorological Disaster Management and Assessment System (MDMAS) was developed and is introduced in this paper. MDMAS uses a three-tier C/S architecture comprising an application layer, a data layer and a service layer. Current functions of MDMAS include typhoon and rainstorm assessment, disaster data query and statistics, and automatic cartography for disaster management. The typhoon and rainstorm assessment models can be used both for pre-disaster assessment and for post-disaster assessment. Automatic cartography is implemented using ArcGIS Geoprocessing and ModelBuilder. In practice, MDMAS has been used to provide warning information, disaster assessments and service products. MDMAS is an efficient tool for meteorological disaster management and assessment and can provide decision support for disaster prevention and mitigation.
Negoita, Madalina; Zolgharni, Massoud; Dadkho, Elham; Pernigo, Matteo; Mielewczik, Michael; Cole, Graham D; Dhutia, Niti M; Francis, Darrel P
2016-09-01
To determine the optimal frame rate at which reliable heart wall velocities can be assessed by speckle tracking. Assessing left ventricular function with speckle tracking is useful in patient diagnosis but requires a temporal resolution that can follow myocardial motion. In this study we investigated the effect of different frame rates on the accuracy of speckle tracking results, highlighting the temporal resolution at which reliable results can be obtained. 27 patients were scanned at two different frame rates at their resting heart rate. From all acquired loops, lower temporal resolution image sequences were generated by dropping frames, decreasing the frame rate by up to 10-fold. Tissue velocities were estimated by automated speckle tracking. Above 40 frames/s the peak velocity was reliably measured. When the frame rate was lower, the inter-frame interval containing the instant of highest velocity also contained lower velocities, and therefore the average velocity in that interval was an underestimate of the clinically desired instantaneous maximum velocity. The higher the frame rate, the more accurately maximum velocities are identified by speckle tracking; above 40 frames/s, however, there is little further increase in the measured peak velocity. We provide in an online supplement the vendor-independent software we used for automatic speckle-tracked velocity assessment to help others working in this field. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
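The frame-dropping experiment is easy to reproduce numerically. A sketch with a synthetic wall trajectory (the 1.2 Hz sinusoid and its amplitude are assumptions, not patient data) showing how inter-frame averaging underestimates the instantaneous peak velocity at low frame rates:

```python
import numpy as np

fs = 80.0                                   # acquisition frame rate (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)
pos = 10.0 * np.sin(2 * np.pi * 1.2 * t)    # synthetic wall position, mm

def peak_velocity(p, dt):
    """Peak inter-frame speed; at low frame rates this averages over the
    interval containing the true maximum and therefore underestimates it."""
    return (np.abs(np.diff(p)) / dt).max()

for keep in (1, 2, 4, 8):                   # 80, 40, 20, 10 frames/s
    print(f"{fs / keep:>4.0f} fps: peak = "
          f"{peak_velocity(pos[::keep], keep / fs):5.1f} mm/s")
```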
Description and recognition of faces from 3D data
NASA Astrophysics Data System (ADS)
Coombes, Anne M.; Richards, Robin; Linney, Alfred D.; Bruce, Vicki; Fright, Rick
1992-12-01
A method based on differential geometry is presented for mathematically describing the shape of the facial surface. Three-dimensional data for the face are collected by optical surface scanning. The method allows the segmentation of the face into regions of a particular 'surface type', according to the surface curvature. Eight different surface types are produced, all of which have perceptually meaningful interpretations. The correspondence of the surface type regions to the facial features is easily visualized, allowing a qualitative assessment of the face. A quantitative description of the face in terms of the surface type regions can be produced, and the variation of the description between faces is demonstrated. A set of optical surface scans can be registered together and averaged to produce an average male and an average female face. Thus an assessment of how individuals vary from the average can be made, as well as a general statement about the differences between male and female faces. This method will enable an investigation of how reliably faces can be individuated by their surface shape which, if feasible, may be the basis of an automatic system for recognizing faces. It also has applications in physical anthropology, for classification of the face; in facial reconstructive surgery, to quantify the changes in a face altered by reconstructive surgery and growth; and in visual perception, to assess the recognizability of faces. Examples of some of these applications are presented.
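The abstract does not name the eight types, but the standard HK sign classification (Besl and Jain) yields exactly eight surface categories from the mean curvature H and Gaussian curvature K and is a plausible reading; a sketch:

```python
def surface_type(H, K, eps=1e-6):
    """Classify a surface point by the signs of mean curvature H and
    Gaussian curvature K (Besl-Jain HK map; an assumption, not confirmed
    by the abstract). (H = 0, K > 0) cannot occur, leaving eight types."""
    h = -1 if H < -eps else (1 if H > eps else 0)
    k = -1 if K < -eps else (1 if K > eps else 0)
    return {(-1, 1): "peak",  (-1, 0): "ridge",  (-1, -1): "saddle ridge",
            (0, 0): "flat",   (0, -1): "minimal surface",
            (1, 1): "pit",    (1, 0): "valley",  (1, -1): "saddle valley"
            }.get((h, k), "undefined")

print(surface_type(-0.2, 0.03))   # "peak", e.g. the tip of the nose
```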
NASA Astrophysics Data System (ADS)
Boucher, Marc-Antoine; Watts, Nicolas; Gremillet, Frederic; Legare, Philippe; Kadoury, Samuel
2018-02-01
Pathologies like plantar fasciitis, a common soft tissue disorder of the foot, are frequently associated with older age, high BMI and little exercise. As with other pathologies of the foot, the knee or the hip, foot orthoses can help the patient's posture, and recent techniques allow the creation of personalized foot orthoses based on a 3D foot model that are fitted with high accuracy to the foot surface. In order to assess the efficacy of the personalized orthoses on the patient's pose and balance, depth images with reflectance camera filters are acquired to evaluate the posture of the patient before and after the use of the orthoses. Images are analysed by clinicians to assess region asymmetry and posture changes. However, this remains a subjective evaluation, and a quantifiable measurement is required to follow patient progression. In this paper, we present a novel tool to assess and quantify the asymmetry of body regions using a color-based structural similarity metric calculated from paired regions. This provides a quantitative measure to evaluate the effect of the personalized orthoses on the patient. A user-friendly interface allows the user to select an area of the body and automatically generate a symmetry axis, along with a measure of asymmetry capturing reflectance variations of the skin. The tool was validated on 30 patients, demonstrating an 83% agreement rate compared to clinical observations.
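A sketch of the paired-region comparison using SSIM from scikit-image (version 0.19 or later for the channel_axis argument) as the colour structural-similarity measure; the authors' exact metric and region pairing are not specified, so this is an assumption:

```python
from skimage.metrics import structural_similarity

def asymmetry_score(image, axis_col):
    """Compare the region left of a vertical symmetry axis with the
    mirrored region on the right; returns 0 for perfect symmetry.
    `image` is an (H, W, 3) uint8 array, `axis_col` the axis column."""
    w = min(axis_col, image.shape[1] - axis_col)
    left = image[:, axis_col - w:axis_col]
    right_mirrored = image[:, axis_col:axis_col + w][:, ::-1]
    ssim = structural_similarity(left, right_mirrored,
                                 channel_axis=-1, data_range=255)
    return 1.0 - ssim
```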
Sazonovas, A; Japertas, P; Didziapetris, R
2010-01-01
This study presents a new type of acute toxicity (LD(50)) prediction that enables automated assessment of the reliability of predictions (which is synonymous with the assessment of the Model Applicability Domain as defined by the Organization for Economic Cooperation and Development). The analysis involved nearly 75,000 compounds from six animal systems (acute rat toxicity after oral and intraperitoneal administration; acute mouse toxicity after oral, intraperitoneal, intravenous, and subcutaneous administration). Fragmental Partial Least Squares (PLS) with 100 bootstraps yielded baseline predictions that were automatically corrected for non-linear effects in local chemical spaces, a combination called the Global, Adjusted Locally According to Similarity (GALAS) modelling methodology. Each prediction obtained in this manner is provided with a reliability index value that depends on both the compound's similarity to the training set (which accounts for similar trends in LD(50) variations within multiple bootstraps) and the consistency of experimental results with regard to the baseline model in the local chemical environment. The actual performance of the Reliability Index (RI) was proven by its good (and uniform) correlations with the Root Mean Square Error (RMSE) in all validation sets, thus providing a quantitative assessment of the Model Applicability Domain. The obtained models can be used for compound screening in the early stages of drug development and for prioritization for experimental in vitro testing or later in vivo animal acute toxicity studies.
Evaluation of background parenchymal enhancement on breast MRI: a systematic review
Signori, Alessio; Valdora, Francesca; Rossi, Federica; Calabrese, Massimo; Durando, Manuela; Mariscotto, Giovanna; Tagliafico, Alberto
2017-01-01
Objective: To perform a systematic review of the methods used for background parenchymal enhancement (BPE) evaluation on breast MRI. Methods: Studies dealing with BPE assessment on breast MRI were retrieved from major medical libraries independently by four reviewers up to 6 October 2015. The keywords used for database searching were “background parenchymal enhancement”, “parenchymal enhancement”, “MRI” and “breast”. Studies were included if qualitative and/or quantitative methods for BPE assessment were described. Results: Of the 420 studies identified, a total of 52 articles were included in the systematic review. 28 studies performed only a qualitative assessment of BPE, 13 studies performed only a quantitative assessment and 11 studies performed both qualitative and quantitative assessments. A wide heterogeneity was found in the MRI sequences and in the quantitative methods used for BPE assessment. Conclusion: A wide variability exists in the quantitative evaluation of BPE on breast MRI. More studies focused on a reliable and comparable method for quantitative BPE assessment are needed. Advances in knowledge: More studies focused on a quantitative BPE assessment are needed. PMID:27925480
Nimbi, Filippo Maria; Tripodi, Francesca; Simonelli, Chiara; Nobre, Pedro
2018-03-01
The Sexual Modes Questionnaire (SMQ) is a validated and widely used tool to assess the association among negative automatic thoughts, emotions, and sexual response during sexual activity in men and women. The aim was to test the psychometric characteristics of the Italian version of the SMQ, focusing on the Automatic Thoughts subscale (SMQ-AT). After linguistic translation, the psychometric properties (internal consistency, construct validity, and discriminant validity) were evaluated. 1,051 participants (425 men and 626 women; 776 healthy and 275 in clinical groups complaining of sexual problems) took part in the present study. 2 confirmatory factor analyses were conducted to test the fit of the original factor structures of the SMQ versions. In addition, 2 principal component analyses were performed to highlight 2 new factorial structures that were further validated with confirmatory factor analyses. Cronbach α and composite reliability were used as internal consistency measures, and comparisons between clinical and control groups were performed to test the discriminant validity of the male and female versions. The associations with emotions and sexual functioning measures are also reported. Principal component analyses identified 5 factors in the male version: erection concerns thoughts, lack of erotic thoughts, age- and body-related thoughts, negative thoughts toward sex, and worries about partner's evaluation and failure anticipation thoughts. In the female version 6 factors were found: sexual abuse thoughts, lack of erotic thoughts, low self-body image thoughts, failure and disengagement thoughts, sexual passivity and control, and partner's lack of affection. Confirmatory factor analysis supported the adequacy of the factor structure for men and women. Moreover, the SMQ showed a strong association with emotional response and sexual functioning, differentiating between clinical and control groups. This measure is useful to evaluate patients and design interventions focused on negative automatic thoughts during sexual activity and to develop multicultural research. This study reports on the translation and validation of the Italian version of a clinically useful and widely used measure assessing automatic thoughts during sexual activity. Limits regarding the sampling technique and the use of the Automatic Thoughts subscale are discussed in the article. The present findings support the validity and internal consistency of the Italian version of the SMQ-AT and allow the assessment of negative automatic thoughts during sexual activity for clinical and research purposes. Nimbi FM, Tripodi F, Simonelli C, Nobre P. Sexual Modes Questionnaire (SMQ): Translation and Psychometric Properties of the Italian Version of the Automatic Thought Scale. J Sex Med 2018;15:396-409. Copyright © 2018 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.
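Of the psychometric quantities mentioned, Cronbach's α is the most mechanical to compute; a minimal sketch of the standard formula (illustrative only; the study's exact computation pipeline is not described):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var / total_var)
```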
ERIC Educational Resources Information Center
Fontan, Lionel; Ferrané, Isabelle; Farinas, Jérôme; Pinquier, Julien; Tardieu, Julien; Magnen, Cynthia; Gaillard, Pascal; Aumont, Xavier; Füllgrabe, Christian
2017-01-01
Purpose: The purpose of this article is to assess speech processing for listeners with simulated age-related hearing loss (ARHL) and to investigate whether the observed performance can be replicated using an automatic speech recognition (ASR) system. The long-term goal of this research is to develop a system that will assist…
Assessing Children's Home Language Environments Using Automatic Speech Recognition Technology
ERIC Educational Resources Information Center
Greenwood, Charles R.; Thiemann-Bourque, Kathy; Walker, Dale; Buzhardt, Jay; Gilkerson, Jill
2011-01-01
The purpose of this research was to replicate and extend some of the findings of Hart and Risley using automatic speech processing instead of human transcription of language samples. The long-term goal of this work is to make the current approach to speech processing possible by researchers and clinicians working on a daily basis with families and…
ERIC Educational Resources Information Center
Reynolds, Michael; Besner, Derek
2006-01-01
The present experiments tested the claim that phonological recoding occurs "automatically" by assessing whether it uses central attention in the context of the psychological refractory period paradigm. Task 1 was a tone discrimination task and Task 2 was reading aloud. The joint effects of long-lag word repetition priming and stimulus onset…
ERIC Educational Resources Information Center
Shih, Ching-Hsiang
2011-01-01
This study combines multi-mice technology (people with disabilities can use standard mice, instead of specialized alternative computer input devices, to achieve complete mouse operation) with an assistive pointing function (i.e. cursor-capturing, which enables the user to move the cursor to the target center automatically), to assess whether two…
From Memory to Attitude: The Neurocognitive Process beyond Euthanasia Acceptance.
Enke, Martin; Meyer, Patric; Flor, Herta
2016-01-01
Numerous questionnaire studies on attitudes towards euthanasia produced conflicting results, precluding any general conclusion. This might be due to the fact that human behavior can be influenced by automatically triggered attitudes, which represent ingrained associations in memory and cannot be assessed by standard questionnaires, but require indirect measures such as reaction times (RT) or electroencephalographic (EEG) recording. Event-related potentials (ERPs) of the EEG and RT during an affective priming task were assessed to investigate the impact of automatically triggered attitudes and were compared to results of an explicit questionnaire. Explicit attitudes were ambivalent. Reaction time data showed neither positive nor negative associations towards euthanasia. ERP analyses revealed an N400 priming effect with lower mean amplitudes when euthanasia was associated with negative words. The euthanasia-related modulation of the N400 component shows an integration of the euthanasia object in negatively valenced associative neural networks. The integration of all measures suggests a bottom-up process of attitude activation, where automatically triggered negative euthanasia-relevant associations can become more ambiguous with increasing time in order to regulate the bias arising from automatic processes. These data suggest that implicit measures may make an important contribution to the understanding of euthanasia-related attitudes.
Fully automated segmentation of the pectoralis muscle boundary in breast MR images
NASA Astrophysics Data System (ADS)
Wang, Lei; Filippatos, Konstantinos; Friman, Ola; Hahn, Horst K.
2011-03-01
Dynamic contrast-enhanced MRI (DCE-MRI) of the breast is emerging as a novel tool for early tumor detection and diagnosis. The segmentation of the structures in breast DCE-MR images, such as the nipple, the breast-air boundary and the pectoralis muscle, serves as a fundamental step for further computer-assisted diagnosis (CAD) applications, e.g. breast density analysis. Moreover, previous clinical studies show that the distance between posterior breast lesions and the pectoralis muscle can be used to assess the extent of the disease. To enable automatic quantification of the distance from a breast tumor to the pectoralis muscle, a precise delineation of the pectoralis muscle boundary is required. We present a fully automatic segmentation method based on the second derivative information represented by the Hessian matrix. The voxels proximal to the pectoralis muscle boundary exhibit roughly the same eigenvalue patterns as a sheet-like object in 3D, which can be enhanced and segmented by a Hessian-based sheetness filter. A vector-based connected component filter is then utilized such that only the pectoralis muscle is preserved, by extracting the largest connected component. The proposed method was evaluated quantitatively on a test data set of 30 breast MR images by measuring the average distances between the segmented boundary and the annotated surfaces in two ground truth sets; the mean distance was 1.434 mm with a standard deviation of 0.4661 mm, which shows great potential for integration of the approach into the clinical routine.
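A sketch of a Hessian-based sheetness filter in the spirit the abstract describes; the specific measure, scale and parameters used by the authors are not given, so the Descoteaux-style formulation below is one common choice and an assumption:

```python
import numpy as np
from scipy import ndimage

def sheetness(volume, sigma=2.0, alpha=0.5, beta=0.5, c=100.0):
    """Voxel-wise sheetness from Hessian eigenvalues |l1| <= |l2| <= |l3|:
    high when one eigenvalue dominates (sheet-like), as at the pectoralis
    boundary. Parameters alpha/beta/c are tuning assumptions."""
    H = np.empty(volume.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]; order[i] += 1; order[j] += 1
            H[..., i, j] = ndimage.gaussian_filter(volume, sigma, order=order)
    ev = np.linalg.eigvalsh(H)
    ev = np.take_along_axis(ev, np.argsort(np.abs(ev), axis=-1), axis=-1)
    l1, l2, l3 = ev[..., 0], ev[..., 1], ev[..., 2]
    eps = 1e-10
    r_sheet = np.abs(l2) / (np.abs(l3) + eps)            # sheet vs tube/blob
    r_blob = np.abs(l1) / (np.sqrt(np.abs(l2 * l3)) + eps)
    s = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)             # structure strength
    score = (np.exp(-r_sheet ** 2 / (2 * alpha ** 2))
             * np.exp(-r_blob ** 2 / (2 * beta ** 2))
             * (1.0 - np.exp(-s ** 2 / (2 * c ** 2))))
    return np.where(l3 < 0, score, 0.0)                  # bright sheet on dark
```

Thresholding this response and keeping the largest connected component would mirror the paper's final connected-component step.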
A workflow for the automatic segmentation of organelles in electron microscopy image stacks
Perez, Alex J.; Seyedhosseini, Mojtaba; Deerinck, Thomas J.; Bushong, Eric A.; Panda, Satchidananda; Tasdizen, Tolga; Ellisman, Mark H.
2014-01-01
Electron microscopy (EM) facilitates analysis of the form, distribution, and functional status of key organelle systems in various pathological processes, including those associated with neurodegenerative disease. Such EM data often provide important new insights into the underlying disease mechanisms. The development of more accurate and efficient methods to quantify changes in subcellular microanatomy has already proven key to understanding the pathogenesis of Parkinson's and Alzheimer's diseases, as well as glaucoma. While our ability to acquire large volumes of 3D EM data is progressing rapidly, more advanced analysis tools are needed to assist in measuring precise three-dimensional morphologies of organelles within data sets that can include hundreds to thousands of whole cells. Although new imaging instrument throughputs can exceed teravoxels of data per day, image segmentation and analysis remain significant bottlenecks to achieving quantitative descriptions of whole cell structural organellomes. Here, we present a novel method for the automatic segmentation of organelles in 3D EM image stacks. Segmentations are generated using only 2D image information, making the method suitable for anisotropic imaging techniques such as serial block-face scanning electron microscopy (SBEM). Additionally, no assumptions about 3D organelle morphology are made, ensuring the method can be easily expanded to any number of structurally and functionally diverse organelles. Following the presentation of our algorithm, we validate its performance by assessing the segmentation accuracy of different organelle targets in an example SBEM dataset and demonstrate that it can be efficiently parallelized on supercomputing resources, resulting in a dramatic reduction in runtime. PMID:25426032
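The slice-independent design is what makes the workflow trivially parallelizable across compute nodes; a toy sketch of the structure (the paper trains a supervised 2D classifier per organelle, for which plain Otsu thresholding stands in here as a placeholder):

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label

def segment_slice(img2d):
    """Stand-in per-slice segmenter: organelles assumed darker than
    background in EM. The actual workflow uses a trained 2D classifier."""
    return img2d < threshold_otsu(img2d)

def segment_stack(stack):
    """Each 2D slice is processed independently (hence easily distributed
    across cluster nodes), then components are linked in 3D by labelling
    the stacked masks."""
    masks = [segment_slice(s) for s in stack]
    return label(np.stack(masks), connectivity=1)
```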
Targeting of deep-brain structures in nonhuman primates using MR and CT Images
NASA Astrophysics Data System (ADS)
Chen, Antong; Hines, Catherine; Dogdas, Belma; Bone, Ashleigh; Lodge, Kenneth; O'Malley, Stacey; Connolly, Brett; Winkelmann, Christopher T.; Bagchi, Ansuman; Lubbers, Laura S.; Uslaner, Jason M.; Johnson, Colena; Renger, John; Zariwala, Hatim A.
2015-03-01
In vivo gene delivery to the central nervous system of nonhuman primates (NHP) is an important approach for gene therapy and for developing animal models of human disease. To achieve a more accurate delivery of genetic probes, precise stereotactic targeting of brain structures is required. However, even with assistance from multi-modality 3D imaging techniques (e.g. MR and CT), precise targeting is often challenging because deep brain structures are difficult to identify, e.g. the striatum, which consists of multiple substructures, and the nucleus basalis of Meynert (NBM), which often lack clear boundaries or supporting anatomical landmarks. Here we demonstrate a 3D-image-based stereotactic approach for reproducible intracranial targeting of the bilateral NBM and striatum of rhesus macaques, and discuss the feasibility of an atlas-based automatic approach. Delineated originally on a high-resolution 3D histology-MR atlas set, the NBM and the striatum could be located on the MR image of a rhesus subject through affine and nonrigid registrations. The atlas-based targeting of the NBM was compared with targeting conducted manually by an experienced neuroscientist. Based on the targeting, the trajectories and entry points for delivering the genetic probes to the targets could be established on the CT images of the subject after rigid registration. The accuracy of the targeting was assessed quantitatively by comparing the NBM locations obtained automatically and manually, and finally demonstrated qualitatively via post mortem analysis of slices that had been labelled via Evans Blue infusion and immunohistochemistry.
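A minimal sketch of atlas-to-subject label propagation using SimpleITK, covering only the affine stage (the study adds a nonrigid refinement, omitted here); the image variable names are placeholders:

```python
import SimpleITK as sitk

def propagate_atlas_labels(subject_mr, atlas_mr, atlas_labels):
    """Affinely register the atlas MR to the subject MR, then resample the
    delineated labels (e.g. NBM, striatum) into subject space with
    nearest-neighbour interpolation to keep label values intact."""
    fixed = sitk.Cast(subject_mr, sitk.sitkFloat32)
    moving = sitk.Cast(atlas_mr, sitk.sitkFloat32)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
    reg.SetInterpolator(sitk.sitkLinear)
    transform = reg.Execute(fixed, moving)
    return sitk.Resample(atlas_labels, subject_mr, transform,
                         sitk.sitkNearestNeighbor, 0)
```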
SU-E-I-24: Method for CT Automatic Exposure Control Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gracia, M; Olasolo, J; Martin, M
Purpose: Design of a phantom and a simple method for automatic exposure control (AEC) verification in CT. This verification is included in the Spanish computed tomography (CT) Quality Assurance Protocol. Methods: The phantom is assembled from the head and body phantoms used for CTDI measurement and PMMA plates (35×35 cm2) of 10 cm thickness. Thereby, three different thicknesses along the longitudinal axis are obtained, which permit evaluation of the longitudinal AEC performance. In addition, the asymmetry of the PMMA layers helps to assess angular and 3D AEC operation. The recent acquisition in our hospital (August 2014) of a Nomex electrometer (PTW), together with a 10 cm pencil ionization chamber, made it possible to record dose rate as a function of time. Measurements with this chamber fixed at 0° and 90° on the gantry were made on five multidetector CTs from the principal manufacturers. Results: Individual analysis of the measurements shows dose rate variation as a function of phantom thickness. The comparative analysis shows that the dose rate is kept constant in the head and neck phantom, while the PMMA phantom exhibits an abrupt variation between the two measurement angles, with greater values at 90°, since the phantom thickness in that direction is 3.5 times larger than in the perpendicular direction. Conclusion: The proposed method is simple, quick and reproducible. The results allow a qualitative evaluation of the AEC and are consistent with the expected behavior. A line of future development is to quantitatively study the intensity modulation and image quality parameters, and possibly to compare different manufacturers.
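Once the dose-rate trace is exported, the AEC check reduces to comparing mean dose rate per phantom section; a sketch under stated assumptions (the section boundary times are a hypothetical input derived from the scan geometry, not part of the described method):

```python
import numpy as np

def section_mean_rates(t_s, dose_rate, boundaries_s):
    """Mean dose rate in each phantom section, split at the times the
    beam crosses a thickness change along the longitudinal axis."""
    edges = [t_s[0], *boundaries_s, t_s[-1] + 1e-9]
    return [dose_rate[(t_s >= a) & (t_s < b)].mean()
            for a, b in zip(edges[:-1], edges[1:])]

def modulation_ratio(means):
    """max/min section dose rate: ~1 if the AEC holds output constant,
    clearly >1 where the AEC modulates with phantom thickness."""
    return max(means) / min(means)
```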
Schlaeger, Sarah; Freitag, Friedemann; Klupp, Elisabeth; Dieckmeyer, Michael; Weidlich, Dominik; Inhuber, Stephanie; Deschauer, Marcus; Schoser, Benedikt; Bublitz, Sarah; Montagnese, Federica; Zimmer, Claus; Rummeny, Ernst J; Karampinos, Dimitrios C; Kirschke, Jan S; Baum, Thomas
2018-01-01
Magnetic resonance imaging (MRI) can non-invasively assess muscle anatomy, exercise effects and pathologies with different underlying causes, such as neuromuscular diseases (NMD). Quantitative MRI, including fat fraction mapping using chemical shift encoding-based water-fat MRI, has emerged for reliable determination of muscle volume and fat composition. The analysis of water-fat images requires segmentation of the different muscles, which in the past has mainly been performed manually and is a very time-consuming process, currently limiting clinical applicability. Automation of the segmentation process would enable a more time-efficient analysis. In the present work, the manually segmented thigh magnetic resonance imaging database MyoSegmenTUM is presented. It hosts water-fat MR images of both thighs of 15 healthy subjects and 4 patients with NMD, with a voxel size of 3.2×2×4 mm3, together with the corresponding segmentation masks for four functional muscle groups: quadriceps femoris, sartorius, gracilis and hamstrings. The database is freely accessible online at https://osf.io/svwa7/?view_only=c2c980c17b3a40fca35d088a3cdd83e2. The database is mainly meant as a ground truth that can be used as a training and test dataset for automatic muscle segmentation algorithms. The segmentation allows extraction of muscle cross-sectional area (CSA) and volume. Proton density fat fraction (PDFF) values of the defined muscle groups extracted from the corresponding images, together with quadriceps muscle strength measurements and neurological muscle strength ratings, can be used for benchmarking purposes.
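A sketch of the benchmarking quantities the database supports, assuming a (rows, cols, slices) mask with the stated 3.2×2×4 mm voxel (treating the 3.2×2 mm dimensions as in-plane is an assumption about axis order):

```python
import numpy as np

VOXEL_DIMS_MM = (3.2, 2.0, 4.0)   # stated voxel size; axis order assumed

def muscle_metrics(mask, pdff_map, slice_axis=2):
    """Volume, maximal cross-sectional area (CSA) and mean PDFF for one
    muscle group from its binary mask and the co-registered PDFF map."""
    voxel_mm3 = np.prod(VOXEL_DIMS_MM)
    inplane_mm2 = voxel_mm3 / VOXEL_DIMS_MM[slice_axis]
    volume_cm3 = mask.sum() * voxel_mm3 / 1000.0
    per_slice = mask.sum(axis=tuple(i for i in range(3) if i != slice_axis))
    csa_max_cm2 = per_slice.max() * inplane_mm2 / 100.0
    mean_pdff = pdff_map[mask.astype(bool)].mean()
    return volume_cm3, csa_max_cm2, mean_pdff
```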
Investigating the Relationship between Stable Personality Characteristics and Automatic Imitation
Butler, Emily E.; Ward, Robert; Ramsey, Richard
2015-01-01
Automatic imitation is a cornerstone of nonverbal communication that fosters rapport between interaction partners. Recent research has suggested that stable dimensions of personality are antecedents to automatic imitation, but the empirical evidence linking imitation with personality traits is restricted to a few studies with modest sample sizes. Additionally, atypical imitation has been documented in autism spectrum disorders and schizophrenia, but the mechanisms underpinning these behavioural profiles remain unclear. Using a larger sample than prior studies (N=243), the current study tested whether performance on a computer-based automatic imitation task could be predicted by personality traits associated with social behaviour (extraversion and agreeableness) and with disorders of social cognition (autistic-like and schizotypal traits). Further personality traits (narcissism and empathy) were assessed in a subsample of participants (N=57). Multiple regression analyses showed that personality measures did not predict automatic imitation. In addition, using a similar analytical approach to prior studies, no differences in imitation performance emerged when only the highest and lowest 20 participants on each trait variable were compared. These data weaken support for the view that stable personality traits are antecedents to automatic imitation and that neural mechanisms thought to support automatic imitation, such as the mirror neuron system, are dysfunctional in autism spectrum disorders or schizophrenia. In sum, the impact that personality variables have on automatic imitation is less universal than initial reports suggest. PMID:26079137
Image processing and machine learning in the morphological analysis of blood cells.
Rodellar, J; Alférez, S; Acevedo, A; Molina, A; Merino, A
2018-05-01
This review focuses on how image processing and machine learning can be useful for the morphological characterization and automatic recognition of cell images captured from peripheral blood smears. The basics of the 3 core elements (segmentation, quantitative features, and classification) are outlined, and recent literature is discussed. Although red blood cells are a significant part of this context, this study focuses on malignant lymphoid cells and blast cells. There is no doubt that these technologies may help the cytologist to perform efficient, objective, and fast morphological analysis of blood cells. They may also help in the interpretation of some morphological features and may serve as learning and survey tools. Although research is still needed, it is important to define screening strategies to exploit the potential of image-based automatic recognition systems integrated in the daily routine of laboratories along with other analysis methodologies. © 2018 John Wiley & Sons Ltd.
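The three core elements map onto a very small pipeline; a toy sketch in which threshold segmentation and generic shape features stand in for the dedicated methods the review surveys, and training labels are assumed to exist:

```python
import numpy as np
from skimage import measure
from skimage.filters import threshold_otsu
from sklearn.ensemble import RandomForestClassifier

def segment_cells(gray):
    """Element 1, segmentation: cells assumed darker than background."""
    return measure.label(gray < threshold_otsu(gray))

def cell_features(labels):
    """Element 2, quantitative features: simple geometric descriptors per
    cell (real systems add colour and texture features)."""
    return [[p.area, p.perimeter, p.eccentricity,
             4 * np.pi * p.area / max(p.perimeter, 1e-6) ** 2]  # circularity
            for p in measure.regionprops(labels)]

# Element 3, classification: any supervised model over the feature rows,
# e.g. clf.fit(X_train, y_train) with labels such as 'lymphocyte'/'blast'.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
```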
Automatic cortical segmentation in the developing brain.
Xue, Hui; Srinivasan, Latha; Jiang, Shuzhou; Rutherford, Mary; Edwards, A David; Rueckert, Daniel; Hajnal, Jo V
2007-01-01
The segmentation of neonatal cortex from magnetic resonance (MR) images is much more challenging than the segmentation of cortex in adults. The main reason is the inverted contrast between grey matter (GM) and white matter (WM) that occurs when myelination is incomplete, which causes mislabeled partial volume voxels, especially at the interface between GM and cerebrospinal fluid (CSF). We propose a fully automatic cortical segmentation algorithm that detects these mislabeled voxels using a knowledge-based approach and corrects errors by adjusting local priors to favor the correct classification. Our results show that the proposed algorithm corrects errors in the segmentation of both GM and WM compared to the classic EM scheme. The segmentation algorithm has been tested on 25 neonates with gestational ages ranging from approximately 27 to 45 weeks. Quantitative comparison to manual segmentation demonstrates good performance of the method (mean Dice similarity: 0.758 ± 0.037 for GM and 0.794 ± 0.078 for WM).
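The reported Dice similarity is straightforward to reproduce; a minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentations
    (1.0 = perfect overlap), as used for the GM and WM comparisons."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0
```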