DOE Office of Scientific and Technical Information (OSTI.GOV)
Oppel, Fred, III; Rigdon, J. Brian
2014-09-08
A collection of general Umbra modules that are reused by other Umbra libraries. These capabilities include line segments, file utilities, color utilities, string utilities (for std::string), list utilities (for std::vector), bounding box intersections, range limiters, simple filters, cubic root solvers, and a few other utilities.
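The flavor of such utilities can be shown in a few lines. The Umbra library itself is C++; the function names below are illustrative sketches of two of the listed capabilities (bounding box intersection and range limiting), not Umbra's actual API.

```python
def boxes_intersect(a_min, a_max, b_min, b_max):
    """Axis-aligned bounding-box intersection test, any dimension.

    Each argument is a sequence of per-axis coordinates; two boxes
    intersect iff their intervals overlap on every axis."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for lo1, hi1, lo2, hi2 in zip(a_min, a_max, b_min, b_max))


def clamp(x, lo, hi):
    """Range limiter: restrict x to the closed interval [lo, hi]."""
    return max(lo, min(hi, x))
```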
Integrating segmentation methods from the Insight Toolkit into a visualization application.
Martin, Ken; Ibáñez, Luis; Avila, Lisa; Barré, Sébastien; Kaspersen, Jon H
2005-12-01
The Insight Toolkit (ITK) initiative from the National Library of Medicine has provided a suite of state-of-the-art segmentation and registration algorithms ideally suited to volume visualization and analysis. A volume visualization application that effectively utilizes these algorithms provides many benefits: it allows access to ITK functionality for non-programmers, it creates a vehicle for sharing and comparing segmentation techniques, and it serves as a visual debugger for algorithm developers. This paper describes the integration of image processing functionalities provided by ITK into VolView, a visualization application for high performance volume rendering. A free version of this visualization application is publicly available and is included in the online version of this paper. The process for developing ITK plugins for VolView according to the publicly available API is described in detail, and an application of ITK VolView plugins to the segmentation of Abdominal Aortic Aneurysms (AAAs) is presented. The source code of the ITK plugins is also publicly available and is included in the online version.
[Bioimpedometry and its utilization in dialysis therapy].
Lopot, František
2016-01-01
Measurement of living tissue impedance - bioimpedometry - started to be used in medicine some 50 years ago, at first exclusively for estimation of extracellular and intracellular compartment volumes. Its simplest single-frequency (50 kHz) version works directly with the measured impedance vector. Technically more sophisticated versions convert the measured impedance into volumes of the different body fluid compartments and also calculate principal markers of nutritional status (lean body mass, adipose tissue mass). The latest version, developed specifically for application in dialysis patients, includes body composition modelling and provides even an absolute value of overhydration (excess fluid). The use of bioimpedance for more precise estimation of residual glomerular filtration is still in an experimental phase. Segmental bioimpedance measurement, which should enable separate assessment of the hydration status of the trunk segment and of the ultrafiltration capacity of the peritoneum in peritoneal dialysis patients, is also not yet standardized. Key words: assessment - bioimpedance - excess fluid - fluid status - glomerular filtration - haemodialysis - nutritional status - peritoneal dialysis.
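The single-frequency impedance vector mentioned above is conventionally expressed as a magnitude and a phase angle computed from the resistive and reactive components. A minimal sketch of that standard decomposition (textbook relations, not code from the article):

```python
import math

def impedance_vector(resistance_ohm, reactance_ohm):
    """Decompose a single-frequency (e.g. 50 kHz) bioimpedance
    measurement into magnitude |Z| = sqrt(R^2 + Xc^2) and phase angle
    phi = atan(Xc / R) in degrees, the two quantities single-frequency
    devices report directly."""
    magnitude = math.hypot(resistance_ohm, reactance_ohm)
    phase_deg = math.degrees(math.atan2(reactance_ohm, resistance_ohm))
    return magnitude, phase_deg
```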
A segmentation editing framework based on shape change statistics
NASA Astrophysics Data System (ADS)
Mostapha, Mahmoud; Vicory, Jared; Styner, Martin; Pizer, Stephen
2017-02-01
Segmentation is a key task in medical image analysis because its accuracy significantly affects successive steps. Automatic segmentation methods often produce inadequate segmentations, which require the user to manually edit the produced segmentation slice by slice. Because editing is time-consuming, an editing tool that enables the user to produce accurate segmentations by drawing only a sparse set of contours is needed. This paper describes such a framework as applied to a single object. Constrained by the additional information provided by the manually segmented contours, the proposed framework utilizes object shape statistics to transform the failed automatic segmentation into a more accurate version. Instead of modeling the object shape, the proposed framework utilizes shape change statistics that were generated to capture the object deformation from the failed automatic segmentation to its corresponding correct segmentation. An optimization procedure was used to minimize an energy function that consists of two terms, an external contour match term and an internal shape change regularity term. The high accuracy of the proposed segmentation editing approach was confirmed by testing it on a simulated data set based on 10 in-vivo infant magnetic resonance brain data sets using four similarity metrics. Segmentation results indicated that our method can provide efficient and adequately accurate segmentations (Dice segmentation accuracy increase of 10%), with very sparse contours (only 10%), which is promising in greatly decreasing the work expected from the user.
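The Dice metric used to report accuracy above is a standard overlap measure between two binary segmentations; an illustrative implementation (not the authors' code):

```python
def dice_coefficient(seg_a, seg_b):
    """Dice overlap between two binary segmentations given as
    collections of voxel coordinates: 2|A ∩ B| / (|A| + |B|).
    Two empty segmentations are defined to agree perfectly."""
    a, b = set(seg_a), set(seg_b)
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))
```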
DOT National Transportation Integrated Search
2002-02-26
This document, the Introduction to the Enhanced Logistics Intratheater Support Tool (ELIST) Mission Application and its Segments, satisfies the following objectives: It identifies the mission application, known in brief as ELIST, and all seven ...
Tălu, Stefan
2013-07-01
The purpose of this paper is to determine a quantitative assessment of the human retinal vascular network architecture in patients with diabetic macular edema (DME). Multifractal geometry and lacunarity parameters are used in this study. A set of 10 segmented and skeletonized human retinal images, corresponding to both normal (five images) and DME (five images) states of the retina, from the DRIVE database was analyzed using the ImageJ software. Statistical analyses were performed using Microsoft Office Excel 2003 and GraphPad InStat software. The human retinal vascular network architecture has a multifractal geometry. The average of the generalized dimensions (Dq) for q = 0, 1, 2 of the normal images (segmented versions) is similar to that of the DME cases (segmented versions). The average of the generalized dimensions (Dq) for q = 0, 1 of the normal images (skeletonized versions) is slightly greater than that of the DME cases (skeletonized versions). However, the average of D2 for the normal images (skeletonized versions) is similar to that of the DME images. The average of the lacunarity parameter, Λ, for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values for the DME images (segmented and skeletonized versions). Multifractal and lacunarity analysis provides a non-invasive predictive complementary tool for an early diagnosis of patients with DME.
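The lacunarity parameter Λ used above is commonly computed with a gliding-box method; a minimal sketch of that standard estimator (illustrative; the study itself used ImageJ rather than custom code):

```python
def lacunarity(image, box):
    """Gliding-box lacunarity Λ(r) = <M²> / <M>² of a binary image
    (2D list of 0/1) for box size r, where M is the mass (pixel sum)
    inside each box position. Assumes the image contains at least one
    foreground pixel so the mean mass is nonzero."""
    rows, cols = len(image), len(image[0])
    masses = [sum(image[i + di][j + dj]
                  for di in range(box) for dj in range(box))
              for i in range(rows - box + 1)
              for j in range(cols - box + 1)]
    mean = sum(masses) / len(masses)
    mean_sq = sum(m * m for m in masses) / len(masses)
    return mean_sq / (mean * mean)
```

A homogeneous pattern gives Λ = 1; larger values indicate gappier, more heterogeneous patterns.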
Holló, Gábor; Shu-Wei, Hsu; Naghizadeh, Farzaneh
2016-06-01
To compare the current (6.3) and a novel software version (6.12) of the RTVue-100 optical coherence tomograph (RTVue-OCT) for ganglion cell complex (GCC) and retinal nerve fiber layer thickness (RNFLT) image segmentation and detection of glaucoma in high myopia. RNFLT and GCC scans were acquired with software version 6.3 of the RTVue-OCT on 51 highly myopic eyes (spherical refractive error ≤-6.0 D) of 51 patients, and were analyzed with both software versions. Twenty-two eyes were nonglaucomatous, 13 were ocular hypertensive, and 16 eyes had glaucoma. No difference was seen for any RNFLT or average GCC parameter between the software versions (paired t test, P≥0.084). Global loss volume was significantly lower (more normal) with version 6.12 than with version 6.3 (Wilcoxon signed-rank test, P<0.001). The percentage agreement (κ) between the clinical (normal and ocular hypertensive vs. glaucoma) and the software-provided classifications (normal and borderline vs. outside normal limits) was 0.3219 and 0.4442 for average RNFLT, and 0.2926 and 0.4977 for average GCC, with versions 6.3 and 6.12, respectively (McNemar symmetry test, P≥0.289). No difference was found between the software versions in average RNFLT and GCC classification (McNemar symmetry test, P≥0.727) or in the number of eyes with at least 1 segmentation error (P≥0.109). Although GCC segmentation was improved with software version 6.12 compared with the current version in highly myopic eyes, this did not result in a significant change of the average RNFLT and GCC values, and did not significantly improve the software-provided classification for glaucoma.
Characterisation of human non-proliferative diabetic retinopathy using the fractal analysis
Ţălu, Ştefan; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina
2015-01-01
AIM To investigate and quantify changes in the branching patterns of the retinal vascular network in diabetes using the fractal analysis method. METHODS This was a clinic-based prospective study of 172 participants managed at the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and December 2013. A set of 172 segmented and skeletonized human retinal images, corresponding to both normal (24 images) and pathological (148 images) states of the retina, was examined. An automatic unsupervised method for retinal vessel segmentation was applied before fractal analysis. The fractal analyses of the retinal digital images were performed using the fractal analysis software ImageJ. Statistical analyses were performed for these groups using Microsoft Office Excel 2003 and GraphPad InStat software. RESULTS It was found that subtle changes in the vascular network geometry of the human retina are influenced by diabetic retinopathy (DR) and can be estimated using fractal geometry. The average of the fractal dimensions D for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values for mild non-proliferative DR (NPDR) images (segmented and skeletonized versions). The average of the fractal dimensions D for the normal images (segmented and skeletonized versions) is higher than the corresponding values for moderate NPDR images (segmented and skeletonized versions). The lowest values were found for severe NPDR images (segmented and skeletonized versions). CONCLUSION The fractal analysis of fundus photographs may be used for a more complete understanding of the early and basic pathophysiological mechanisms of diabetes. The architecture of the retinal microvasculature in diabetes can be quantitatively characterized by means of the fractal dimension.
Microvascular abnormalities on retinal imaging may elucidate early mechanistic pathways for microvascular complications and distinguish patients with DR from healthy individuals. PMID:26309878
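The fractal dimension D reported above is typically estimated by box counting on the skeletonized vessel pattern; a minimal sketch of that standard estimator (illustrative; the study itself used ImageJ):

```python
import math

def box_counting_dimension(points, sizes):
    """Estimate the fractal dimension D of a binary pattern, given as a
    set of (x, y) pixel coordinates, as the least-squares slope of
    log N(s) versus log(1/s), where N(s) is the number of boxes of side
    s that contain at least one foreground pixel."""
    xs, ys = [], []
    for s in sizes:
        occupied = {(x // s, y // s) for x, y in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(occupied)))
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope
```

A straight line yields D ≈ 1; space-filling vascular trees fall between 1 and 2.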
Incorporating Edge Information into Best Merge Region-Growing Segmentation
NASA Technical Reports Server (NTRS)
Tilton, James C.; Pasolli, Edoardo
2014-01-01
We have previously developed a best merge region-growing approach that integrates nonadjacent region object aggregation with the neighboring region merge process usually employed in region-growing segmentation approaches. This approach has been named HSeg, because it provides a hierarchical set of image segmentation results. Up to this point, HSeg considered only global region feature information in the region-growing decision process. We present here three new versions of HSeg that incorporate local edge information into the region-growing decision process at different levels of rigor. We then compare the effectiveness and processing times of these new versions of HSeg with each other and with the original version of HSeg.
Fast Segmentation From Blurred Data in 3D Fluorescence Microscopy.
Storath, Martin; Rickert, Dennis; Unser, Michael; Weinmann, Andreas
2017-10-01
We develop a fast algorithm for segmenting 3D images from linear measurements based on the Potts model (or piecewise constant Mumford-Shah model). To that end, we first derive suitable space discretizations of the 3D Potts model, which are capable of dealing with 3D images defined on non-cubic grids. Our discretization allows us to utilize a specific splitting approach, which results in decoupled subproblems of moderate size. The crucial point in the 3D setup is that the number of independent subproblems is so large that we can reasonably exploit the parallel processing capabilities of graphics processing units (GPUs). Our GPU implementation is up to 18 times faster than the sequential CPU version, which makes it possible to process even large volumes in acceptable runtimes. As a further contribution, we extend the algorithm in order to deal with non-negativity constraints. We demonstrate the efficiency of our method for combined image deconvolution and segmentation on simulated data and on real 3D wide-field fluorescence microscopy data.
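The decoupled subproblems produced by such splittings are essentially one-dimensional Potts problems, which admit an exact dynamic-programming solution. A minimal 1D sketch of that classical solver (illustrative only, not the authors' GPU implementation):

```python
def potts_1d(f, gamma):
    """Exact O(n^2) dynamic program for the 1D Potts model:
    minimize gamma * (number of jumps) + sum_i (u_i - f_i)^2
    over piecewise-constant signals u. Returns the minimizer."""
    n = len(f)
    B = [0.0] * (n + 1)      # B[r] = optimal energy for prefix f[:r]
    jump = [0] * (n + 1)     # 1-based left endpoint of the last segment
    B[0] = -gamma            # so a single segment costs just its error
    for r in range(1, n + 1):
        best, arg = float("inf"), r
        s = s2 = 0.0         # running sum / sum of squares of f[l-1:r]
        for l in range(r, 0, -1):
            s += f[l - 1]
            s2 += f[l - 1] ** 2
            m = r - l + 1
            err = s2 - s * s / m   # SSE of the best constant on f[l-1:r]
            cand = B[l - 1] + gamma + err
            if cand < best:
                best, arg = cand, l
        B[r], jump[r] = best, arg
    # Backtrack the segment boundaries and fill each with its mean.
    u = [0.0] * n
    r = n
    while r > 0:
        l = jump[r]
        mean = sum(f[l - 1:r]) / (r - l + 1)
        for i in range(l - 1, r):
            u[i] = mean
        r = l - 1
    return u
```

Small gamma preserves the jumps of the data; large gamma merges everything into a single constant segment.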
NASA Technical Reports Server (NTRS)
1991-01-01
The Reusable Reentry Satellite (RRS) System is composed of the payload segment (PS), vehicle segment (VS), and mission support (MS) segment. This specification establishes the performance, design, development, and test requirements for the RRS Rodent Module (RM).
Manual for Getdata Version 3.1: a FORTRAN Utility Program for Time History Data
NASA Technical Reports Server (NTRS)
Maine, Richard E.
1987-01-01
This report documents version 3.1 of the GetData computer program. GetData is a utility program for manipulating files of time history data, i.e., data giving the values of parameters as functions of time. The most fundamental capability of GetData is extracting selected signals and time segments from an input file and writing the selected data to an output file. Other capabilities include converting file formats, merging data from several input files, time skewing, interpolating to common output times, and generating calculated output signals as functions of the input signals. This report also documents the interface standards for the subroutines used by GetData to read and write the time history files. All interface to the data files is through these subroutines, keeping the main body of GetData independent of the precise details of the file formats. Different file formats can be supported by changes restricted to these subroutines. Other computer programs conforming to the interface standards can call the same subroutines to read and write files in compatible formats.
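GetData's most fundamental operations, extracting a time segment from a signal and interpolating to common output times, can be sketched as follows (illustrative Python, not the FORTRAN original; function names are hypothetical):

```python
def extract_segment(times, values, t_start, t_end):
    """Extract the samples of one time-history signal whose times fall
    within [t_start, t_end] -- GetData's most basic capability."""
    return [(t, v) for t, v in zip(times, values) if t_start <= t <= t_end]


def interpolate_at(times, values, t):
    """Linearly interpolate a time-history signal at time t, used when
    merging several input files onto common output times.
    Assumes times are sorted and t lies within the sampled range."""
    for (t0, v0), (t1, v1) in zip(zip(times, values),
                                  zip(times[1:], values[1:])):
        if t0 <= t <= t1:
            w = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
            return v0 + w * (v1 - v0)
    raise ValueError("t outside sampled range")
```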
Semi-automatic geographic atrophy segmentation for SD-OCT images.
Chen, Qiang; de Sisternes, Luis; Leng, Theodore; Zheng, Luoluo; Kutzscher, Lauren; Rubin, Daniel L
2013-01-01
Geographic atrophy (GA) is a condition that is associated with retinal thinning and loss of the retinal pigment epithelium (RPE) layer. It appears in advanced stages of non-exudative age-related macular degeneration (AMD) and can lead to vision loss. We present a semi-automated GA segmentation algorithm for spectral-domain optical coherence tomography (SD-OCT) images. The method first identifies and segments a surface between the RPE and the choroid to generate retinal projection images in which the projection region is restricted to a sub-volume of the retina where the presence of GA can be identified. Subsequently, a geometric active contour model is employed to automatically detect and segment the extent of GA in the projection images. Two image data sets, consisting of 55 SD-OCT scans from twelve eyes in eight patients with GA and 56 SD-OCT scans from 56 eyes in 56 patients with GA, respectively, were utilized to qualitatively and quantitatively evaluate the proposed GA segmentation method. Experimental results suggest that the proposed algorithm can achieve high segmentation accuracy. The mean GA overlap ratios between our proposed method and outlines drawn in the SD-OCT scans, our method and outlines drawn in the fundus auto-fluorescence (FAF) images, and the commercial software (Carl Zeiss Meditec proprietary software, Cirrus version 6.0) and outlines drawn in FAF images were 72.60%, 65.88% and 59.83%, respectively.
Poly-Pattern Compressive Segmentation of ASTER Data for GIS
NASA Technical Reports Server (NTRS)
Myers, Wayne; Warner, Eric; Tutwiler, Richard
2007-01-01
Pattern-based segmentation of multi-band image data, such as ASTER, produces one-byte and two-byte approximate compressions. This is a dual segmentation consisting of nested coarser and finer level pattern mappings called poly-patterns. The coarser A-level version is structured for direct incorporation into geographic information systems in the manner of a raster map. GIS renderings of this A-level approximation are called pattern pictures, which have the appearance of color-enhanced images. The two-byte version, consisting of thousands of B-level segments, provides a capability for approximate restoration of the multi-band data in selected areas or entire scenes. Poly-patterns are especially useful for purposes of change detection and landscape analysis at multiple scales. The primary author has implemented the segmentation methodology in a public domain software suite.
Use of graph algorithms in the processing and analysis of images with focus on the biomedical data.
Zdimalova, M; Roznovjak, R; Weismann, P; El Falougy, H; Kubikova, E
2017-01-01
Image segmentation is a known problem in the field of image processing. A great number of methods based on different approaches to this issue have been created. One of these approaches utilizes the findings of graph theory. Our work focuses on segmentation using shortest paths in a graph. Specifically, we deal with the method of "Intelligent Scissors," which uses Dijkstra's algorithm to find the shortest paths. We created new software in the Microsoft Visual Studio 2013 integrated development environment in Visual C++, in the language C++/CLI: a Windows Forms application with a graphical user interface, using the .NET platform (version 4.5). The program was used for handling and processing the original medical data. The major disadvantage of the "Intelligent Scissors" method is the long computation time of Dijkstra's algorithm; however, after the implementation of a more efficient priority queue, this problem could be alleviated. The main advantage we see in this method is its trainability, which enables it to adapt to the particular kind of edge that needs to be segmented. User involvement has a significant influence on the process of segmentation, which greatly helps to achieve high-quality results (Fig. 7, Ref. 13).
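Dijkstra's algorithm with an efficient (binary-heap) priority queue, the improvement noted above, can be sketched as follows (an illustrative Python sketch, not the authors' C++/CLI code):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths with a binary-heap priority queue.
    graph maps node -> list of (neighbor, weight), weights >= 0.
    Returns a dict of shortest distances from source."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

In Intelligent Scissors, the nodes are pixels, the weights are edge-strength costs, and the recovered shortest path snaps the user's rough trace to the object boundary.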
Brief Report: Utilization of the First Biosimilar Infliximab Since Its Approval in South Korea.
Kim, Seoyoung C; Choi, Nam-Kyong; Lee, Joongyub; Kwon, Kyoung-Eun; Eddings, Wesley; Sung, Yoon-Kyoung; Ji Song, Hong; Kesselheim, Aaron S; Solomon, Daniel H
2016-05-01
The US Food and Drug Administration is considering an application for a biosimilar version of infliximab, which has been available in South Korea since November 2012. The aim of the present study was to examine the utilization patterns of both branded and biosimilar infliximab and other tumor necrosis factor (TNF) inhibitors in South Korea before and after the introduction of this biosimilar infliximab. Using claims data from April 2009 to March 2014 from the Korean Health Insurance Review and Assessment Service database, which includes the entire South Korean population, the number of claims for biosimilar infliximab was assessed. A segmented linear regression model was used to examine the utilization patterns of infliximab (the branded and biosimilar versions) and other TNF inhibitors (adalimumab and etanercept) before and after the introduction of the biosimilar infliximab. In total, 20,976 TNF inhibitor users were identified from the South Korean claims database, including 983 with a prescription claim for biosimilar infliximab. Among all of the claims for any version of infliximab, the proportion of biosimilar infliximab claims increased to 19% through March 2014. Before November 2012, each month there were 33 (95% confidence interval [95% CI] 32, 35) more infliximab claims, 44 (95% CI 40, 48) more etanercept claims, and 50 (95% CI 47, 53) more adalimumab claims. After November 2012, there were significant changes in the slopes for trend in usage, with additional increases in the use of branded and biosimilar infliximab (9 more claims per month, 95% CI 2, 17) and decreases in the use of etanercept (-52 claims per month, 95% CI -66, -38) and adalimumab (-21 claims per month, 95% CI -35, -6). During the first 15 months since its introduction in South Korea, one-fifth of all infliximab claims were for the biosimilar version. 
Introduction of biosimilar infliximab may affect the use of other TNF inhibitors, and the magnitude of change in usage will likely differ in other countries. © 2016, American College of Rheumatology.
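The segmented linear regression used above estimates how a trend changes at an intervention point. A simplified illustration of the idea, fitting separate trend lines before and after the intervention (not the authors' statistical model):

```python
def fit_line(ts, ys):
    """Ordinary least squares fit y = a + b*t; returns (a, b)."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    b = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / \
        sum((t - mt) ** 2 for t in ts)
    return my - b * mt, b


def segmented_trend(ts, ys, t_break):
    """Fit separate trend lines before and after an intervention at
    t_break (e.g. monthly claim counts before/after a drug launch)
    and return the change in slope."""
    pre = [(t, y) for t, y in zip(ts, ys) if t < t_break]
    post = [(t, y) for t, y in zip(ts, ys) if t >= t_break]
    _, b_pre = fit_line(*zip(*pre))
    _, b_post = fit_line(*zip(*post))
    return b_post - b_pre
```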
Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC (version 4.0) technical manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1988-01-01
The information contained in the NASARC (Version 4.0) Technical Manual and NASARC (Version 4.0) User's Manual relates to the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbits. Array dimensions within the software were structured to fit within the currently available 12 megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.
Medical Image Segmentation by Combining Graph Cut and Oriented Active Appearance Models
Chen, Xinjian; Udupa, Jayaram K.; Bağcı, Ulaş; Zhuge, Ying; Yao, Jianhua
2017-01-01
In this paper, we propose a novel 3D segmentation method based on the effective combination of the active appearance model (AAM), live wire (LW), and graph cut (GC). The proposed method consists of three main parts: model building, initialization, and segmentation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. In the initialization part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW methods, resulting in the Oriented AAM (OAAM). A multi-object strategy is utilized to help in object initialization. We employ a pseudo-3D initialization strategy and segment the organs slice by slice via the multi-object OAAM method. For the segmentation part, a 3D shape-constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT dataset and also tested on the MICCAI 2007 grand challenge liver segmentation training dataset. The results show the following: (a) An overall segmentation accuracy of true positive volume fraction (TPVF) > 94.3%, false positive volume fraction (FPVF) < 0.2% can be achieved. (b) The initialization performance can be improved by combining AAM and LW. (c) The multi-object strategy greatly facilitates the initialization. (d) Compared to the traditional 3D AAM method, the pseudo-3D OAAM method achieves comparable performance while running 12 times faster. (e) The performance of the proposed method is comparable to that of state-of-the-art liver segmentation algorithms. The executable version of the 3D shape-constrained GC with user interface can be downloaded from the website http://xinjianchen.wordpress.com/research/. PMID:22311862
Image Information Mining Utilizing Hierarchical Segmentation
NASA Technical Reports Server (NTRS)
Tilton, James C.; Marchisio, Giovanni; Koperski, Krzysztof; Datcu, Mihai
2002-01-01
The Hierarchical Segmentation (HSEG) algorithm is an approach for producing high quality, hierarchically related image segmentations. The VisiMine image information mining system utilizes clustering and segmentation algorithms for reducing visual information in multispectral images to a manageable size. The project discussed herein seeks to enhance the VisiMine system through incorporating hierarchical segmentations from HSEG into the VisiMine system.
Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC), version 4.0: User's manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1988-01-01
The information in the NASARC (Version 4.0) Technical Manual (NASA-TM-101453) and NASARC (Version 4.0) User's Manual (NASA-TM-101454) relates to the state of Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbit. Array dimensions within the software were structured to fit within the currently available 12-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.
Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC, Version 2.0: User's Manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1987-01-01
The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and the NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through October 16, 1987. The technical manual describes the NASARC concept and the algorithms which are used to implement it. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions have been incorporated in the Version 2.0 software over prior versions. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit into the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time reducing computer run time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakravarti, D.; Hendrix, P.E.; Wilkie, W.L.
1987-01-01
Maturing markets and the accompanying increase in competition, sophistication of customers, and differentiation of products and services have forced companies to focus their marketing efforts on segments in which they can prosper. The experience in these companies has revealed that market segmentation, although simple in concept, is not so easily implemented. It is reasonable to anticipate substantial benefits from additional market segmentation within each of the classes traditionally distinguished in the industry - residential, commercial, and industrial. Segmentation is also likely to prove useful for utilities facing quite different marketing environments, e.g., in terms of demand patterns (number of customers, winter- and summer-peaking, etc.), capacity, and degree of regulatory and competitive pressures. Within utilities, those charged with developing and implementing segmentation strategies face some difficult questions. The primary objective of this monograph is to provide some answers to those questions. It is intended to give utility researchers a guide to the design and execution of market segmentation research in utility markets. Several composite cases, drawn from actual studies conducted by electric utilities, are used to illustrate the discussion.
Reproducibility of myelin content-based human habenula segmentation at 3 Tesla.
Kim, Joo-Won; Naidich, Thomas P; Joseph, Joshmi; Nair, Divya; Glasser, Matthew F; O'Halloran, Rafael; Doucet, Gaelle E; Lee, Won Hee; Krinsky, Hannah; Paulino, Alejandro; Glahn, David C; Anticevic, Alan; Frangou, Sophia; Xu, Junqian
2018-03-26
In vivo morphological study of the human habenula, a pair of small epithalamic nuclei adjacent to the dorsomedial thalamus, has recently gained significant interest for its role in reward and aversion processing. However, segmenting the habenula from in vivo magnetic resonance imaging (MRI) is challenging due to the habenula's small size and low anatomical contrast. Although manual and semi-automated habenula segmentation methods have been reported, the test-retest reproducibility of the segmented habenula volume and the consistency of the boundaries of habenula segmentation have not been investigated. In this study, we evaluated the intra- and inter-site reproducibility of in vivo human habenula segmentation from 3T MRI (0.7-0.8 mm isotropic resolution) using our previously proposed semi-automated myelin contrast-based method and its fully-automated version, as well as a previously published manual geometry-based method. The habenula segmentation using our semi-automated method showed consistent boundary definition (high Dice coefficient, low mean distance, and moderate Hausdorff distance) and reproducible volume measurement (low coefficient of variation). Furthermore, the habenula boundary in our semi-automated segmentation from 3T MRI agreed well with that in the manual segmentation from 7T MRI (0.5 mm isotropic resolution) of the same subjects. Overall, our proposed semi-automated habenula segmentation showed reliable and reproducible habenula localization, while its fully-automated version offers an efficient way for large sample analysis. © 2018 Wiley Periodicals, Inc.
WRIST: A WRist Image Segmentation Toolkit for carpal bone delineation from MRI.
Foster, Brent; Joshi, Anand A; Borgese, Marissa; Abdelhafez, Yasser; Boutin, Robert D; Chaudhari, Abhijit J
2018-01-01
Segmentation of the carpal bones from 3D imaging modalities, such as magnetic resonance imaging (MRI), is commonly performed for in vivo analysis of wrist morphology, kinematics, and biomechanics. This crucial task is typically carried out manually and is labor intensive, time consuming, subject to high inter- and intra-observer variability, and may result in topologically incorrect surfaces. We present a method, WRist Image Segmentation Toolkit (WRIST), for 3D semi-automated, rapid segmentation of the carpal bones of the wrist from MRI. In our method, the boundaries of the bones were iteratively found using prior known anatomical constraints and a shape-detection level set. The parameters of the method were optimized using a training dataset of 48 manually segmented carpal bones and evaluated on 112 carpal bones which included both healthy participants without known wrist conditions and participants with thumb basilar osteoarthritis (OA). Manual segmentation by two expert human observers was considered as a reference. On the healthy subject dataset we obtained a Dice overlap of 93.0 ± 3.8, Jaccard Index of 87.3 ± 6.2, and a Hausdorff distance of 2.7 ± 3.4 mm, while on the OA dataset we obtained a Dice overlap of 90.7 ± 8.6, Jaccard Index of 83.0 ± 10.6, and a Hausdorff distance of 4.0 ± 4.4 mm. The short computational time of 20.8 s per bone (or 5.1 s per bone in the parallelized version) and the high agreement with the expert observers gives WRIST the potential to be utilized in musculoskeletal research. Copyright © 2017 Elsevier Ltd. All rights reserved.
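The Dice and Jaccard figures quoted above are simple set-overlap ratios between an automated mask and a reference mask. A minimal sketch of the two measures, using illustrative toy voxel sets rather than WRIST's actual data or code:

```python
# Hedged sketch: Dice and Jaccard overlap between two binary
# segmentations, represented here as sets of voxel coordinates.
# (Illustrative only; WRIST computes these on 3D MRI masks.)

def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

def jaccard(a, b):
    """Jaccard index: |A∩B| / |A∪B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

auto   = {(0, 0), (0, 1), (1, 0), (1, 1)}   # automated mask
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}   # reference mask

print(round(dice(auto, manual), 3))     # 0.75
print(round(jaccard(auto, manual), 3))  # 0.6
```

Note that Dice is always at least as large as Jaccard for the same pair of masks, which is why papers often report both.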
Numerical arc segmentation algorithm for a radio conference-NASARC (version 2.0) technical manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1987-01-01
The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of NASARC software development through October 16, 1987. The Technical Manual describes the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operating instructions. Significant revisions have been incorporated in the Version 2.0 software. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit within the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time effecting an overall reduction in computer run time.
Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin
2013-01-01
Automated image analysis of histopathology specimens could potentially provide support for early detection and improved characterization of breast cancer. Automated segmentation of the cells comprising imaged tissue microarrays (TMA) is a prerequisite for any subsequent quantitative analysis. Unfortunately, crowding and overlapping of cells present significant challenges for most traditional segmentation algorithms. In this paper, we propose a novel algorithm which can reliably separate touching cells in hematoxylin stained breast TMA specimens which have been acquired using a standard RGB camera. The algorithm is composed of two steps. It begins with a fast, reliable object center localization approach which utilizes single-path voting followed by mean-shift clustering. Next, the contour of each cell is obtained using a level set algorithm based on an interactive model. We compared the experimental results with those reported in the most current literature. Finally, performance was evaluated by comparing the pixel-wise accuracy provided by human experts with that produced by the new automated segmentation algorithm. The method was systematically tested on 234 image patches exhibiting dense overlap and containing more than 2200 cells. It was also tested on whole slide images including blood smears and tissue microarrays containing thousands of cells. Since the voting step of the seed detection algorithm is well suited for parallelization, a parallel version of the algorithm was implemented using graphic processing units (GPU) which resulted in significant speed-up over the C/C++ implementation. PMID:22167559
Bizuneh, Kelemu Dessie
2017-01-01
HIV/AIDS affects the basic educational sector, which is the most productive segment of the population and vital to the creation of human capital. The loss of skilled and experienced teachers due to the problem is increasingly compromising the provision of quality education in most African countries. This study was conducted to determine the magnitude of VCT utilization and to assess the contributing factors that affect VCT service utilization among secondary school teachers in Awi Zone. A cross-sectional study was conducted among 588 participants in 2014. A self-administered questionnaire was used to collect data. Data were analyzed using SPSS version 16, presented as frequencies and summary statistics, and tested for the presence of significant associations with odds ratios at 95% CI. More than half (53.6%) of study participants were tested for HIV. Those who had sexual intercourse, had good knowledge about VCT, were divorced/widowed, were in the age group of 20–29 years, and were married utilized VCT services two, three, four, three, and two times better than their counterparts, respectively. Actions targeting unmarried teachers, those with lower educational levels, and teachers above 30 years of age are needed to raise their VCT utilization to the level of their counterparts and so prevent the loss of teachers. PMID:28512582
Orthographic Transparency Enhances Morphological Segmentation in Children Reading Hebrew Words.
Haddad, Laurice; Weiss, Yael; Katzir, Tami; Bitan, Tali
2017-01-01
Morphological processing of derived words develops simultaneously with reading acquisition. However, the reader's engagement in morphological segmentation may depend on the language morphological richness and orthographic transparency, and the readers' reading skills. The current study tested the common idea that morphological segmentation is enhanced in non-transparent orthographies to compensate for the absence of phonological information. Hebrew's rich morphology and the dual version of the Hebrew script (with and without diacritic marks) provides an opportunity to study the interaction of orthographic transparency and morphological segmentation on the development of reading skills in a within-language design. Hebrew speaking 2nd ( N = 27) and 5th ( N = 29) grade children read aloud 96 noun words. Half of the words were simple mono-morphemic words and half were bi-morphemic derivations composed of a productive root and a morphemic pattern. In each list half of the words were presented in the transparent version of the script (with diacritic marks), and half in the non-transparent version (without diacritic marks). Our results show that in both groups, derived bi-morphemic words were identified more accurately than mono-morphemic words, but only for the transparent, pointed, script. For the un-pointed script the reverse was found, namely, that bi-morphemic words were read less accurately than mono-morphemic words, especially in second grade. Second grade children also read mono-morphemic words faster than bi-morphemic words. Finally, correlations with a standardized measure of morphological awareness were found only for second grade children, and only in bi-morphemic words. These results, showing greater morphological effects in second grade compared to fifth grade children suggest that for children raised in a language with a rich morphology, common and easily segmented morphemic units may be more beneficial for younger compared to older readers. 
Moreover, in contrast to the common hypothesis, our results show that morphemic segmentation does not compensate for the missing phonological information in a non-transparent orthography, but rather that morphological segmentation is most beneficial in the highly transparent script. These results are consistent with the idea that morphological and phonological segmentation processes occur simultaneously and do not constitute alternative pathways to visual word recognition.
Segmentation Fusion Techniques with Application to Plenoptic Images: A Survey.
NASA Astrophysics Data System (ADS)
Evin, D.; Hadad, A.; Solano, A.; Drozdowicz, B.
2016-04-01
The segmentation of anatomical and pathological structures plays a key role in the characterization of clinically relevant evidence from digital images. Recently, plenoptic imaging has emerged as a new promise to enrich the diagnostic potential of conventional photography. Since a plenoptic image comprises a set of slightly different versions of the target scene, we propose to make use of those images to improve segmentation quality relative to single-image segmentation. The problem of finding a segmentation solution from multiple images of a single scene is called segmentation fusion. This paper reviews the issue of segmentation fusion in order to find solutions that can be applied to plenoptic images, particularly images from the ophthalmological domain.
Validation of automated white matter hyperintensity segmentation.
Smart, Sean D; Firbank, Michael J; O'Brien, John T
2011-01-01
Introduction. White matter hyperintensities (WMHs) are a common finding on MRI scans of older people and are associated with vascular disease. We compared 3 methods for automatically segmenting WMHs from MRI scans. Method. An operator manually segmented WMHs on MRI images from a 3T scanner. The scans were also segmented in a fully automated fashion by three different programmes. The voxel overlap between manual and automated segmentation was compared. Results. The between-observer overlap ratio was 63%. Using our previously described in-house software, we obtained an overlap of 62.2%. We investigated the use of a modified version of SPM segmentation; however, this was not successful, with only 14% overlap. Discussion. Using our previously reported software, we demonstrated good segmentation of WMHs in a fully automated fashion.
de Siqueira, Alexandre Fioravante; Cabrera, Flávio Camargo; Nakasuga, Wagner Massayuki; Pagamisse, Aylton; Job, Aldo Eloizo
2018-01-01
Image segmentation, the process of separating the elements within a picture, is frequently used for obtaining information from photomicrographs. Segmentation methods should be used with reservations, since incorrect results can mislead the interpretation of regions of interest (ROI) and decrease the success rate of subsequent procedures. Multi-Level Starlet Segmentation (MLSS) and Multi-Level Starlet Optimal Segmentation (MLSOS) were developed as an alternative to general segmentation tools. These methods gave rise to Jansen-MIDAS, an open-source software package that a scientist can use to obtain several segmentations of his or her photomicrographs. It is a reliable alternative for processing different types of photomicrographs: previous versions of Jansen-MIDAS were used to segment ROI in photomicrographs of two different materials with an accuracy superior to 89%. © 2017 Wiley Periodicals, Inc.
Scale model test results of several STOVL ventral nozzle concepts
NASA Technical Reports Server (NTRS)
Meyer, B. E.; Re, R. J.; Yetter, J. A.
1991-01-01
Short take-off and vertical landing (STOVL) ventral nozzle concepts are investigated by means of a static cold flow scale model at a NASA facility. The internal aerodynamic performance characteristics of the cruise, transition, and vertical lift modes are considered for four ventral nozzle types. The nozzle configurations examined include those with: butterfly-type inner doors and vectoring exit vanes; circumferential inner doors and thrust vectoring vanes; a three-port segmented version with circumferential inner doors; and a two-port segmented version with cylindrical nozzle exit shells. During the testing, internal and external pressure is measured, and the thrust and flow coefficients and resultant vector angles are obtained. The inner door used for ventral nozzle flow control is found to affect performance negatively during the initial phase of transition. The best thrust performance is demonstrated by the two-port segmented ventral nozzle due to the elimination of the inner door.
Exploring the Constraint Profile of Winter Sports Resort Tourist Segments.
Priporas, Constantinos-Vasilios; Vassiliadis, Chris A; Bellou, Victoria; Andronikidis, Andreas
2015-09-01
Many studies have confirmed the importance of market segmentation both theoretically and empirically. Surprisingly though, no study has so far addressed the issue from the perspective of leisure constraints. Since different consumers face different barriers, we look at participation in leisure activities as an outcome of the negotiation process that winter sports resort tourists go through, to balance between related motives and constraints. This empirical study reports the findings on the applicability of constraining factors in segmenting the tourists who visit winter sports resorts. Utilizing data from 1,391 tourists of winter sports resorts in Greece, five segments were formed based on their constraint, demographic, and behavioral profile. Our findings indicate that such segmentation sheds light on factors that could potentially limit the full utilization of the market. To maximize utilization, we suggest customizing marketing to the profile of each distinct winter sports resort tourist segment that emerged.
Enhancing atlas based segmentation with multiclass linear classifiers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sdika, Michaël, E-mail: michael.sdika@creatis.insa-lyon.fr
Purpose: To present a method to enrich atlases for atlas based segmentation. Such enriched atlases can then be used as a single atlas or within a multiatlas framework. Methods: In this paper, machine learning techniques have been used to enhance the atlas based segmentation approach. The enhanced atlas defined in this work is a pair composed of a gray level image alongside an image of multiclass classifiers with one classifier per voxel. Each classifier embeds local information from the whole training dataset that allows for the correction of some systematic errors in the segmentation and accounts for possible local registration errors. The authors also propose to use these images of classifiers within a multiatlas framework: results produced by a set of such local classifier atlases can be combined using a label fusion method. Results: Experiments have been made on the in vivo images of the IBSR dataset and a comparison has been made with several state-of-the-art methods such as FreeSurfer and the multiatlas nonlocal patch based method of Coupé or Rousseau. These experiments show that their method is competitive with state-of-the-art methods while having a low computational cost. Further enhancement has also been obtained with a multiatlas version of their method. It is also shown that, in this case, nonlocal fusion is unnecessary. The multiatlas fusion can therefore be done efficiently. Conclusions: The single atlas version has similar quality to state-of-the-art multiatlas methods but with the computational cost of a naive single atlas segmentation. The multiatlas version offers an improvement in quality and can be done efficiently without a nonlocal strategy.
A User''s Guide to the Zwikker-Kosten Transmission Line Code (ZKTL)
NASA Technical Reports Server (NTRS)
Kelly, J. J.; Abu-Khajeel, H.
1997-01-01
This user's guide documents updates to the Zwikker-Kosten Transmission Line Code (ZKTL). This code was developed for analyzing new liner concepts developed to provide increased sound absorption. Contiguous arrays of multi-degree-of-freedom (MDOF) liner elements serve as the model for these liner configurations, and Zwikker and Kosten's theory of sound propagation in channels is used to predict the surface impedance. Transmission matrices for the various liner elements incorporate both analytical and semi-empirical methods. This allows standard matrix techniques to be employed in the code to systematically calculate the composite impedance due to the individual liner elements. The ZKTL code consists of four independent subroutines:
1. Single channel impedance calculation - linear version (SCIC)
2. Single channel impedance calculation - nonlinear version (SCICNL)
3. Multi-channel, multi-segment, multi-layer impedance calculation - linear version (MCMSML)
4. Multi-channel, multi-segment, multi-layer impedance calculation - nonlinear version (MCMSMLNL)
Detailed examples, comments, and explanations for each liner impedance computation module are included. Also contained in the guide are depictions of the interactive execution, input files, and output files.
Medical image segmentation by combining graph cuts and oriented active appearance models.
Chen, Xinjian; Udupa, Jayaram K; Bagci, Ulas; Zhuge, Ying; Yao, Jianhua
2012-04-01
In this paper, we propose a novel method based on a strategic combination of the active appearance model (AAM), live wire (LW), and graph cuts (GCs) for abdominal 3-D organ segmentation. The proposed method consists of three main parts: model building, object recognition, and delineation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. In the recognition part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW methods, resulting in the oriented AAM (OAAM). A multiobject strategy is utilized to help in object initialization. We employ a pseudo-3-D initialization strategy and segment the organs slice by slice via a multiobject OAAM method. For the object delineation part, a 3-D shape-constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT data set and also on the MICCAI 2007 Grand Challenge liver data set. The results show the following: 1) The overall segmentation accuracy of true positive volume fraction TPVF > 94.3% and false positive volume fraction can be achieved; 2) the initialization performance can be improved by combining the AAM and LW; 3) the multiobject strategy greatly facilitates initialization; 4) compared with the traditional 3-D AAM method, the pseudo-3-D OAAM method achieves comparable performance while running 12 times faster; and 5) the performance of the proposed method is comparable to state-of-the-art liver segmentation algorithms. The executable version of the 3-D shape-constrained GC method with a user interface can be downloaded from http://xinjianchen.wordpress.com/research/.
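The TPVF and FPVF figures above are volume-fraction variants of the usual overlap measures. Definitions vary slightly in the literature; the sketch below uses one common formulation (true positives relative to the reference object, false positives relative to the background), with toy voxel sets that are purely illustrative:

```python
# Hedged sketch of true/false positive volume fractions for a
# binary segmentation, following one common formulation.

def tpvf(auto, ref):
    """True positive volume fraction: |auto ∩ ref| / |ref|."""
    return len(auto & ref) / len(ref)

def fpvf(auto, ref, domain):
    """False positive volume fraction: |auto \\ ref| / |domain \\ ref|."""
    return len(auto - ref) / len(domain - ref)

domain = {(x, y) for x in range(4) for y in range(4)}  # 16 voxels
ref    = {(1, 1), (1, 2), (2, 1), (2, 2)}              # reference organ
auto   = {(1, 1), (1, 2), (2, 1), (3, 3)}              # 3 hits, 1 stray

print(tpvf(auto, ref))                    # 0.75
print(round(fpvf(auto, ref, domain), 4))  # 1/12 of the background
```

A TPVF above 94% with a small FPVF, as reported above, means the delineation recovers almost all of the organ while labeling very little background as organ.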
Developing a Procedure for Segmenting Meshed Heat Networks of Heat Supply Systems without Outflows
NASA Astrophysics Data System (ADS)
Tokarev, V. V.
2018-06-01
The heat supply systems of cities have, as a rule, a ring structure with the possibility of redistributing the flows. Despite the fact that a ring structure is more reliable than a radial one, the operators of heat networks prefer to use them in normal modes according to the scheme without overflows of the heat carrier between the heat mains. With such a scheme, it is easier to adjust the networks and to detect and locate faults in them. The article proposes a formulation of the heat network segmenting problem. The problem is set in terms of optimization with the heat supply system's excessive hydraulic power used as the optimization criterion. The heat supply system computer model has a hierarchically interconnected multilevel structure. Since iterative calculations are only carried out for the level of trunk heat networks, decomposing the entire system into levels allows the dimensionality of the solved subproblems to be reduced by an order of magnitude. Solving the problem by fully enumerating possible segmentation versions is not feasible for systems of realistic size. The article suggests a procedure for finding a rational segmentation of heat supply networks that limits the search to versions dividing the system into segments near the flow convergence nodes, with subsequent refinement of the solution. The refinement is performed in two stages according to the total excess hydraulic power criterion. At the first stage, the loads are redistributed among the sources. After that, the heat networks are divided into independent fragments, and the possibility of increasing the excess hydraulic power in the obtained fragments is checked by shifting the division places inside a fragment. The proposed procedure has been tested on a municipal heat supply system involving six heat mains fed from a common source, 24 loops within the feeding mains plane, and more than 5000 consumers.
Application of the proposed segmentation procedure made it possible to find a version requiring 3% less hydraulic power in the heat supply system than the version found using the simultaneous segmentation method.
Pre-Calculus California Content Standards: Standards Deconstruction Project. Version 1.0
ERIC Educational Resources Information Center
Arnold, Bruce; Cliffe, Karen; Cubillo, Judy; Kracht, Brenda; Leaf, Abi; Legner, Mary; McGinity, Michelle; Orr, Michael; Rocha, Mario; Ross, Judy; Teegarden, Terrie; Thomson, Sarah; Villero, Geri
2008-01-01
This project was coordinated and funded by the California Partnership for Achieving Student Success (Cal-PASS). Cal-PASS is a data sharing system linking all segments of education. Its purpose is to improve student transition and success from one educational segment to the next. Cal-PASS' standards deconstruction project was initiated by the…
NASA Technical Reports Server (NTRS)
Hartz, Leslie
1994-01-01
Tool helps worker grip and move along large, smooth structure with no handgrips or footholds. Adheres to surface but easily released by actuating simple mechanism. Includes handle and segmented contact-adhesive pad. Bulk of pad made of soft plastic foam conforming to surface of structure. Each segment reinforced with rib. In sticking mode, ribs braced by side catches. In peeling mode, side catches retracted, and segmented adhesive pad loses its stiffness. Modified versions useful in inspecting hulls of ships and scaling walls in rescue operations.
Analysis of normal human retinal vascular network architecture using multifractal geometry
Ţălu, Ştefan; Stach, Sebastian; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina; Nicoară, Simona Delia
2017-01-01
AIM To apply the multifractal analysis method as a quantitative approach to a comprehensive description of the microvascular network architecture of the normal human retina. METHODS Fifty volunteers were enrolled in this study in the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and January 2014. A set of 100 segmented and skeletonised human retinal images, corresponding to normal states of the retina were studied. An automatic unsupervised method for retinal vessel segmentation was applied before multifractal analysis. The multifractal analysis of digital retinal images was made with computer algorithms, applying the standard box-counting method. Statistical analyses were performed using the GraphPad InStat software. RESULTS The architecture of normal human retinal microvascular network was able to be described using the multifractal geometry. The average of generalized dimensions (Dq) for q=0, 1, 2, the width of the multifractal spectrum (Δα=αmax − αmin) and the spectrum arms' heights difference (|Δf|) of the normal images were expressed as mean±standard deviation (SD): for segmented versions, D0=1.7014±0.0057; D1=1.6507±0.0058; D2=1.5772±0.0059; Δα=0.92441±0.0085; |Δf|= 0.1453±0.0051; for skeletonised versions, D0=1.6303±0.0051; D1=1.6012±0.0059; D2=1.5531±0.0058; Δα=0.65032±0.0162; |Δf|= 0.0238±0.0161. The average of generalized dimensions (Dq) for q=0, 1, 2, the width of the multifractal spectrum (Δα) and the spectrum arms' heights difference (|Δf|) of the segmented versions was slightly greater than the skeletonised versions. CONCLUSION The multifractal analysis of fundus photographs may be used as a quantitative parameter for the evaluation of the complex three-dimensional structure of the retinal microvasculature as a potential marker for early detection of topological changes associated with retinal diseases. PMID:28393036
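The generalized dimension D0 reported above is the capacity (box-counting) dimension, estimated with the standard box-counting method the abstract names. A hedged sketch on a toy binary image (a filled square, whose dimension should come out close to 2; real analyses run this on segmented or skeletonised vessel maps):

```python
import math

# Hedged sketch of the standard box-counting estimate of the
# capacity dimension D0. The input is a binarised image given as a
# set of occupied pixel coordinates.

def box_count(pixels, size):
    """Number of boxes of side `size` containing at least one pixel."""
    return len({(x // size, y // size) for x, y in pixels})

def d0(pixels, sizes):
    """Least-squares slope of log N(s) against log(1/s)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(pixels, s)) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A filled 64x64 square covers (64/s)^2 boxes at scale s, so the
# slope, and hence D0, is exactly 2.
square = {(x, y) for x in range(64) for y in range(64)}
print(round(d0(square, [1, 2, 4, 8]), 2))  # 2.0
```

The multifractal spectrum quantities (D1, D2, Δα, |Δf|) generalize this by weighting boxes by their occupancy rather than merely counting them.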
Bit by Bit or All at Once? Splitting up the Inquiry Task to Promote Children's Scientific Reasoning
ERIC Educational Resources Information Center
Lazonder, Ard W.; Kamp, Ellen
2012-01-01
This study examined whether and why assigning children to a segmented inquiry task makes their investigations more productive. Sixty-one upper elementary-school pupils engaged in a simulation-based inquiry assignment either received a multivariable inquiry task (n = 21), a segmented version of this task that addressed the variables in successive…
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2008-08-01
This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.
NASA Technical Reports Server (NTRS)
Glasser, M. E.
1981-01-01
The Multilevel Diffusion Model (MDM) Version 5 was modified to include features of more recent versions. The MDM was used to predict in-cloud HCl concentrations for the April 12 launch of the Space Shuttle (STS-1). The maximum centerline predictions were compared with measurements of maximum gaseous HCl obtained from aircraft passes through two segments of the fragmented shuttle ground cloud. The model over-predicted the maximum values for gaseous HCl in the lower cloud segment and portrayed the same rate of decay with time as the observed values. However, the decay with time of the HCl maximum predicted by the MDM was more rapid than the observed decay for the higher cloud segment, causing the model to under-predict concentrations which were measured late in the life of the cloud. The causes of the tendency for the MDM to be conservative in over-estimating the HCl concentrations in the one case while tending to under-predict concentrations in the other case are discussed.
Mizukami, Naoki; Clark, Martyn P.; Sampson, Kevin; Nijssen, Bart; Mao, Yixin; McMillan, Hilary; Viger, Roland; Markstrom, Steven; Hay, Lauren E.; Woods, Ross; Arnold, Jeffrey R.; Brekke, Levi D.
2016-01-01
This paper describes the first version of a stand-alone runoff routing tool, mizuRoute. The mizuRoute tool post-processes runoff outputs from any distributed hydrologic model or land surface model to produce spatially distributed streamflow at various spatial scales from headwater basins to continental-wide river systems. The tool can utilize both traditional grid-based river network and vector-based river network data. Both types of river network include river segment lines and the associated drainage basin polygons, but the vector-based river network can represent finer-scale river lines than the grid-based network. Streamflow estimates at any desired location in the river network can be easily extracted from the output of mizuRoute. The routing process is simulated as two separate steps. First, hillslope routing is performed with a gamma-distribution-based unit-hydrograph to transport runoff from a hillslope to a catchment outlet. The second step is river channel routing, which is performed with one of two routing scheme options: (1) a kinematic wave tracking (KWT) routing procedure; and (2) an impulse response function – unit-hydrograph (IRF-UH) routing procedure. The mizuRoute tool also includes scripts (python, NetCDF operators) to pre-process spatial river network data. This paper demonstrates mizuRoute's capabilities to produce spatially distributed streamflow simulations based on river networks from the United States Geological Survey (USGS) Geospatial Fabric (GF) data set in which over 54 000 river segments and their contributing areas are mapped across the contiguous United States (CONUS). A brief analysis of model parameter sensitivity is also provided. The mizuRoute tool can assist model-based water resources assessments including studies of the impacts of climate change on streamflow.
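The hillslope-routing step described above, a gamma-distribution-based unit hydrograph convolved with the runoff series, can be sketched as follows. This is an illustrative sketch only; the shape and timescale parameters and the pulse input are arbitrary assumptions, not mizuRoute defaults.

```python
import math
import numpy as np

def gamma_unit_hydrograph(shape, timescale, n_steps, dt=1.0):
    """Discretise a gamma-distribution unit hydrograph.

    pdf(t) = t**(shape-1) * exp(-t/timescale) / (timescale**shape * Gamma(shape)).
    The discrete weights are normalised to sum to 1 so routed volume
    equals input volume over the hydrograph window.
    """
    t = (np.arange(n_steps) + 0.5) * dt  # midpoints of each time step
    pdf = (t**(shape - 1) * np.exp(-t / timescale)
           / (timescale**shape * math.gamma(shape)))
    w = pdf * dt
    return w / w.sum()

def route_hillslope(runoff, uh):
    """Causal convolution of a runoff series with the unit hydrograph."""
    return np.convolve(runoff, uh)[:len(runoff)]

# Illustrative: a single pulse of runoff is spread over subsequent steps.
uh = gamma_unit_hydrograph(shape=2.5, timescale=1.0, n_steps=20)
runoff = np.zeros(30)
runoff[0] = 10.0
outflow = route_hillslope(runoff, uh)
print(round(outflow.sum(), 6))  # total volume is preserved
```

The channel-routing step (KWT or IRF-UH) would then propagate these catchment outflows downstream along the river network topology.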
NASA Technical Reports Server (NTRS)
Womble, M. E.; Potter, J. E.
1975-01-01
A prefiltering version of the Kalman filter is derived for both discrete and continuous measurements. The derivation consists of determining a single discrete measurement that is equivalent to either a time segment of continuous measurements or a set of discrete measurements. This prefiltering version of the Kalman filter easily handles numerical problems associated with rapid transients and ill-conditioned Riccati matrices. In addition, the derived technique for extrapolating the Riccati matrix from one time to the next constitutes a new set of integration formulas which alleviate ill-conditioning problems associated with continuous Riccati equations. Furthermore, since a time segment of continuous measurements is converted into a single discrete measurement, Potter's square root formulas can be used to update the state estimate and its error covariance matrix. Therefore, if having the state estimate and its error covariance matrix at discrete times is acceptable, the prefilter extends square root filtering, with all its advantages, to continuous measurement problems.
Schnettler, Berta; Grunert, Klaus G; Miranda-Zapata, Edgardo; Orellana, Ligia; Sepúlveda, José; Lobos, Germán; Hueche, Clementina; Höger, Yesli
2017-06-01
The aims of this study were to test the relationships between food neophobia, satisfaction with food-related life and food technology neophobia, distinguishing consumer segments according to these variables and characterizing them according to willingness to purchase food produced with novel technologies. A survey was conducted with 372 university students (mean age = 20.4 years, SD = 2.4). The questionnaire included the abbreviated version of the Food Technology Neophobia Scale (AFTNS), the Satisfaction with Food-related Life (SWFL) scale, and a 6-item version of the Food Neophobia Scale (FNS). Using confirmatory factor analysis, it was confirmed that SWFL correlated inversely with FNS, whereas FNS correlated inversely with AFTNS. No relationship was found between SWFL and AFTNS. Two main segments were identified using cluster analysis; these segments differed according to gender and family size. Group 1 (57.8%) possessed higher AFTNS and FNS scores than Group 2 (28.5%). However, these groups did not differ in their SWFL scores. Group 1 was less willing to purchase foods produced with new technologies than Group 2. The AFTNS and the 6-item version of the FNS are suitable instruments to measure acceptance of foods produced using new technologies in South American developing countries. The AFTNS constitutes a parsimonious alternative for the international study of food technology neophobia. Copyright © 2017 Elsevier Ltd. All rights reserved.
1983-09-01
GENERAL ELECTROMAGNETIC MODEL FOR THE ANALYSIS OF COMPLEX SYSTEMS (GEMACS) Computer Code Documentation (Version 3). The BDM Corporation. Final technical report, February 1981 - July 1983. The formulation uses current expansions in the t1 and t2 directions on the source patch; the electric field at a segment observation point due to source patch j is computed from these expansions.
Barriers to Research Utilization Scale: psychometric properties of the Turkish version.
Temel, Ayla Bayik; Uysal, Aynur; Ardahan, Melek; Ozkahraman, Sukran
2010-02-01
This paper is a report of a study designed to assess the psychometric properties of the Turkish version of the Barriers to Research Utilization Scale. The original Barriers to Research Utilization Scale was developed by Funk et al. in the United States of America. Many researchers in various countries have used this scale to identify barriers to research utilization. A methodological study was carried out at four hospitals. The sample consisted of 300 nurses. Data were collected in 2005 using a socio-demographic form (12 questions) and the Turkish version of the Barriers to Research Utilization Scale. A Likert-type scale composed of four sub-factors and 29 items was used. Means and standard deviations were calculated for interval level data. A P value of <0.05 was considered statistically significant. Language equivalence and content validity were assessed by eight experts. Confirmatory factor analysis revealed that the Turkish version was made up of four subscales. The internal consistency reliability coefficient was 0.92 for the total scale and ranged from 0.73 to 0.80 for the subscales. Total-item correlation coefficients ranged from 0.37 to 0.60. The Turkish version of the scale is similar in structure to the original English language scale.
Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R.
2013-01-01
Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The performed experiments exhibited a significant increase in performance in terms of mean absolute surface distance errors (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting a real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction per case was about 2 min. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result.
The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. PMID:23415254
Joint graph cut and relative fuzzy connectedness image segmentation algorithm.
Ciesielski, Krzysztof Chris; Miranda, Paulo A V; Falcão, Alexandre X; Udupa, Jayaram K
2013-12-01
We introduce an image segmentation algorithm, called GC(sum)(max), which combines, in a novel manner, the strengths of two popular algorithms: Relative Fuzzy Connectedness (RFC) and (standard) Graph Cut (GC). We show, both theoretically and experimentally, that GC(sum)(max) preserves the robustness of RFC with respect to the seed choice (thus avoiding the "shrinking problem" of GC), while keeping GC's stronger control over the problem of "leaking through poorly defined boundary segments." The analysis of GC(sum)(max) is greatly facilitated by our recent theoretical results that RFC can be described within the framework of Generalized GC (GGC) segmentation algorithms. In our implementation of GC(sum)(max) we use, as a subroutine, a version of the RFC algorithm (based on the Image Foresting Transform) that runs (provably) in linear time with respect to the image size. This results in GC(sum)(max) running in a time close to linear. Experimental comparison of GC(sum)(max) to GC, an iterative version of RFC (IRFC), and power watershed (PW), based on a variety of medical and non-medical images, indicates superior accuracy performance of GC(sum)(max) over these other methods, resulting in a rank ordering of GC(sum)(max)>PW∼IRFC>GC. Copyright © 2013 Elsevier B.V. All rights reserved.
Remote, non-contacting personnel bio-identification using microwave radiation
NASA Technical Reports Server (NTRS)
McGrath, William R. (Inventor); Talukder, Ashit (Inventor)
2011-01-01
A system to remotely identify a person by utilizing a microwave cardiogram, where some embodiments segment a signal representing cardiac beats into segments, extract features from the segments, and perform pattern identification of the segments and features with a pre-existing data set. Other embodiments are described and claimed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werner, Mike
Why this utility? After years of upgrading the Java Runtime Environment (JRE) or the Java Software Development Kit (JDK/SDK), a Windows computer becomes littered with so many old versions that the machine may become a security risk due to exploits targeted at those older versions. This utility helps mitigate those vulnerabilities by searching for, and removing, versions 1.3.x through 1.7.x of the Java JRE and/or JDK/SDK.
On a methodology for robust segmentation of nonideal iris images.
Schmid, Natalia A; Zuo, Jinyu
2010-06-01
Iris biometric is one of the most reliable biometrics with respect to performance. However, this reliability is a function of the ideality of the data. One of the most important steps in processing nonideal data is reliable and precise segmentation of the iris pattern from remaining background. In this paper, a segmentation methodology that aims at compensating various nonidealities contained in iris images during segmentation is proposed. The virtue of this methodology lies in its capability to reliably segment nonideal imagery that is simultaneously affected with such factors as specular reflection, blur, lighting variation, occlusion, and off-angle images. We demonstrate the robustness of our segmentation methodology by evaluating ideal and nonideal data sets, namely, the Chinese Academy of Sciences iris data version 3 interval subdirectory, the iris challenge evaluation data, the West Virginia University (WVU) data, and the WVU off-angle data. Furthermore, we compare our performance to that of our implementation of Camus and Wildes's algorithm and Masek's algorithm. We demonstrate considerable improvement in segmentation performance over the formerly mentioned algorithms.
Lang, Irene M
2018-05-23
Guidelines and recommendations are designed to guide physicians in making decisions in daily practice. Guidelines provide a condensed summary of all evidence available at the time of the writing process. Recommendations take into account the risk-benefit ratio of particular diagnostic or therapeutic means and the impact on outcome, but not monetary or political considerations. Guidelines are not substitutes for, but are complementary to, textbooks and cover the European Society of Cardiology (ESC) core curriculum topics. The level of evidence and the strength of recommendations for particular treatment options were recently newly weighted and graded according to predefined scales. Guideline endorsement and implementation strategies are based on abridged pocket guideline versions, electronic versions for digital applications, translations into the national languages, or extracts with reference to the main changes since the last version. The present article represents a condensed summary of new and practically relevant items contained in the 2017 ESC guidelines for the management of acute myocardial infarction in patients with ST-segment elevation, with reference to key citations.
SLS Pathfinder Segments Car Train Departure
2016-03-02
An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, departs from NASA’s Kennedy Space Center in Florida, with two containers on railcars for transport to the Jay Jay railroad yard. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the Rotation, Processing and Surge Facility (RPSF). Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.
SLS Pathfinder Segments Car Train Departure
2016-03-02
An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, departs from the Rotation, Processing and Surge Facility (RPSF) at NASA’s Kennedy Space Center in Florida, with two containers on railcars for transport to the NASA Jay Jay railroad yard. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the RPSF. Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.
Automatic CT Brain Image Segmentation Using Two Level Multiresolution Mixture Model of EM
NASA Astrophysics Data System (ADS)
Jiji, G. Wiselin; Dehmeshki, Jamshid
2014-04-01
Tissue classification in computed tomography (CT) brain images is an important issue in the analysis of several brain dementias. A combination of different approaches for the segmentation of brain images is presented in this paper. A multi-resolution algorithm is proposed, along with scaled versions using Gaussian filtering and wavelet analysis, that extends the expectation maximization (EM) algorithm. It is found to be less sensitive to noise and to yield more accurate image segmentation than traditional EM. Moreover, the algorithm has been applied to 20 sets of CT scans of the human brain and compared with other works. The segmentation results show that the proposed method achieves more promising results, which have been validated by doctors.
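The "traditional EM" baseline that the multiresolution scheme above extends can be sketched as a plain EM fit of a one-dimensional Gaussian mixture to intensity values. This is an illustrative sketch on synthetic data, not the authors' algorithm; the component count, quantile initialisation, and iteration count are assumptions.

```python
import numpy as np

def em_gmm_1d(x, k, n_iter=60):
    """Fit a k-component 1-D Gaussian mixture to samples x with EM."""
    # Initialise means at evenly spaced quantiles so components start apart.
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    sigma = np.full(k, x.std())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample.
        z = (x[:, None] - mu) / sigma
        dens = pi * np.exp(-0.5 * z**2) / (sigma * np.sqrt(2.0 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu)**2).sum(axis=0) / nk)
        pi = nk / len(x)
    return mu, sigma, pi

# Two synthetic "tissue" intensity populations.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(40.0, 5.0, 1000), rng.normal(120.0, 10.0, 1000)])
mu, sigma, pi = em_gmm_1d(x, k=2)
print(np.sort(mu))  # component means, close to 40 and 120
```

Each pixel would then be assigned to the component with the highest responsibility; the paper's contribution is running such a fit across Gaussian- and wavelet-scaled versions of the image to gain noise robustness.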
Lu, Hongwei; Zhang, Chenxi; Sun, Ying; Hao, Zhidong; Wang, Chunfang; Tian, Jiajia
2015-08-01
Predicting the termination of paroxysmal atrial fibrillation (AF) may provide a signal for deciding whether timely intervention is needed. We propose a novel RdR scatter plot of RR intervals: the abscissa is the RR interval and the ordinate is the difference between successive RR intervals. The RdR scatter plot thus combines information on RR intervals and on differences between successive RR intervals, capturing more heart rate variability (HRV) information. RdR scatter plot analysis of one-minute RR interval records from 50 segments with non-terminating AF and immediately terminating AF showed that the points for non-terminating AF were more dispersed than those for immediately terminating AF. By dividing the RdR scatter plot into uniform grids and counting the number of non-empty grids, non-terminating and immediately terminating AF segments were differentiated. Using 49 RR intervals, 17 of 20 learning-set segments and 20 of 30 test-set segments were correctly detected; using 66 RR intervals, 16 of 18 learning-set segments and 20 of 28 test-set segments were correctly detected. The results demonstrate that during the last minute before the termination of paroxysmal AF, both the variance of the RR intervals and the differences between neighboring RR intervals become smaller. The termination of paroxysmal AF could thus be predicted using the RdR scatter plot, although the prediction accuracy should be further improved.
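The grid-counting measure described above can be sketched directly. This is an illustrative sketch; the grid size and the synthetic RR series are assumptions, not the authors' settings.

```python
import numpy as np

def rdr_nonempty_grids(rr, grid_ms=25):
    """Count non-empty grid cells in the RdR scatter plot.

    Points are (RR_i, RR_{i+1} - RR_i); the plane is divided into
    uniform cells of width grid_ms (milliseconds) and the number of
    distinct occupied cells is returned. More dispersed points, i.e.
    higher variability, occupy more cells.
    """
    rr = np.asarray(rr, dtype=float)
    x = rr[:-1]        # RR intervals
    y = np.diff(rr)    # differences between successive RR intervals
    cells = set(zip((x // grid_ms).astype(int), (y // grid_ms).astype(int)))
    return len(cells)

# Synthetic comparison: irregular (AF-like) vs. near-constant RR series.
rng = np.random.default_rng(0)
irregular = rng.uniform(400.0, 900.0, 50)   # widely scattered RR intervals (ms)
regular = 600.0 + rng.normal(0.0, 5.0, 50)  # small variability around 600 ms
print(rdr_nonempty_grids(irregular), rdr_nonempty_grids(regular))
```

A decision rule would then threshold the grid count to separate non-terminating from immediately terminating segments, with the threshold learned on the training set.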
Hoyng, Lieke L; Frings, Virginie; Hoekstra, Otto S; Kenny, Laura M; Aboagye, Eric O; Boellaard, Ronald
2015-01-01
Positron emission tomography (PET) with (18)F-3'-deoxy-3'-fluorothymidine ([(18)F]FLT) can be used to assess tumour proliferation. A kinetic-filtering (KF) classification algorithm has been suggested for segmentation of tumours in dynamic [(18)F]FLT PET data. The aim of the present study was to evaluate KF segmentation and its test-retest performance in [(18)F]FLT PET in non-small cell lung cancer (NSCLC) patients. Nine NSCLC patients underwent two 60-min dynamic [(18)F]FLT PET scans within 7 days prior to treatment. Dynamic scans were reconstructed with filtered back projection (FBP) as well as with ordered subsets expectation maximisation (OSEM). Twenty-eight lesions were identified by an experienced physician. Segmentation was performed using KF applied to the dynamic data set and a source-to-background corrected 50% threshold (A50%) was applied to the sum image of the last three frames (45- to 60-min p.i.). Furthermore, several adaptations of KF were tested. Both for KF and A50% test-retest (TRT) variability of metabolically active tumour volume and standard uptake value (SUV) were evaluated. KF performed better on OSEM- than on FBP-reconstructed PET images. The original KF implementation segmented 15 out of 28 lesions, whereas A50% segmented each lesion. Adapted KF versions, however, were able to segment 26 out of 28 lesions. In the best performing adapted versions, metabolically active tumour volume and SUV TRT variability was similar to those of A50%. KF misclassified certain tumour areas as vertebrae or liver tissue, which was shown to be related to heterogeneous [(18)F]FLT uptake areas within the tumour. For [(18)F]FLT PET studies in NSCLC patients, KF and A50% show comparable tumour volume segmentation performance. The KF method needs, however, a site-specific optimisation. The A50% is therefore a good alternative for tumour segmentation in NSCLC [(18)F]FLT PET studies in multicentre studies. 
Yet it was observed that KF has the potential to subsegment lesions into high and low proliferative areas.
CALIOP Version 3 Data Products: A Comparison to Version 2
NASA Technical Reports Server (NTRS)
Vaughan, Mark; Omar, Ali; Hunt, Bill; Getzewich, Brian; Tackett, Jason; Powell, Kathy; Avery, Melody; Kuehn, Ralph; Young, Stuart; Hu, Yong;
2010-01-01
After launch we discovered that the CALIOP daytime measurements were subject to thermally induced beam drift, which caused the calibration to vary by as much as 30% during the course of a single daytime orbit segment. Using an algorithm developed by Powell et al. (2010), empirically derived correction factors are now computed in near real time as a function of orbit elapsed time, and these are used to compensate for the beam wandering effects.
Jiang, Zhou; Jin, Peizhen; Mishra, Nishikant; Song, Malin
2017-09-01
The problems with China's regional industrial overcapacity are often influenced by local governments. This study constructs a framework that includes resource and environmental costs to analyze overcapacity, using the non-radial directional distance function and the price method to measure industrial capacity utilization and market segmentation in 29 provinces in China from 2002 to 2014. The empirical analysis of the spatial panel econometric model shows that (1) industrial capacity utilization in China's provinces has a ladder-type distribution that gradually decreases from east to west, and there is severe overcapacity in the traditional heavy industry areas; (2) local government intervention has serious negative effects on regional industry utilization, and factor market segmentation inhibits the utilization rate of regional industry more significantly than commodity market segmentation does; (3) economic openness improves the utilization rate of industrial capacity, while the internet penetration rate and regional environmental management investment have no significant impact; and (4) a higher degree of openness and active private economic development have a positive spatial spillover effect, while there is a significant negative spatial spillover effect from local government intervention and industrial structure sophistication. This paper includes the impact of resources and the environment in overcapacity evaluations, which should guide sustainable development in emerging economies.
Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R
2013-01-01
Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The performed experiments exhibited significant increase in performance in terms of mean absolute surface distance errors (2.54±0.75 mm prior to refinement vs. 1.11±0.43 mm post-refinement, p≪0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction was about 2 min per case. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. 
The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. Copyright © 2013 Elsevier Ltd. All rights reserved.
A Review of Large Solid Rocket Motor Free Field Acoustics, Part I
NASA Technical Reports Server (NTRS)
Pilkey, Debbie; Kenny, Robert Jeremy
2011-01-01
At the ATK facility in Utah, large full-scale solid rocket motors are tested. The largest is a five-segment version of the Reusable Solid Rocket Motor (RSRM), which is for use on future launch vehicles. Since 2006, acoustic measurements have been taken on large solid rocket motors at ATK; both the four-segment RSRM and the five-segment RSRMV have been instrumented. Measurements are used to update acoustic prediction models and to correlate against vibration responses of the motor. The presentation focuses on two major sections: Part I, unique challenges associated with measuring rocket acoustics; and Part II, a summary of acoustic measurements over the past five years.
NASA Technical Reports Server (NTRS)
Davis, J. E.; Bonnett, W. S.; Medan, R. T.
1976-01-01
A computer program known as SOLN was developed as an independent segment of the NASA-Ames three-dimensional potential flow analysis system (POTFAN) to solve systems of linear algebraic equations. Methods used include LU decomposition, Householder's method, a partitioning scheme, and a block successive relaxation method. Due to the independent modular nature of the program, it may be used by itself and not necessarily in conjunction with other segments of the POTFAN system.
Lee, Haofu; Nguyen, Alan; Hong, Christine; Hoang, Paul; Pham, John; Ting, Kang
2017-01-01
Introduction: The aims of this study were to evaluate the effects of rapid palatal expansion on the craniofacial skeleton of a patient with unilateral cleft lip and palate (UCLP) and to predict the points of force application for optimal expansion using a 3-dimensional finite element model. Methods: A 3-dimensional finite element model of the craniofacial complex with UCLP was generated from spiral computed tomographic scans with imaging software (Mimics, version 13.1; Materialise, Leuven, Belgium). This model was imported into the finite element solver (version 12.0; ANSYS, Canonsburg, Pa) to evaluate transverse expansion forces from rapid palatal expansion. Finite element analysis was performed with transverse expansion to achieve 5 mm of anterolateral expansion of the collapsed minor segment to simulate correction of the anterior crossbite in a patient with UCLP. Results: High-stress concentrations were observed at the body of the sphenoid, medial to the orbit, and at the inferior area of the zygomatic process of the maxilla. The craniofacial stress distribution was asymmetric, with higher stress levels on the cleft side. When forces were applied more anteriorly on the collapsed minor segment and more posteriorly on the major segment, there was greater expansion of the anterior region of the minor segment with minimal expansion of the major segment. Conclusions: The transverse expansion forces from rapid palatal expansion are distributed to the 3 maxillary buttresses. Finite element analysis is an appropriate tool to study and predict the points of force application for better controlled expansion in patients with UCLP. PMID:27476365
Lee, Haofu; Nguyen, Alan; Hong, Christine; Hoang, Paul; Pham, John; Ting, Kang
2016-08-01
The aims of this study were to evaluate the effects of rapid palatal expansion on the craniofacial skeleton of a patient with unilateral cleft lip and palate (UCLP) and to predict the points of force application for optimal expansion using a 3-dimensional finite element model. A 3-dimensional finite element model of the craniofacial complex with UCLP was generated from spiral computed tomographic scans with imaging software (Mimics, version 13.1; Materialise, Leuven, Belgium). This model was imported into the finite element solver (version 12.0; ANSYS, Canonsburg, Pa) to evaluate transverse expansion forces from rapid palatal expansion. Finite element analysis was performed with transverse expansion to achieve 5 mm of anterolateral expansion of the collapsed minor segment to simulate correction of the anterior crossbite in a patient with UCLP. High-stress concentrations were observed at the body of the sphenoid, medial to the orbit, and at the inferior area of the zygomatic process of the maxilla. The craniofacial stress distribution was asymmetric, with higher stress levels on the cleft side. When forces were applied more anteriorly on the collapsed minor segment and more posteriorly on the major segment, there was greater expansion of the anterior region of the minor segment with minimal expansion of the major segment. The transverse expansion forces from rapid palatal expansion are distributed to the 3 maxillary buttresses. Finite element analysis is an appropriate tool to study and predict the points of force application for better controlled expansion in patients with UCLP. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
Panda, Rashmi; Puhan, N B; Panda, Ganapati
2018-02-01
Accurate optic disc (OD) segmentation is an important step in obtaining cup-to-disc ratio-based glaucoma screening using fundus imaging. It is a challenging task because of the subtle OD boundary, blood vessel occlusion and intensity inhomogeneity. In this Letter, the authors propose an improved version of the random walk algorithm for OD segmentation to tackle such challenges. The algorithm incorporates the mean curvature and Gabor texture energy features to define the new composite weight function to compute the edge weights. Unlike the deformable model-based OD segmentation techniques, the proposed algorithm remains unaffected by curve initialisation and local energy minima problem. The effectiveness of the proposed method is verified with DRIVE, DIARETDB1, DRISHTI-GS and MESSIDOR database images using the performance measures such as mean absolute distance, overlapping ratio, dice coefficient, sensitivity, specificity and precision. The obtained OD segmentation results and quantitative performance measures show robustness and superiority of the proposed algorithm in handling the complex challenges in OD segmentation.
SLS Pathfinder Segments Car Train Departure
2016-03-02
An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, travels along the NASA railroad bridge over the Indian River north of Kennedy Space Center, carrying one of two containers on a railcar for transport to the NASA Jay Jay railroad yard. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the Rotation, Processing and Surge Facility (RPSF). Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.
SLS Pathfinder Segments Car Train Departure
2016-03-02
An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, travels along the NASA railroad bridge over the Indian River north of Kennedy Space Center, with two containers on railcars for transport to the NASA Jay Jay railroad yard. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the Rotation, Processing and Surge Facility (RPSF). Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.
SLS Pathfinder Segments Car Train Departure
2016-03-02
An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, approaches the raised span of the NASA railroad bridge to continue over the Indian River north of Kennedy Space Center with two containers on railcars for storage at the NASA Jay Jay railroad yard. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the Rotation, Processing and Surge Facility (RPSF). Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.
SLS Pathfinder Segments Car Train Departure
2016-03-02
An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, travels along the NASA railroad bridge over the Indian River north of Kennedy Space Center, carrying one of two containers on a railcar for transport to the NASA Jay Jay railroad yard near the center. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the Rotation, Processing and Surge Facility (RPSF). Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.
SLS Pathfinder Segments Car Train Departure
2016-03-02
An Iowa Northern locomotive, contracted by Goodloe Transportation of Chicago, continues along the NASA railroad bridge over the Indian River north of Kennedy Space Center, carrying one of two containers on a railcar for transport to the NASA Jay Jay railroad yard. The containers held two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket that were delivered to the Rotation, Processing and Surge Facility (RPSF). Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will conduct a series of lifts, moves and stacking operations using the booster segments, which are inert, to prepare for Exploration Mission-1, deep-space missions and the journey to Mars. The pathfinder booster segments are from Orbital ATK in Utah.
Reverberation negatively impacts musical sound quality for cochlear implant users.
Roy, Alexis T; Vigeant, Michelle; Munjal, Tina; Carver, Courtney; Jiradejvong, Patpong; Limb, Charles J
2015-09-01
Satisfactory musical sound quality remains a challenge for many cochlear implant (CI) users. In particular, questionnaires completed by CI users suggest that reverberation due to room acoustics can negatively impact their music listening experience. The objective of this study was to more specifically characterize the effect of reverberation on musical sound quality in CI users, normal hearing (NH) non-musicians, and NH musicians using a previously designed assessment method, called Cochlear Implant-MUltiple Stimulus with Hidden Reference and Anchor (CI-MUSHRA). In this method, listeners were randomly presented with an anechoic musical segment and five versions of this segment in which increasing amounts of reverberation were artificially added. Participants listened to the six versions and provided sound quality ratings between 0 (very poor) and 100 (excellent). Results demonstrated that, on average, CI users and NH non-musicians preferred the sound quality of anechoic versions to more reverberant versions. In comparison, NH musicians could be delineated into those who preferred the sound quality of anechoic pieces and those who preferred pieces with some reverberation. This is the first study, to our knowledge, to objectively compare the effects of reverberation on musical sound quality ratings in CI users. These results suggest that musical sound quality for CI users can be improved by non-reverberant listening conditions and musical stimuli in which reverberation is removed.
Fast Automatic Segmentation of White Matter Streamlines Based on a Multi-Subject Bundle Atlas.
Labra, Nicole; Guevara, Pamela; Duclap, Delphine; Houenou, Josselin; Poupon, Cyril; Mangin, Jean-François; Figueroa, Miguel
2017-01-01
This paper presents an algorithm for fast segmentation of white matter bundles from massive dMRI tractography datasets using a multisubject atlas. We use a distance metric to compare streamlines in a subject dataset to labeled centroids in the atlas, and label them using a per-bundle configurable threshold. In order to reduce segmentation time, the algorithm first preprocesses the data using a simplified distance metric to rapidly discard candidate streamlines in multiple stages, while guaranteeing that no false negatives are produced. The smaller set of remaining streamlines is then segmented using the original metric, thus eliminating any false positives from the preprocessing stage. As a result, a single-thread implementation of the algorithm can segment a dataset of almost 9 million streamlines in less than 6 minutes. Moreover, parallel versions of our algorithm for multicore processors and graphics processing units further reduce the segmentation time to less than 22 seconds and to 5 seconds, respectively. This performance enables the use of the algorithm in truly interactive applications for visualization, analysis, and segmentation of large white matter tractography datasets.
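The no-false-negative discard stage can be illustrated with a minimal sketch. Here the "exact" metric is taken to be the mean pointwise distance between equal-length streamlines, and the cheap pre-filter is the distance between streamline centroids, which by the triangle inequality never exceeds the mean pointwise distance; both choices are illustrative assumptions, not the paper's actual metrics.

```python
def mean_pointwise_dist(a, b):
    """'Exact' metric (illustrative): mean Euclidean distance between
    corresponding points of two equal-length streamlines."""
    return sum(
        ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 + (p[2] - q[2]) ** 2) ** 0.5
        for p, q in zip(a, b)
    ) / len(a)

def centroid(s):
    n = len(s)
    return tuple(sum(p[i] for p in s) / n for i in range(3))

def centroid_dist(a, b):
    """Cheap pre-filter metric: distance between streamline centroids.
    By the triangle inequality it never exceeds mean_pointwise_dist,
    so discarding on it can produce no false negatives."""
    ca, cb = centroid(a), centroid(b)
    return ((ca[0] - cb[0]) ** 2 + (ca[1] - cb[1]) ** 2
            + (ca[2] - cb[2]) ** 2) ** 0.5

def segment(streamlines, atlas_centroid_line, threshold):
    """Two-stage segmentation: fast discard, then exact check."""
    survivors = [s for s in streamlines
                 if centroid_dist(s, atlas_centroid_line) <= threshold]
    return [s for s in survivors
            if mean_pointwise_dist(s, atlas_centroid_line) <= threshold]

atlas = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
near = [(0.0, 0.1, 0.0), (1.0, 0.1, 0.0), (2.0, 0.1, 0.0)]
far = [(0.0, 5.0, 0.0), (1.0, 5.0, 0.0), (2.0, 5.0, 0.0)]
bundle = segment([near, far], atlas, threshold=1.0)  # keeps only `near`
```

Because the pre-filter is a provable lower bound on the exact metric, the second pass only ever removes false positives, mirroring the guarantee described in the abstract.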
STATE ACID RAIN RESEARCH AND SCREENING SYSTEM - VERSION 1.0 USER'S MANUAL
The report is a user's manual that describes Version 1.0 of EPA's STate Acid Rain Research and Screening System (STARRSS), developed to assist utility regulatory commissions in reviewing utility acid rain compliance plans. It is a screening tool that is based on scenario analysis...
Joshi, Vinayak S; Reinhardt, Joseph M; Garvin, Mona K; Abramoff, Michael D
2014-01-01
The separation of the retinal vessel network into distinct arterial and venous vessel trees is of high interest. We propose an automated method for identification and separation of retinal vessel trees in a retinal color image by converting a vessel segmentation image into a vessel segment map and identifying the individual vessel trees by graph search. Orientation, width, and intensity of each vessel segment are utilized to find the optimal graph of vessel segments. The separated vessel trees are labeled as primary vessel or branches. We utilize the separated vessel trees for arterial-venous (AV) classification, based on the color properties of the vessels in each tree graph. We applied our approach to a dataset of 50 fundus images from 50 subjects. The proposed method correctly classified 91.44% of vessel pixels as either artery or vein, and 96.42% of major vessel segments.
Cao, Haifeng; Zhang, Jingxu; Yang, Fei; An, Qichang; Zhao, Hongchao; Guo, Peng
2018-05-01
The Thirty Meter Telescope (TMT) project will design and build a 30-m-diameter telescope for research in astronomy at visible and infrared wavelengths. The primary mirror of TMT is made up of 492 hexagonal mirror segments under active control. The highly segmented primary mirror will utilize edge sensors to align and stabilize the relative piston, tip, and tilt degrees of freedom of the segments. The support system assembly (SSA) of the segmented mirror utilizes a guide flexure to decouple the axial support and lateral support, but the flexure's deformation causes measurement error in the edge sensors. We have analyzed the theoretical relationship between the segment movement and the measurement value of the edge sensor, and we propose a matrix-based error correction method. The correction process and the simulation results for the edge sensor are described in this paper.
Market segmentation using perceived constraints
Jinhee Jun; Gerard Kyle; Andrew Mowen
2008-01-01
We examined the practical utility of segmenting potential visitors to Cleveland Metroparks using their constraint profiles. Our analysis identified three segments based on their scores on the dimensions of constraints: Other priorities--visitors who scored the highest on 'other priorities' dimension; Highly Constrained--visitors who scored relatively high on...
Clustering approach for unsupervised segmentation of malarial Plasmodium vivax parasite
NASA Astrophysics Data System (ADS)
Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Mohamed, Zeehaida
2017-10-01
Malaria is a global health problem, particularly in Africa and South Asia, where it causes countless deaths and morbidity cases. Efficient control and prompt treatment of this disease require early detection and accurate diagnosis due to the large number of cases reported yearly. To achieve this aim, this paper proposes an image segmentation approach via unsupervised pixel segmentation of the malaria parasite to automate the diagnosis of malaria. In this study, a modified clustering algorithm, namely enhanced k-means (EKM) clustering, is proposed for malaria image segmentation. In the proposed EKM clustering, the concept of variance and a new version of the transferring process for clustered members are used to assign data to the proper centre during clustering, so that a well-segmented malaria image can be generated. The effectiveness of the proposed EKM clustering has been analyzed qualitatively and quantitatively by comparing it with two popular image segmentation techniques, namely Otsu's thresholding and k-means clustering. The experimental results show that the proposed EKM clustering successfully segmented 100 malaria images of the P. vivax species with segmentation accuracy, sensitivity and specificity of 99.20%, 87.53% and 99.58%, respectively. Hence, the proposed EKM clustering can be considered an image segmentation tool for segmenting malaria images.
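The variance-based transfer step of EKM is specific to the paper, but the baseline it enhances can be sketched as plain 1-D k-means on pixel intensities, which already separates bright from dark pixel populations.

```python
import random

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Plain 1-D k-means on pixel intensities: the baseline that the
    enhanced k-means (EKM) clustering builds on. Returns the final
    cluster centres and a label per input value."""
    rng = random.Random(seed)
    centres = rng.sample(values, k)
    labels = [0] * len(values)
    for _ in range(iters):
        # assignment step: each value joins its nearest centre
        labels = [min(range(k), key=lambda c: abs(v - centres[c]))
                  for v in values]
        # update step: each centre moves to the mean of its members
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centres[c] = sum(members) / len(members)
    return centres, labels

# two well-separated intensity groups (e.g. background vs. stained parasite)
pixels = [10, 12, 11, 200, 205, 198]
centres, labels = kmeans_1d(pixels)
```

On this toy data the centres converge to roughly 11 and 201, and the two intensity groups receive distinct labels; EKM's extra transfer step is designed to rescue points that vanilla k-means assigns to the wrong centre in less separable images.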
Generation of Fullspan Leading-Edge 3D Ice Shapes for Swept-Wing Aerodynamic Testing
NASA Technical Reports Server (NTRS)
Camello, Stephanie C.; Lee, Sam; Lum, Christopher; Bragg, Michael B.
2016-01-01
The deleterious effect of ice accretion on aircraft is often assessed through dry-air flight and wind tunnel testing with artificial ice shapes. This paper describes a method to create fullspan swept-wing artificial ice shapes from partial span ice segments acquired in the NASA Glenn Icing Research Tunnel for aerodynamic wind-tunnel testing. Full-scale ice accretion segments were laser scanned from the Inboard, Midspan, and Outboard wing station models of the 65% scale Common Research Model (CRM65) aircraft configuration. These were interpolated and extrapolated using a weighted averaging method to generate fullspan ice shapes from the root to the tip of the CRM65 wing. The results showed that this interpolation method was able to preserve many of the highly three-dimensional features typically found on swept-wing ice accretions. The interpolated fullspan ice shapes were then scaled to fit the leading edge of an 8.9% scale version of the CRM65 wing for aerodynamic wind-tunnel testing. Reduced fidelity versions of the fullspan ice shapes were also created in which most of the local three-dimensional features were removed. The fullspan artificial ice shapes and the reduced fidelity versions were manufactured using stereolithography.
Identification of everyday objects on the basis of Gaborized outline versions
Sassi, Michaël; Vancleef, Kathleen; Machilsen, Bart; Panis, Sven; Wagemans, Johan
2010-01-01
Using outlines derived from a widely used set of line drawings, we created stimuli geared towards the investigation of contour integration and texture segmentation using shapes of everyday objects. Each stimulus consisted of Gabor elements positioned and oriented curvilinearly along the outline of an object, embedded within a larger Gabor array of homogeneous density. We created six versions of the resulting Gaborized outline stimuli by varying the orientations of elements inside and outside the outline. Data from two experiments, in which participants attempted to identify the objects in the stimuli, provide norms for identifiability and name agreement, and show differences in identifiability between stimulus versions. While there was substantial variability between the individual objects in our stimulus set, further analyses suggest a number of stimulus properties which are generally predictive of identification performance. The stimuli and the accompanying normative data, both available on our website (http://www.gestaltrevision.be/sources/gaboroutlines), provide a useful tool to further investigate contour integration and texture segmentation in both normal and clinical populations, especially when top-down influences on these processes, such as the role of prior knowledge of familiar objects, are of main interest. PMID:23145218
3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion
Dou, Qingxu; Wei, Lijun; Magee, Derek R.; Atkins, Phil R.; Chapman, David N.; Curioni, Giulio; Goddard, Kevin F.; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R.; Rustighi, Emiliano; Swingler, Steven G.; Rogers, Christopher D. F.; Cohn, Anthony G.
2016-01-01
We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed “multi-utility multi-sensor” system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation. 
PMID:27827836
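The marching of a utility track from one cross-section to the next follows the standard Kalman predict/update recursion. The scalar sketch below is a deliberately simplified stand-in for the paper's extended Kalman filter: the state (a single lateral offset for a straight utility) and the noise values are illustrative assumptions.

```python
def kalman_step(x, p, z, q=0.01, r=0.25):
    """One predict/update cycle of a scalar Kalman filter tracking a
    utility's lateral offset from one scan cross-section to the next.
    x, p : previous state estimate and its variance
    z    : hypothesized detection on the new cross-section
    q, r : process and measurement noise variances (assumed values)
    The MCS algorithm itself uses an extended Kalman filter over a 3-D
    track state; this scalar version only illustrates the recursion."""
    # predict: a straight utility keeps its offset, uncertainty grows
    x_pred, p_pred = x, p + q
    # update: blend prediction with the detection via the Kalman gain
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# march one track through three cross-sections of noisy detections
x, p = 0.0, 1.0
for z in [0.1, -0.05, 0.08]:
    x, p = kalman_step(x, p, z)
```

After each cross-section the variance p shrinks, which is what lets the algorithm associate later detections with an increasingly confident track prediction.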
Shopping Effort Classification: Implications for Segmenting the College Student Market
ERIC Educational Resources Information Center
Wright, Robert E.; Palmer, John C.; Eidson, Vicky; Griswold, Melissa
2011-01-01
Market segmentation strategies based on levels of consumer shopping effort have long been utilized by marketing professionals. Such strategies can be beneficial in assisting marketers with development of appropriate marketing mix variables for segments. However, these types of strategies have not been assessed by researchers examining segmentation…
SRB Processing Facilities Media Event
2016-03-01
Members of the news media view the high bay inside the Rotation, Processing and Surge Facility (RPSF) at NASA’s Kennedy Space Center in Florida. Kerry Chreist, with Jacobs Engineering on the Test and Operations Support Contract, talks with a reporter about the booster segments for NASA’s Space Launch System (SLS) rocket. In the far corner, in the vertical position, is one of two pathfinders, or test versions, of solid rocket booster segments for the SLS rocket. The Ground Systems Development and Operations Program and Jacobs are preparing the booster segments, which are inert, for a series of lifts, moves and stacking operations to prepare for Exploration Mission-1, deep-space missions and the journey to Mars.
SRB Processing Facilities Media Event
2016-03-01
Members of the news media watch as two cranes are used to lift one of two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System (SLS) rocket into the vertical position inside the Rotation, Processing and Surge Facility at NASA’s Kennedy Space Center in Florida. The pathfinder booster segment will be moved to the other end of the RPSF and secured on a test stand. The Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will prepare the booster segments, which are inert, for a series of lifts, moves and stacking operations to prepare for Exploration Mission-1, deep-space missions and the journey to Mars.
Updated System-Availability and Resource-Allocation Program
NASA Technical Reports Server (NTRS)
Viterna, Larry
2004-01-01
A second version of the Availability, Cost and Resource Allocation (ACARA) computer program has become available. The first version was reported in an earlier tech brief. To recapitulate: ACARA analyzes the availability, mean-time-between-failures of components, life-cycle costs, and scheduling of resources of a complex system of equipment. ACARA uses a statistical Monte Carlo method to simulate the failure and repair of components while complying with user-specified constraints on spare parts and resources. ACARA evaluates the performance of the system on the basis of a mathematical model developed from a block-diagram representation. The previous version utilized the MS-DOS operating system and could not be run by use of the most recent versions of the Windows operating system. The current version incorporates the algorithms of the previous version but is compatible with Windows and utilizes menus and a file-management approach typical of Windows-based software.
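ACARA's exact model is not given here, but the core Monte Carlo idea it describes, alternating randomly sampled failure and repair intervals and averaging the resulting uptime fraction, can be sketched for a single repairable component. All parameter values below are illustrative.

```python
import random

def simulate_availability(mtbf, mttr, mission_time, n_runs=2000, seed=42):
    """Monte Carlo availability estimate for one repairable component:
    alternate exponentially distributed up (time-to-failure) and down
    (repair) intervals, then report the fraction of mission time spent
    up, averaged over runs. ACARA's actual model additionally handles
    spares, resource constraints, and block-diagram system structure;
    this sketch shows only the core sampling loop."""
    rng = random.Random(seed)
    total_up = 0.0
    for _ in range(n_runs):
        t, up = 0.0, 0.0
        while t < mission_time:
            ttf = rng.expovariate(1.0 / mtbf)   # sampled time to failure
            up += min(ttf, mission_time - t)
            t += ttf
            if t >= mission_time:
                break
            t += rng.expovariate(1.0 / mttr)    # sampled repair time
        total_up += up
    return total_up / (n_runs * mission_time)

est = simulate_availability(mtbf=100.0, mttr=10.0, mission_time=1000.0)
```

For these parameters the analytic steady-state availability is mtbf/(mtbf + mttr) ≈ 0.909, which the simulated estimate should approach as the number of runs grows.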
Implementation of a General Real-Time Visual Anomaly Detection System Via Soft Computing
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A.; Klinko, Steve; Ferrell, Bob; Steinrock, Todd (Technical Monitor)
2001-01-01
The intelligent visual system detects anomalies or defects in real time under normal lighting operating conditions. The application is basically a learning machine that integrates fuzzy logic (FL), artificial neural network (ANN), and genetic algorithm (GA) schemes to process the image, run the learning process, and finally detect the anomalies or defects. The system acquires the image, performs segmentation to separate the object being tested from the background, preprocesses the image using fuzzy reasoning, performs the final segmentation using fuzzy reasoning techniques to retrieve regions with potential anomalies or defects, and finally retrieves them using a learning model built via ANN and GA techniques. FL provides a powerful framework for knowledge representation and overcomes uncertainty and vagueness typically found in image analysis. ANN provides learning capabilities, and GA leads to robust learning results. An application prototype currently runs on a regular PC under Windows NT, and preliminary work has been performed to build an embedded version with multiple image processors. The application prototype is being tested at the Kennedy Space Center (KSC), Florida, to visually detect anomalies along slide basket cables utilized by the astronauts to evacuate the NASA Shuttle launch pad in an emergency. The potential applications of this anomaly detection system in an open environment are quite wide. Another current, potentially viable application at NASA is in detecting anomalies of the NASA Space Shuttle Orbiter's radiator panels.
NASA Astrophysics Data System (ADS)
Hashemi, Sayed Masoud; Lee, Young; Eriksson, Markus; Nordström, Håkan; Mainprize, James; Grouza, Vladimir; Huynh, Christopher; Sahgal, Arjun; Song, William Y.; Ruschin, Mark
2017-03-01
A Contrast and Attenuation-map (CT-number) Linearity Improvement (CALI) framework is proposed for cone-beam CT (CBCT) images used for brain stereotactic radiosurgery (SRS). The proposed framework is used together with our high spatial resolution iterative reconstruction algorithm and is tailored for the Leksell Gamma Knife ICON (Elekta, Stockholm, Sweden). The incorporated CBCT system in ICON facilitates frameless SRS planning and treatment delivery. The ICON employs a half-cone geometry to accommodate the existing treatment couch. This geometry increases the amount of artifacts and together with other physical imperfections causes image inhomogeneity and contrast reduction. Our proposed framework includes a preprocessing step, involving a shading and beam-hardening artifact correction, and a post-processing step to correct the dome/capping artifact caused by the spatial variations in x-ray energy generated by the bowtie filter. Our shading correction algorithm relies solely on the acquired projection images (i.e. no prior information required) and utilizes filtered-back-projection (FBP) reconstructed images to generate a segmented bone and soft-tissue map. Ideal projections are estimated from the segmented images and a smoothed version of the difference between the ideal and measured projections is used in correction. The proposed beam-hardening and dome artifact corrections are segmentation free. The CALI was tested on CatPhan, as well as patient images acquired on the ICON system. The resulting clinical brain images show substantial improvements in soft contrast visibility, revealing structures such as ventricles and lesions which were otherwise undetectable in FBP-reconstructed images. The linearity of the reconstructed attenuation-map was also improved, resulting in more accurate CT numbers.
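The "smoothed version of the difference between ideal and measured projections" correction can be sketched on a 1-D profile. The moving-average smoother, window size, and additive form below are assumptions for illustration; in the actual framework the ideal projections come from reprojecting the segmented bone/soft-tissue map.

```python
def shading_correction(measured, ideal, window=3):
    """Correct a 1-D projection profile by adding a smoothed version
    of the (ideal - measured) difference, in the spirit of the
    shading-correction step described above. The moving-average
    smoother and window size are illustrative assumptions."""
    diff = [i - m for i, m in zip(ideal, measured)]
    half = window // 2
    # simple moving average keeps only the low-frequency shading error
    smoothed = [
        sum(diff[max(0, k - half):k + half + 1])
        / len(diff[max(0, k - half):k + half + 1])
        for k in range(len(diff))
    ]
    return [m + s for m, s in zip(measured, smoothed)]

# a linear shading ramp over a profile that should be flat
measured = [1.0, 1.1, 1.2, 1.3, 1.4]
ideal = [1.2] * 5
corrected = shading_correction(measured, ideal)
```

Smoothing the difference (rather than replacing measured with ideal outright) is what prevents the correction from erasing genuine high-frequency anatomy along with the shading.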
GPU-based relative fuzzy connectedness image segmentation.
Zhuge, Ying; Ciesielski, Krzysztof C; Udupa, Jayaram K; Miller, Robert W
2013-01-01
Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.
Grunert, Klaus G; Perrea, Toula; Zhou, Yanfeng; Huang, Guang; Sørensen, Bjarne T; Krystallis, Athanasios
2011-04-01
Research related to food behaviour in China is still scarce, one reason being that food consumption patterns in East Asia do not appear to be easily analyzed by models originating in Western cultures. The objective of the present work is to examine the ability of the food-related lifestyle (FRL) instrument to reveal food consumption patterns in a Chinese context. Data were collected from 479 respondents in 6 major Chinese cities using a Chinese version of the FRL instrument. Analysis of the reliability and dimensionality of the scales resulted in a revised version of the instrument, in which a number of dimensions of the original instrument had to be omitted. This revised instrument was tested for statistical robustness and used as a basis for the derivation of consumer segments. Construct validity of the instrument was then investigated by profiling the segments in terms of consumer values, attitudes and purchase behaviour, using frequency of consumption of pork products as an example. Three consumer segments were identified: concerned, uninvolved and traditional. This pattern partly replicates those identified in Western cultures. Moreover, all three segments showed consistent value-attitude-behaviour profiles. The results also suggest which dimensions are missing and would be needed in a more comprehensive instrument adapted to Chinese conditions, most notably a broader treatment of eating-out activities. Copyright © 2010 Elsevier Ltd. All rights reserved.
Clustering-based spot segmentation of cDNA microarray images.
Uslan, Volkan; Bucak, Ihsan Ömür
2010-01-01
Microarrays are utilized because they provide useful information about thousands of gene expressions simultaneously. In this study, the segmentation step of microarray image processing has been implemented. Clustering-based methods, fuzzy c-means and k-means, have been applied for the segmentation step that separates the spots from the background. The experiments show that fuzzy c-means segmented the spots of the microarray image more accurately than k-means.
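As a concrete sketch of the clustering step, here is a minimal fuzzy c-means on a 1-D intensity feature. The deterministic initialization, the membership exponent m = 2, and the toy data are illustrative; real spot segmentation would run on per-pixel features of the scanned microarray image:

```python
import numpy as np

def fuzzy_cmeans_1d(x, c=2, m=2.0, iters=50):
    # Initialize centers spread across the data range (deterministic).
    centers = np.linspace(x.min(), x.max(), c)
    for _ in range(iters):
        # Distances of every point to every center (c x n).
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        # Standard FCM membership update: u_ik ∝ d_ik^(-2/(m-1)).
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)            # memberships sum to 1 per point
        # Center update: membership-weighted means.
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)
    return centers, u
```

Hard labels fall out by taking the cluster of maximum membership per point; the soft memberships are what distinguish fuzzy c-means from k-means near spot boundaries.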
Automatic liver segmentation on Computed Tomography using random walkers for treatment planning
Moghbel, Mehrdad; Mashohor, Syamsiah; Mahmud, Rozi; Saripan, M. Iqbal Bin
2016-01-01
Segmentation of the liver from Computed Tomography (CT) volumes plays an important role in the choice of treatment strategies for liver diseases. Despite much attention, liver segmentation remains a challenging task due to the lack of visible edges on most boundaries of the liver, coupled with high variability of both intensity patterns and anatomical appearances; all of these difficulties become more prominent in pathological livers. To achieve a more accurate segmentation, a random walker based framework is proposed that can segment contrast-enhanced liver CT images with great accuracy and speed. Based on the location of the right lung lobe, the liver dome is automatically detected, thus eliminating the need for manual initialization. The computational requirements are further minimized by segmenting the rib-caged area; the liver is then extracted using the random walker method. The proposed method achieved one of the highest accuracies reported in the literature on a mixed healthy and pathological liver dataset, with an overlap error of 4.47 % and a Dice similarity coefficient of 0.94, while it showed exceptional accuracy in segmenting pathological livers, with an overlap error of 5.95 % and a Dice similarity coefficient of 0.91. PMID:28096782
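The random walker formulation referenced here reduces to a linear solve (the combinatorial Dirichlet problem): for each label, the probability that a walker started at an unlabeled pixel first reaches a seed of that label comes from the graph Laplacian restricted to unseeded nodes. Below is a 1-D dense-numpy toy sketch with an illustrative beta; a real implementation on CT volumes would use sparse solvers and 3-D neighborhoods:

```python
import numpy as np

def random_walker_1d(intensity, seeds, beta=10.0):
    # Seed-based random walker segmentation on a 1-D signal.
    # seeds: 0 for unlabeled samples, positive ints for labels.
    n = len(intensity)
    # Edge weights penalize intensity differences between neighbors.
    w = np.exp(-beta * np.diff(intensity.astype(float)) ** 2)
    # Graph Laplacian of the chain graph.
    L = np.zeros((n, n))
    for i, wi in enumerate(w):
        L[i, i] += wi; L[i + 1, i + 1] += wi
        L[i, i + 1] -= wi; L[i + 1, i] -= wi
    unseeded = np.where(seeds == 0)[0]
    seeded = np.where(seeds > 0)[0]
    labels = np.unique(seeds[seeded])
    Lu = L[np.ix_(unseeded, unseeded)]   # Laplacian block of unseeded nodes
    B = L[np.ix_(unseeded, seeded)]      # coupling to seeded nodes
    probs = np.zeros((len(unseeded), len(labels)))
    for k, lab in enumerate(labels):
        m = (seeds[seeded] == lab).astype(float)
        # Dirichlet problem: Lu x = -B m gives per-node probabilities.
        probs[:, k] = np.linalg.solve(Lu, -B @ m)
    out = seeds.copy()
    out[unseeded] = labels[np.argmax(probs, axis=1)]
    return out
```

On a step signal with one seed per side, the label boundary lands at the intensity step, because the weight across the step is exponentially small.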
Voxel- and Graph-Based Point Cloud Segmentation of 3D Scenes Using Perceptual Grouping Laws
NASA Astrophysics Data System (ADS)
Xu, Y.; Hoegner, L.; Tuttas, S.; Stilla, U.
2017-05-01
Segmentation is the fundamental step for recognizing and extracting objects from point clouds of 3D scenes. In this paper, we present a strategy for point cloud segmentation using a voxel structure and graph-based clustering with perceptual grouping laws, which allows a learning-free, completely automatic but parametric solution for segmenting 3D point clouds. Specifically, two segmentation methods utilizing voxel and supervoxel structures are reported and tested. The voxel-based data structure increases the efficiency and robustness of the segmentation process, suppressing the negative effects of noise, outliers, and uneven point densities. The clustering of voxels and supervoxels is carried out using graph theory on the basis of local contextual information, whereas conventional clustering algorithms commonly use merely pairwise information. By the use of perceptual laws, our method conducts the segmentation in a purely geometric way, avoiding the use of RGB color and intensity information, so that it can be applied to more general applications. Experiments using different datasets have demonstrated that our proposed methods achieve good results, especially for complex scenes and nonplanar object surfaces. Quantitative comparisons between our methods and other representative segmentation methods also confirm the effectiveness and efficiency of our proposals.
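The voxel structure described above amounts to hashing points into cubic cells before any clustering takes place; subsequent graph construction then operates on voxels rather than raw points. A minimal sketch of the voxelization step, with an illustrative edge length:

```python
import numpy as np
from collections import defaultdict

def voxelize(points, size):
    # Group 3-D points into cubic voxels of edge length `size`.
    # Returns {integer voxel index: list of point row indices}.
    keys = np.floor(points / size).astype(int)
    vox = defaultdict(list)
    for i, k in enumerate(map(tuple, keys)):
        vox[k].append(i)
    return vox
```

Clustering then treats each occupied voxel as a node, which is what suppresses the effect of outliers and uneven point densities: a voxel's attributes are aggregates over its member points.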
Efficient Algorithms for Segmentation of Item-Set Time Series
NASA Astrophysics Data System (ADS)
Chundi, Parvathi; Rosenkrantz, Daniel J.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
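The dynamic-programming scheme outlined here can be sketched under one concrete choice of measure function: take the union of the time-point item sets as a segment's item set, and count the items missing from each time point as the segment difference. This quadratic-time version is illustrative only and does not reproduce the paper's optimized segment-difference algorithms:

```python
def segment_cost(sets, i, j):
    # Segment difference of [i, j): union measure vs. per-point sets.
    union = set().union(*sets[i:j])
    return sum(len(union - s) for s in sets[i:j])

def optimal_segmentation(sets, k):
    # DP: split the item-set series into k contiguous segments
    # minimizing the total segment difference.
    n = len(sets)
    INF = float("inf")
    # dp[m][j] = best cost of the first j points using m segments.
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(m, n + 1):
            for i in range(m - 1, j):
                c = dp[m - 1][i] + segment_cost(sets, i, j)
                if c < dp[m][j]:
                    dp[m][j], cut[m][j] = c, i
    # Recover segment boundaries by walking the cut table backwards.
    bounds, j = [], n
    for m in range(k, 0, -1):
        bounds.append((cut[m][j], j)); j = cut[m][j]
    return dp[k][n], bounds[::-1]
```

For the series [{1,2}, {1,2}, {3}, {3,4}] and k = 2, the optimal cut separates the {1,2}-like points from the {3}-like points rather than splitting on segment length alone, which is the point the paper's experiments make.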
A deconstruction of the I-M-L commitment segmentation of forest recreationists
James D. Absher; Gerard T. Kyle
2007-01-01
Previous work has established the general utility of segmenting forest recreationists according to their commitment profiles into Indifferents, Moderates, and Loyalists (IML) groups. Observed differences between these segments suggest that place identity and affect are more central to management than previously thought. This study extends this finding through the use...
Enhancing MPLS Protection Method with Adaptive Segment Repair
NASA Astrophysics Data System (ADS)
Chen, Chin-Ling
We propose a novel adaptive segment repair mechanism to improve traditional MPLS (Multi-Protocol Label Switching) failure recovery. The proposed mechanism protects one or more contiguous high failure probability links by dynamic setup of segment protection. Simulations demonstrate that the proposed mechanism reduces failure recovery time while also increasing network resource utilization.
Segmentation and object-oriented processing of single-season and multi-season Landsat-7 ETM+ data was utilized for the classification of wetlands in a 1560 km2 study area of north central Florida. This segmentation and object-oriented classification outperformed the traditional ...
Multi-atlas pancreas segmentation: Atlas selection based on vessel structure.
Karasawa, Ken'ichi; Oda, Masahiro; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Chu, Chengwen; Zheng, Guoyan; Rueckert, Daniel; Mori, Kensaku
2017-07-01
Automated organ segmentation from medical images is an indispensable component for clinical applications such as computer-aided diagnosis (CAD) and computer-assisted surgery (CAS). We utilize a multi-atlas segmentation scheme, which has recently been used in different approaches in the literature to achieve more accurate and robust segmentation of anatomical structures in computed tomography (CT) volume data. Among abdominal organs, the pancreas has large inter-patient variability in its position, size and shape. Moreover, the CT intensity of the pancreas closely resembles adjacent tissues, rendering its segmentation a challenging task. Due to this, conventional intensity-based atlas selection for pancreas segmentation often fails to select atlases that are similar in pancreas position and shape to those of the unlabeled target volume. In this paper, we propose a new atlas selection strategy based on vessel structure around the pancreatic tissue and demonstrate its application to a multi-atlas pancreas segmentation. Our method utilizes vessel structure around the pancreas to select atlases with high pancreatic resemblance to the unlabeled volume. Also, we investigate two types of applications of the vessel structure information to the atlas selection. Our segmentations were evaluated on 150 abdominal contrast-enhanced CT volumes. The experimental results showed that our approach can segment the pancreas with an average Jaccard index of 66.3% and an average Dice overlap coefficient of 78.5%. Copyright © 2017 Elsevier B.V. All rights reserved.
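The two overlap scores reported above are directly related: with Jaccard index J, the Dice coefficient is D = 2J/(1+J). A minimal sketch for binary masks:

```python
import numpy as np

def jaccard(a, b):
    # Jaccard index: |A ∩ B| / |A ∪ B| for binary masks.
    a, b = a.astype(bool), b.astype(bool)
    return (a & b).sum() / (a | b).sum()

def dice(a, b):
    # Dice coefficient: 2|A ∩ B| / (|A| + |B|).
    a, b = a.astype(bool), b.astype(bool)
    return 2 * (a & b).sum() / (a.sum() + b.sum())
```

Reporting both, as this paper does, is redundant in the exact per-case sense (each determines the other) but conventional, since averages over a dataset break the one-to-one relation.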
The tools of competition: Differentiation, segmentation and the microprocessor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepmeier, J.M.; Jermain, D.O.; Egnor, T.L.
1993-11-01
The microprocessor enables electric utilities to recover product differentiation and market segmentation tools that they relinquished decades ago. These tools present a "double-edged" opportunity to the industry. Product differentiation and market segmentation are deeply and permanently embedded in the corporate strategy and culture of virtually every successful firm. Most electric utilities, however, continue to promote a generic product to an undifferentiated captive audience. This approach was also common in the pre-Yeltsin USSR, where advertisements simply read "Buy Beer" or "Eat Potatoes". Electric utilities relinquished the differentiation/segmentation function in the far distant past to the suppliers of end-use energy appliances such as GE and Carrier. By default they assigned themselves the role of commodity supplier. Historically, this role has been protected in the marketplace and insulated from competition by two strong barriers: economies of scale and status as a legally franchised monopoly in a well-defined geographic territory. These two barriers do not exist independently; the second depends on the first. When scale economies cease and then reverse, the industry's legally protected position in the marketplace begins to erode. The lack of product differentiation and market segmentation, which was inconsequential before, now becomes a serious handicap: these same relinquished tools seem to be essential for success in a competitive environment.
de Klerk, Susan; Buchanan, Helen; Jerosch-Herold, Christina
Systematic review. The Disabilities of the Arm, Shoulder and Hand Questionnaire has multiple language versions from many countries around the world. In addition, there is extensive research evidence of its psychometric properties. The purpose of this study was to systematically review the evidence available on the validity and clinical utility of the Disabilities of the Arm, Shoulder and Hand as a measure of activity and participation in patients with musculoskeletal hand injuries in developing-country contexts. We registered the review with the international prospective register of systematic reviews prior to conducting a comprehensive literature search and extracting descriptive data. Two reviewers independently assessed methodological quality with the Consensus-Based Standards for the Selection of Health Measurement Instruments critical appraisal tool, the checklist to operationalize measurement characteristics of patient-rated outcome measures, and the multidimensional model of clinical utility. Fourteen studies reporting 12 language versions met the eligibility criteria. Two language versions (Persian and Turkish) had an overall rating of good, and one (Thai) had an overall rating of excellent for cross-cultural validity. The remaining 9 language versions had an overall poor rating for cross-cultural validity. Content and construct validity and clinical utility yielded similar results. Poor quality ratings for validity and clinical utility were due to insufficient documentation of results and inadequate psychometric testing. With the increase in migration and globalization, hand therapists are likely to require a range of culturally adapted and translated versions of the Disabilities of the Arm, Shoulder and Hand. Recommendations include rigorous application and reporting of cross-cultural adaptation, appropriate psychometric testing, and testing of clinical utility in routine clinical practice. Copyright © 2017 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.
Brain tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN).
Iqbal, Sajid; Ghani, M Usman; Saba, Tanzila; Rehman, Amjad
2018-04-01
A tumor can be found in any area of the brain and can be of any size, shape, and contrast. Multiple tumors of different types may exist in a human brain at the same time. Accurate segmentation of the tumor area is considered the primary step in the treatment of brain tumors. Deep learning is a set of promising techniques that can provide better results than non-deep-learning techniques for segmenting the tumorous part of a brain. This article presents a deep convolutional neural network (CNN) to segment brain tumors in MRIs. The proposed network uses the BRATS segmentation challenge dataset, which is composed of images obtained through four different modalities. Accordingly, we present an extended version of an existing network to solve the segmentation problem. The network architecture consists of multiple neural network layers connected in sequential order, with the feeding of convolutional feature maps at the peer level. Experimental results on the BRATS 2015 benchmark data show the usability of the proposed approach and its superiority over other approaches in this area of research. © 2018 Wiley Periodicals, Inc.
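The convolutional feature maps such networks stack are produced by a single core operation. A framework-free numpy sketch of valid-mode 2-D convolution (implemented, as in most deep-learning frameworks, as cross-correlation) is shown below; an actual segmentation network like the one described would of course be built in a deep-learning framework:

```python
import numpy as np

def conv2d(img, kernel):
    # Valid-mode 2-D cross-correlation: slide the kernel over the
    # image and take the elementwise product-sum at each position.
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out
```

Each CNN layer applies many such kernels (learned, not fixed) followed by a nonlinearity, producing the per-level feature maps that the described architecture feeds forward.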
Kuzmina, Margarita; Manykin, Eduard; Surina, Irina
2004-01-01
An oscillatory network of columnar architecture located in a 3D spatial lattice was recently designed by the authors as an oscillatory model of the brain's visual cortex. A single network oscillator is a relaxational neural oscillator with internal dynamics tunable by visual image characteristics - local brightness and elementary bar orientation. It is able to demonstrate either an activity state (stable undamped oscillations) or "silence" (quickly damped oscillations). Self-organized nonlocal dynamical connections of oscillators depend on oscillator activity levels and on the orientations of cortical receptive fields. Network performance consists in transfer into a state of clusterized synchronization. At the current stage, grey-level image segmentation tasks are carried out by a 2D oscillatory network, obtained as a limit version of the source model. Owing to the added control of network coupling strength, the reduced 2D network provides synchronization-based image segmentation. New results on segmentation of brightness and texture images presented in the paper demonstrate accurate network performance and informative visualization of segmentation results, inherent in the model.
Aalaei, Shima; Rajabi Naraki, Zahra; Nematollahi, Fatemeh; Beyabanaki, Elaheh; Shahrokhi Rad, Afsaneh
2017-01-01
Background. Screw-retained restorations are favored in some clinical situations such as limited inter-occlusal spaces. This study was designed to compare stresses developed in the peri-implant bone in two different types of screw-retained restorations (segmented vs. non-segmented abutment) using a finite element model. Methods. An implant, 4.1 mm in diameter and 10 mm in length, was placed in the first molar site of a mandibular model with 1 mm of cortical bone on the buccal and lingual sides. Segmented and non-segmented screw abutments with their crowns were placed on the simulated implant in each model. After loading (100 N, axial and 45° non-axial), von Mises stress was recorded using ANSYS software, version 12.0.1. Results. The maximum stresses in the non-segmented abutment screw were less than those of segmented abutment (87 vs. 100, and 375 vs. 430 MPa under axial and non-axial loading, respectively). The maximum stresses in the peri-implant bone for the model with segmented abutment were less than those of non-segmented ones (21 vs. 24 MPa, and 31 vs. 126 MPa under vertical and angular loading, respectively). In addition, the micro-strain of peri-implant bone for the segmented abutment restoration was less than that of non-segmented abutment. Conclusion. Under axial and non-axial loadings, non-segmented abutment showed less stress concentration in the screw, while there was less stress and strain in the peri-implant bone in the segmented abutment. PMID:29184629
SRB Processing Facilities Media Event
2016-03-01
At the Rotation, Processing and Surge Facility (RPSF) at NASA’s Kennedy Space Center in Florida, members of the news media photograph the process as cranes are used to lift one of two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket. The Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, are preparing the booster segments, which are inert, for a series of lifts, moves and stacking operations to prepare for Exploration Mission-1, deep-space missions and the journey to Mars.
SRB Processing Facilities Media Event
2016-03-01
At the Rotation, Processing and Surge Facility (RPSF) at NASA’s Kennedy Space Center in Florida, members of the news media watch as cranes are used to lift one of two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket. The Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, are preparing the booster segments, which are inert, for a series of lifts, moves and stacking operations to prepare for Exploration Mission-1, deep-space missions and the journey to Mars.
Modified SSCP method using sequential electrophoresis of multiple nucleic acid segments
Gatti, Richard A.
2002-10-01
The present invention is directed to a method of screening large, complex, polyexonic eukaryotic genes such as the ATM gene for mutations and polymorphisms by an improved version of single strand conformation polymorphism (SSCP) electrophoresis that allows electrophoresis of two or three amplified segments in a single lane. The present invention also is directed to new mutations and polymorphisms in the ATM gene that are useful in performing more accurate screening of human DNA samples for mutations and in distinguishing mutations from polymorphisms, thereby improving the efficiency of automated screening methods.
SRB Processing Facilities Media Event
2016-03-01
Members of the news media view the high bay inside the Rotation, Processing and Surge Facility (RPSF) at NASA’s Kennedy Space Center in Florida. Kerry Chreist, with Jacobs Engineering on the Test and Operations Support Contract, explains the various test stands and how they will be used to prepare booster segments for NASA’s Space Launch System (SLS) rocket. In the far corner, in the vertical position, is one of two pathfinders, or test versions, of solid rocket booster segments for the SLS rocket. The Ground Systems Development and Operations Program and Jacobs are preparing the booster segments, which are inert, for a series of lifts, moves and stacking operations to prepare for Exploration Mission-1, deep-space missions and the journey to Mars.
Mathematical morphology for automated analysis of remotely sensed objects in radar images
NASA Technical Reports Server (NTRS)
Daida, Jason M.; Vesecky, John F.
1991-01-01
A symbiosis of pyramidal segmentation and morphological transformation is described. The pyramidal segmentation portion of the symbiosis has resulted in a low (2.6 percent) misclassification error rate for a one-look simulation. Other simulations indicate lower error rates (1.8 percent for a four-look image). The morphological transformation portion has resulted in meaningful partitions with a minimal loss of fractal boundary information. An unpublished version of Thicken, suitable for watershed transformations of fractal objects, is also presented. It is demonstrated that the proposed symbiosis works with SAR (synthetic aperture radar) images: in this case, a four-look Seasat image of sea ice. It is concluded that the symbiotic forms of both segmentation and morphological transformation seem well suited for unsupervised geophysical analysis.
Progressive segmented health insurance: Colombian health reform and access to health services.
Ruiz, Fernando; Amaya, Liliana; Venegas, Stella
2007-01-01
Equal access for poor populations to health services is a comprehensive objective for any health reform. The Colombian health reform addressed this issue through a segmented progressive social health insurance approach. The strategy was to assure universal coverage expanding the population covered through payroll linked insurance, and implementing a subsidized insurance program for the poorest populations, those not affiliated through formal employment. A prospective study was performed to follow-up health service utilization and out-of-pocket expenses using a cohort design. It was representative of four Colombian cities (Cendex Health Services Use and Expenditure Study, 2001). A four part econometric model was applied. The model related medical service utilization and medication with different socioeconomic, geographic, and risk associated variables. Results showed that subsidized health insurance improves health service utilization and reduces the financial burden for the poorest, as compared to those non-insured. Other social health insurance schemes preserved high utilization with variable out-of-pocket expenditures. Family and age conditions have significant effect on medical service utilization. Geographic variables play a significant role in hospital inpatient service utilization. Both, geographic and income variables also have significant impact on out-of-pocket expenses. Projected utilization rates and a simulation favor a dual policy for two-stage income segmented insurance to progress towards the universal insurance goal. Copyright (c) 2006 John Wiley & Sons, Ltd.
Lee, Chun Fan; Ng, Raymond; Luo, Nan; Wong, Nan Soon; Yap, Yoon Sim; Lo, Soo Kien; Chia, Whay Kuang; Yee, Alethea; Krishna, Lalit; Wong, Celest; Goh, Cynthia; Cheung, Yin Bun
2013-01-01
To examine the measurement properties of and comparability between the English and Chinese versions of the five-level EuroQoL Group's five-dimension questionnaire (EQ-5D) in breast cancer patients in Singapore. This is an observational study of 269 patients. Known-group validity and responsiveness of the EQ-5D utility index and visual analog scale (VAS) were assessed in relation to various clinical characteristics and longitudinal change in performance status, respectively. Convergent and divergent validity was examined by correlation coefficients between the EQ-5D and a breast cancer-specific instrument. Test-retest reliability was evaluated. The two language versions were compared by multiple regression analyses. For both English and Chinese versions, the EQ-5D utility index and VAS demonstrated known-group validity and convergent and divergent validity, and presented sufficient test-retest reliability (intraclass correlation = 0.72 to 0.83). The English version was responsive to changes in performance status. The Chinese version was responsive to decline in performance status, but there was no conclusive evidence about its responsiveness to improvement in performance status. In the comparison analyses of the utility index and VAS between the two language versions, borderline results were obtained, and equivalence cannot be definitely confirmed. The five-level EQ-5D is valid, responsive, and reliable in assessing health outcome of breast cancer patients. The English and Chinese versions provide comparable measurement results.
Gebreyesus, Grum; Lund, Mogens S; Buitenhuis, Bart; Bovenhuis, Henk; Poulsen, Nina A; Janss, Luc G
2017-12-05
Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci of large effect. The amount of variation explained may vary between regions leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls. Single-nucleotide polymorphisms (SNPs), from 50K SNP arrays, were grouped into non-overlapping genome segments. A segment was defined as one SNP, or a group of 50, 100, or 200 adjacent SNPs, or one chromosome, or the whole genome. Traditional univariate and bivariate genomic best linear unbiased prediction (GBLUP) models were also run for comparison. Reliabilities were calculated through a resampling strategy and using deterministic formula. BayesAS models improved prediction reliability for most of the traits compared to GBLUP models and this gain depended on segment size and genetic architecture of the traits. The gain in prediction reliability was especially marked for the protein composition traits β-CN, κ-CN and β-LG, for which prediction reliabilities were improved by 49 percentage points on average using the MT-BayesAS model with a 100-SNP segment size compared to the bivariate GBLUP. Prediction reliabilities were highest with the BayesAS model that uses a 100-SNP segment size. 
The bivariate versions of our BayesAS models resulted in extra gains of up to 6% in prediction reliability compared to the univariate versions. Substantial improvement in prediction reliability was possible for most of the traits related to milk protein composition using our novel BayesAS models. Grouping adjacent SNPs into segments provided enhanced information to estimate parameters and allowing the segments to have different (co)variances helped disentangle heterogeneous (co)variances across the genome.
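The segment construction used by BayesAS (grouping adjacent SNPs into non-overlapping windows) and the resulting decomposition of the genomic value into per-segment contributions can be sketched as follows; the array names are illustrative, and the actual model additionally estimates segment-specific (co)variances:

```python
import numpy as np

def snp_segments(n_snps, size):
    # Non-overlapping segments of `size` adjacent SNPs; the last
    # segment keeps the remainder.
    return [(i, min(i + size, n_snps)) for i in range(0, n_snps, size)]

def genomic_value(X, effects, segments):
    # Total genomic value as a sum of per-segment contributions,
    # mirroring how BayesAS attributes variance segment by segment.
    # X: genotype matrix (individuals x SNPs), effects: SNP effects.
    return sum(X[:, a:b] @ effects[a:b] for a, b in segments)
```

The decomposition is exact (the per-segment sums add up to the full X @ effects); what changes between GBLUP and BayesAS is the prior placed on the effects within each segment, not the linear algebra.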
NASA Astrophysics Data System (ADS)
Goldberg, Robert R.; Goldberg, Michael R.
1999-05-01
A previous paper by the authors presented an algorithm that successfully segmented organs grown in vitro from their surroundings. It was noticed that one difficulty in standard dyeing techniques for the analysis of contours in organs was that the antigen necessary to bind with the fluorescent dye was not uniform throughout the cell borders. To address these concerns, a new fluorescent technique was utilized: a transgenic mouse line was genetically engineered using hoxb7/gfp (green fluorescent protein). Whereas the original technique (fixed and blocking) required numerous noise-removal filters and sophisticated segmentation techniques, segmentation of the GFP kidney required only an adaptive binary threshold technique, which yielded excellent results without the need for specific noise reduction. This is important for tracking kidney development through time.
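An adaptive binary threshold of the kind mentioned above can be sketched as a local-mean rule: a pixel is foreground when it exceeds the mean of its neighborhood. A minimal numpy version; the block size, offset, and exact thresholding rule are illustrative and may differ from the original work's:

```python
import numpy as np

def adaptive_threshold(img, block=3, offset=0.0):
    # Local-mean adaptive binarization: a pixel is foreground when it
    # exceeds the mean of its (block x block) neighborhood plus offset.
    pad = block // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros_like(img, dtype=bool)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            local = padded[i:i + block, j:j + block].mean()
            out[i, j] = img[i, j] > local + offset
    return out
```

Because the threshold adapts to the local neighborhood, uniform fluorescence of the GFP signal against a dark background separates cleanly without a global threshold or prior denoising.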
Segmentation of culturally diverse visitors' values in forest recreation management
C. Li; H.C. Zinn; G.E. Chick; J.D. Absher; A.R. Graefe; Y. Hsu
2007-01-01
The purpose of this study was to examine the potential utility of Hofstede's (1980) measure of cultural values for group segmentation in an ethnically diverse population in a forest recreation context, and to validate the values segmentation, if any, via socio-demographic and service-quality-related variables. In 2002, the visitors to the Angeles National Forest (ANF)...
Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis
NASA Astrophysics Data System (ADS)
Che, E.; Olsen, M. J.
2017-09-01
Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common post-processing procedure that groups the point cloud into a number of clusters to simplify the data for the subsequent modelling and analysis needed by most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern used during acquisition of TLS data by most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimation of the normal at each point, which limits the propagation of normal-estimation errors into the segmentation. Both an indoor and an outdoor scene are used in experiments to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
NASA Astrophysics Data System (ADS)
Remmele, Steffen; Ritzerfeld, Julia; Nickel, Walter; Hesser, Jürgen
2011-03-01
RNAi-based high-throughput microscopy screens have become an important tool in the biological sciences for decrypting the mostly unknown biological functions of human genes. However, manual analysis is impossible for such screens, since the number of image data sets can often be in the hundred thousands. Reliable automated tools are thus required to analyse the fluorescence microscopy image data sets, which usually contain two or more reaction channels. The image analysis tool presented herein is designed to analyse an RNAi screen investigating the intracellular trafficking and targeting of acylated Src kinases. In this specific screen, a data set consists of three reaction channels and the investigated cells can appear in different phenotypes. The main issues of the image processing task are an automatic cell segmentation, which has to be robust and accurate for all phenotypes, and a successive phenotype classification. The cell segmentation is done in two steps: the cell nuclei are segmented first, and a classifier-enhanced region growing based on the nuclei then segments the cells. The classification of the cells is realized by a support vector machine, which has to be trained manually using supervised learning. Furthermore, the tool is brightness invariant, allowing different staining quality, and it provides a quality control that copes with typical defects during preparation and acquisition. A first version of the tool has already been successfully applied to an RNAi screen containing three hundred thousand image data sets, and the SVM-extended version is designed for additional screens.
Phillips, Lawrence; Pearl, Lisa
2015-11-01
The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition. Copyright © 2015 Cognitive Science Society, Inc.
ERIC Educational Resources Information Center
Lin, Lan-Ping; Lin, Jin-Ding
2013-01-01
Burnout has been considered important to understand the well-being of people who work with individuals with intellectual disabilities (ID) and developmental disabilities (DD). To identify personal and workplace characteristics associated with burnout, this study aimed to utilize the Chinese version of the Copenhagen Burnout Inventory to provide a…
Heritability and reliability of automatically segmented human hippocampal formation subregions
Whelan, Christopher D.; Hibar, Derrek P.; van Velzen, Laura S.; Zannas, Anthony S.; Carrillo-Roa, Tania; McMahon, Katie; Prasad, Gautam; Kelly, Sinéad; Faskowitz, Joshua; de Zubicaray, Greig; Iglesias, Juan E.; van Erp, Theo G.M.; Frodl, Thomas; Martin, Nicholas G.; Wright, Margaret J.; Jahanshad, Neda; Schmaal, Lianne; Sämann, Philipp G.; Thompson, Paul M.
2016-01-01
The human hippocampal formation can be divided into a set of cytoarchitecturally and functionally distinct subregions, involved in different aspects of memory formation. Neuroanatomical disruptions within these subregions are associated with several debilitating brain disorders including Alzheimer’s disease, major depression, schizophrenia, and bipolar disorder. Multi-center brain imaging consortia, such as the Enhancing Neuro Imaging Genetics through Meta-Analysis (ENIGMA) consortium, are interested in studying disease effects on these subregions, and in the genetic factors that affect them. For large-scale studies, automated extraction and subsequent genomic association studies of these hippocampal subregion measures may provide additional insight. Here, we evaluated the test–retest reliability and transplatform reliability (1.5 T versus 3 T) of the subregion segmentation module in the FreeSurfer software package using three independent cohorts of healthy adults, one young (Queensland Twins Imaging Study, N = 39), another elderly (Alzheimer’s Disease Neuroimaging Initiative, ADNI-2, N = 163) and another mixed cohort of healthy and depressed participants (Max Planck Institute, MPIP, N = 598). We also investigated agreement between the most recent version of this algorithm (v6.0) and an older version (v5.3), again using the ADNI-2 and MPIP cohorts in addition to a sample from the Netherlands Study for Depression and Anxiety (NESDA) (N = 221). Finally, we estimated the heritability (h2) of the segmented subregion volumes using the full sample of young, healthy QTIM twins (N = 728). Test–retest reliability was high for all twelve subregions in the 3 T ADNI-2 sample (intraclass correlation coefficient (ICC) = 0.70–0.97) and moderate-to-high in the 4 T QTIM sample (ICC = 0.5–0.89). 
Transplatform reliability was strong for eleven of the twelve subregions (ICC = 0.66–0.96); however, the hippocampal fissure was not consistently reconstructed across 1.5 T and 3 T field strengths (ICC = 0.47–0.57). Between-version agreement was moderate for the hippocampal tail, subiculum and presubiculum (ICC = 0.78–0.84; Dice Similarity Coefficient (DSC) = 0.55–0.70), and poor for all other subregions (ICC = 0.34–0.81; DSC = 0.28–0.51). All hippocampal subregion volumes were highly heritable (h2 = 0.67–0.91). Our findings indicate that eleven of the twelve human hippocampal subregions segmented using FreeSurfer version 6.0 may serve as reliable and informative quantitative phenotypes for future multi-site imaging genetics initiatives such as those of the ENIGMA consortium. PMID:26747746
Callan, Richard S; Cooper, Jeril R; Young, Nancy B; Mollica, Anthony G; Furness, Alan R; Looney, Stephen W
2015-06-01
The problems associated with intra- and interexaminer reliability when assessing preclinical performance continue to hinder dental educators' ability to provide accurate and meaningful feedback to students. Many studies have been conducted to evaluate the validity of utilizing various technologies to assist educators in achieving that goal. The purpose of this study was to compare two different versions of E4D Compare software to determine if either could be expected to deliver consistent and reliable comparative results, independent of the individual utilizing the technology. Five faculty members obtained E4D digital images of students' attempts (sample model) at ideal gold crown preparations for tooth #30 performed on typodont teeth. These images were compared to an ideal (master model) preparation utilizing two versions of E4D Compare software. The percent correlations between and within these faculty members were recorded and averaged. The intraclass correlation coefficient was used to measure both inter- and intrarater agreement among the examiners. The study found that using the older version of E4D Compare did not result in acceptable intra- or interrater agreement among the examiners. However, the newer version of E4D Compare, when combined with the Nevo scanner, resulted in a remarkable degree of agreement both between and within the examiners. These results suggest that consistent and reliable results can be expected when utilizing this technology under the protocol described in this study.
Yanovich, Polina; Isenhower, Robert W.; Sage, Jacob; Torres, Elizabeth B.
2013-01-01
Background: In Parkinson’s disease (PD), motor-related problems often overshadow latent non-motor deficits, as it is difficult to dissociate one from the other with commonly used observational inventories. Here we ask whether the variability patterns of hand speed and acceleration reveal deficits in spatial-orientation-related decisions as patients perform a familiar reach-to-grasp task. To this end we use spatial-orientation priming, which normally facilitates motor-program selection, and ask whether in PD such priming helps or hinders performance. Methods: To dissociate spatial-orientation- and motor-related deficits, participants performed two versions of the task. The biomechanical version (DEFAULT) required the same postural and hand paths as the orientation-priming version (primed-UP); any differences in the patients here could not be due to motor issues, as the tasks were biomechanically identical. The other priming version (primed-DOWN), however, required additional spatial and postural processing. In all three cases we assessed both the forward segment, deliberately aimed towards the spatial target, and the retracting segment, which spontaneously brings the hand to rest without an instructed goal. Results and Conclusions: We found that the forward and retracting segments belonged to two different statistical classes according to the fluctuations of speed and acceleration maxima. Further inspection revealed conservation of the forward (voluntary) control of speed, but in PD a discontinuity of this control emerged during the uninstructed retractions that was absent in normal controls (NC). Two PD groups self-emerged: one in which priming always affected the retractions, and another in which only the more challenging primed-DOWN condition was affected. These PD groups self-formed according to the speed variability patterns, which changed systematically along a gradient that depended on the priming, thus dissociating motor from spatial-orientation issues. 
Priming did not facilitate the motor task in PD but it did reveal a breakdown in the spatial-orientation decision that was independent of the motor-postural path. PMID:23843963
Celik, Onur; Eskiizmir, Gorkem; Pabuscu, Yuksel; Ulkumen, Burak; Toker, Gokce Tanyeri
The exact etiology of Bell's palsy still remains obscure; the only authenticated finding is inflammation and edema of the facial nerve leading to entrapment inside the facial canal. Our aim was to identify whether there is any relationship between the grade of Bell's palsy and the diameter of the facial canal, and also to study any possible anatomic predisposition of the facial canal for Bell's palsy, including parts that have not been studied before. Medical records and temporal computed tomography scans of 34 patients with Bell's palsy were utilized in this retrospective clinical study. The diameters of both facial canals (affected and unaffected) of each patient were measured at the labyrinthine segment, geniculate ganglion, tympanic segment, second genu, mastoid segment and stylomastoid foramen. The House-Brackmann (HB) grade of each patient at presentation and 3 months after treatment was evaluated from the medical records. The paired samples t-test and Wilcoxon signed-rank test were used to compare widths between the affected and unaffected sides. The Wilcoxon signed-rank test was also used to evaluate the relationship between the diameter of the facial canal and the grade of Bell's palsy. Statistical significance was set at p<0.05 (IBM SPSS Statistics for Windows, Version 21.0; Armonk, NY: IBM Corp). Thirty-four patients (16 females, 18 males; mean age ± standard deviation, 40.3±21.3 years) with Bell's palsy were included in the study. According to the HB facial nerve grading system, 8 patients were grade V, 6 were grade IV, 11 were grade III, 8 were grade II and 1 patient was grade I. The mean width at the labyrinthine segment of the facial canal in the affected temporal bone was significantly smaller than the equivalent in the unaffected temporal bone (p=0.00). 
There was no significant difference between the affected and unaffected temporal bones at the geniculate ganglion (p=0.87), tympanic segment (p=0.66), second genu (p=0.62), mastoid segment (p=0.67) or stylomastoid foramen (p=0.16). We did not find any relationship between the HB grade and the facial canal diameter at the level of the labyrinthine segment (p=0.41), tympanic segment (p=0.12), mastoid segment (p=0.14), geniculate ganglion (p=0.13) or stylomastoid foramen (p=0.44), while we found a significant relationship at the level of the second genu (p=0.02). We found the diameter of the labyrinthine segment of the facial canal to be an anatomic risk factor for Bell's palsy, and a significant relationship between the HB grade and facial canal diameter at the level of the second genu. Future studies (combined MRI-CT or 3D modeling) are needed to confirm this possible relationship, especially at the second genu. It may then become possible to selectively decompress particular segments in high-grade Bell's palsy patients. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Segmented media and medium damping in microwave assisted magnetic recording
NASA Astrophysics Data System (ADS)
Bai, Xiaoyu; Zhu, Jian-Gang
2018-05-01
In this paper, we present a methodology of segmented media stack design for microwave assisted magnetic recording. Through micro-magnetic modeling, it is demonstrated that an optimized media segmentation is able to yield high signal-to-noise ratio even with limited ac field power. With proper segmentation, the ac field power could be utilized more efficiently and this can alleviate the requirement for medium damping which has been previously considered a critical limitation. The micro-magnetic modeling also shows that with segmentation optimization, recording signal-to-noise ratio can have very little dependence on damping for different recording linear densities.
Monitoring the evolutionary aspect of the Gene Ontology to enhance predictability and usability.
Park, Jong C; Kim, Tak-eun; Park, Jinah
2008-04-11
Much effort is currently being made to develop the Gene Ontology (GO). Due to the dynamic nature of the information it addresses, GO undergoes constant updates, whose results are released at regular intervals as separate versions. Although there are a large number of computational tools to aid the development of GO, they operate on a particular version of GO, making it difficult for GO curators to anticipate the full impact of particular changes along the time axis on a larger scale. We present a method for tapping into this evolutionary aspect of GO by making it possible to keep track of important temporal changes to any of the terms and relations of GO, and consequently to recognize associated trends. We have developed visualization methods for viewing the changes between two different versions of GO by constructing a colour-coded layered graph. The graph shows both versions of GO, highlighting those GO terms that are added, removed and modified between the two versions. Focusing on a specific GO term or terms of interest over a period, we demonstrate the utility of our system, which can be used to form useful hypotheses about the cause of the evolution and to provide new insights into more complex changes. GO undergoes fast evolutionary changes. A snapshot of GO, as presented by each version of GO alone, overlooks such evolutionary aspects and consequently limits the utility of GO. The method, which highlights the differences between consecutive or arbitrary versions of an evolving ontology with colour-coding, enhances the utility of GO for users as well as for developers. To the best of our knowledge, this is the first proposal to visualize the evolutionary aspect of GO.
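The core of the version comparison — classifying terms as added, removed, or modified between two releases — can be sketched with plain dictionaries. The term IDs and definitions below are toy stand-ins, not real GO records, and a real diff would also compare relations between terms:

```python
def diff_ontology(old, new):
    """Classify ontology terms between two version snapshots, each
    represented as a {term_id: definition} dict (a simplification of
    full GO records)."""
    old_ids, new_ids = set(old), set(new)
    return {
        "added":    sorted(new_ids - old_ids),
        "removed":  sorted(old_ids - new_ids),
        "modified": sorted(t for t in old_ids & new_ids if old[t] != new[t]),
    }

# Two toy "versions" of an ontology fragment.
v1 = {"GO:0001": "cell growth", "GO:0002": "apoptosis", "GO:0003": "transport"}
v2 = {"GO:0001": "cell growth", "GO:0002": "programmed cell death", "GO:0004": "binding"}
delta = diff_ontology(v1, v2)
```

The three buckets of `delta` correspond directly to the three colour codes in the layered graph described above.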
2017-01-01
Drosophila segmentation is a well-established paradigm for developmental pattern formation. However, the later stages of segment patterning, regulated by the “pair-rule” genes, are still not well understood at the system level. Building on established genetic interactions, I construct a logical model of the Drosophila pair-rule system that takes into account the demonstrated stage-specific architecture of the pair-rule gene network. Simulation of this model can accurately recapitulate the observed spatiotemporal expression of the pair-rule genes, but only when the system is provided with dynamic “gap” inputs. This result suggests that dynamic shifts of pair-rule stripes are essential for segment patterning in the trunk and provides a functional role for observed posterior-to-anterior gap domain shifts that occur during cellularisation. The model also suggests revised patterning mechanisms for the parasegment boundaries and explains the aetiology of the even-skipped null mutant phenotype. Strikingly, a slightly modified version of the model is able to pattern segments in either simultaneous or sequential modes, depending only on initial conditions. This suggests that fundamentally similar mechanisms may underlie segmentation in short-germ and long-germ arthropods. PMID:28953896
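A logical model of the kind described above can be simulated with synchronous Boolean updates: each gene's next state is a logical function of the current states. The three-gene network below is purely illustrative and is NOT the paper's pair-rule network; the rules are invented to show the simulation mechanics only.

```python
def step(state, rules):
    """One synchronous update of a Boolean (logical) gene network:
    every gene re-evaluates its rule against the *current* state."""
    return {gene: rule(state) for gene, rule in rules.items()}

# Invented toy regulatory logic (not from the paper):
rules = {
    "A": lambda s: not s["B"],              # A repressed by B
    "B": lambda s: s["A"] and not s["C"],   # B activated by A, repressed by C
    "C": lambda s: s["B"],                  # C activated by B
}

state = {"A": True, "B": False, "C": False}
traj = [state]
for _ in range(4):
    state = step(state, rules)
    traj.append(state)
```

Running the same update rules from different initial conditions, as the abstract notes for simultaneous versus sequential patterning modes, simply means seeding `state` differently.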
Battery energy storage market feasibility study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kraft, S.; Akhil, A.
1997-07-01
Under the sponsorship of the Department of Energy's Office of Utility Technologies, the Energy Storage Systems Analysis and Development Department at Sandia National Laboratories (SNL) contracted Frost and Sullivan to conduct a market feasibility study of energy storage systems. The study was designed specifically to quantify the energy storage market for utility applications and was based on the SNL Opportunities Analysis performed earlier. Many of the groups surveyed, which included electricity providers, battery energy storage vendors, regulators, consultants, and technology advocates, viewed energy storage as an important enabling technology for increased use of renewable energy and as a means to solve power quality and asset utilization issues. There are two versions of the document available: an expanded version (approximately 200 pages, SAND97-1275/2) and a short version (approximately 25 pages, SAND97-1275/1).
SRB Processing Facilities Media Event
2016-03-01
Members of the news media watch as a crane is used to move one of two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket to a test stand in the Rotation, Processing and Surge Facility at NASA’s Kennedy Space Center in Florida. Inside the RPSF, the Ground Systems Development and Operations Program and Jacobs Engineering, on the Test and Operations Support Contract, will prepare the booster segments, which are inert, for a series of lifts, moves and stacking operations to prepare for Exploration Mission-1, deep-space missions and the journey to Mars.
Labor Market Structure and Salary Determination among Professional Basketball Players.
ERIC Educational Resources Information Center
Wallace, Michael
1988-01-01
The author investigates the labor market structure and determinants of salaries for professional basketball players. An expanded version of the resource perspective is used. A three-tiered model of labor market segmentation is revealed for professional basketball players, but other variables also are important in salary determination. (Author/CH)
Ensemble Semi-supervised Framework for Brain Magnetic Resonance Imaging Tissue Segmentation.
Azmi, Reza; Pishgoo, Boshra; Norozi, Narges; Yeganeh, Samira
2013-04-01
Brain magnetic resonance imaging (MRI) tissue segmentation is one of the most important parts of clinical diagnostic tools. Pixel classification methods have frequently been used for image segmentation, with both supervised and unsupervised approaches. Supervised segmentation methods achieve high accuracy, but they need a large amount of labeled data, which is hard, expensive, and slow to obtain; moreover, they cannot exploit unlabeled data during training. Unsupervised segmentation methods, on the other hand, require no prior knowledge but yield a low level of performance. Semi-supervised learning, which uses a few labeled data together with a large amount of unlabeled data, achieves higher accuracy with less effort. In this paper, we propose an ensemble semi-supervised framework for segmenting brain MRI tissues that uses the results of several semi-supervised classifiers simultaneously. Selecting appropriate classifiers plays a significant role in the performance of this framework. Hence, we present two semi-supervised algorithms, expectation filtering maximization and MCo_Training, which are improved versions of the semi-supervised methods expectation maximization and Co_Training and which increase segmentation accuracy. We then use these improved classifiers, together with a graph-based semi-supervised classifier, as components of the ensemble framework. Experimental results show that the segmentation performance of this approach is higher than that of both supervised methods and the individual semi-supervised classifiers.
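One simple way to combine the outputs of several semi-supervised classifiers, as an ensemble framework requires, is per-pixel majority voting. This is only a minimal sketch of the combination step; the paper's actual combination rule may differ, and the tissue labels below are toy inputs.

```python
from collections import Counter

def ensemble_vote(predictions):
    """Combine per-pixel label lists from several classifiers by
    majority vote (the simplest ensemble combination rule)."""
    combined = []
    for votes in zip(*predictions):          # one tuple of votes per pixel
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Three (hypothetical) classifiers labeling five pixels as CSF/GM/WM.
p1 = ["GM", "WM", "WM", "CSF", "GM"]
p2 = ["GM", "GM", "WM", "CSF", "WM"]
p3 = ["WM", "WM", "WM", "CSF", "GM"]
labels = ensemble_vote([p1, p2, p3])
```

Weighted voting (weighting each classifier by its validation accuracy) would be a natural refinement of this sketch.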
Elleithy, Khaled; Elleithy, Abdelrahman
2018-01-01
An eye exam can be as efficacious as a physical one in uncovering health concerns. Retinal screening can provide the very first clue to a variety of hidden health issues, including pre-diabetes and diabetes. During clinical diagnosis and prognosis, ophthalmologists rely heavily on the binary segmented version of the retinal fundus image, where the accuracy of the segmented vessels, optic disc, and abnormal lesions strongly affects diagnostic accuracy, which in turn affects the subsequent clinical treatment steps. This paper proposes an automated retinal fundus image segmentation system composed of three segmentation subsystems that follow the same core segmentation algorithm. Despite broad differences in their features and characteristics, retinal vessels, the optic disc, and exudate lesions are extracted by each subsystem without the need for texture analysis or synthesis. For the sake of compact diagnosis and complete clinical insight, the proposed system can detect these anatomical structures in one session with high accuracy, even in pathological retina images. The system uses a robust hybrid segmentation algorithm that combines adaptive fuzzy thresholding and mathematical morphology. It is validated using four benchmark datasets: DRIVE and STARE (vessels), DRISHTI-GS (optic disc), and DIARETDB1 (exudate lesions). Competitive segmentation performance is achieved, outperforming a variety of up-to-date systems and demonstrating the capacity to deal with other heterogeneous anatomical structures. PMID:29888146
Hippocampus segmentation using locally weighted prior based level set
NASA Astrophysics Data System (ADS)
Achuthan, Anusha; Rajeswari, Mandava
2015-12-01
Segmentation of the hippocampus is one of the major challenges in medical image segmentation because of its imaging characteristics: its intensity is almost identical to that of adjacent gray matter structures such as the amygdala. This intensity similarity gives the hippocampus weak or fuzzy boundaries. Given this challenge, a segmentation method that relies on image information alone may not produce accurate results. Prior information, such as shape and spatial information, therefore needs to be assimilated into the segmentation method to produce the expected segmentation. Previous studies have widely integrated prior information into segmentation methods; however, the prior information has been integrated in a global manner, which does not reflect the real scenario during clinical delineation. In this paper, prior information is instead integrated locally into a level set model. This work utilizes a mean shape model, integrated as prior information into the level set model, to provide automatic initialization for the level set evolution. The local integration of edge-based information and prior information is implemented through an edge weighting map that decides, at the voxel level, which information should be observed during the level set evolution; the map indicates which voxels carry sufficient edge information. Experiments show that the proposed local integration of prior information into a conventional edge-based level set model, the geodesic active contour, improves the average Dice coefficient by 9%.
Evaluating the Potential of Commercial GIS for Accelerator Configuration Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
T.L. Larrieu; Y.R. Roblin; K. White
2005-10-10
The Geographic Information System (GIS) is a tool used by industries needing to track information about spatially distributed assets. A water utility, for example, must know not only the precise location of each pipe and pump, but also the respective pressure rating and flow rate of each. In many ways, an accelerator such as CEBAF (Continuous Electron Beam Accelerator Facility) can be viewed as an "electron utility". Whereas the water utility uses pipes and pumps, the "electron utility" uses magnets and RF cavities. At Jefferson Lab we are exploring the possibility of implementing ESRI's ArcGIS as the framework for building an all-encompassing accelerator configuration database that integrates location, configuration, maintenance, and connectivity details of all hardware and software. The possibilities of doing so are intriguing. From the GIS, software such as the model server could always extract the most up-to-date layout information maintained by Survey & Alignment for lattice modeling. The Mechanical Engineering department could use ArcGIS tools to generate CAD drawings of machine segments from the same database. Ultimately, the greatest benefit of the GIS implementation could be to liberate operators and engineers from the limitations of the current system-by-system view of machine configuration and allow a more integrated regional approach. The commercial GIS package provides a rich set of tools for database connectivity, versioning, distributed editing, importing and exporting, and graphical analysis and querying, and therefore obviates the need for much custom development. However, formidable challenges to implementation exist, and these challenges are not only technical and manpower issues but also organizational ones. The GIS approach would crosscut organizational boundaries and require departments, which heretofore have had free rein to manage their own data, to cede some control and agree to a centralized framework.
New Open-Source Version of FLORIS Released | News | NREL
January 26, 2018. National Renewable Energy Laboratory (NREL) researchers recently released an updated open-source version of FLORIS, simplified and documented. Because of the living, open-source nature of the newly updated utility, NREL
SuBSENSE: a universal change detection method with local adaptive sensitivity.
St-Charles, Pierre-Luc; Bilodeau, Guillaume-Alexandre; Bergevin, Robert
2015-01-01
Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Besides, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which used no low-level or architecture-specific instruction, reached real-time processing speed on a midlevel desktop CPU. A complete C++ implementation based on OpenCV is available online.
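The pixel-level feedback idea above (each pixel carries its own decision threshold, which loosens where segmentation keeps flipping and tightens where the pixel is stable) can be sketched on a 1D "frame". This is a heavily simplified stand-in for the actual model, not the method itself: it keeps a single background value per pixel instead of spatiotemporal binary features, and the step sizes are invented.

```python
def update(frame, bg, R, alpha=0.05, r_step=0.1):
    """One frame of per-pixel-feedback background subtraction.
    bg[i] is pixel i's background estimate, R[i] its own threshold."""
    fg = []
    for i, v in enumerate(frame):
        is_fg = abs(v - bg[i]) > R[i]
        fg.append(is_fg)
        if is_fg:
            # Pixel keeps deviating: loosen its threshold (less sensitive).
            R[i] += r_step
        else:
            # Stable pixel: tighten threshold and blend into background.
            R[i] = max(0.5, R[i] - r_step)
            bg[i] = (1 - alpha) * bg[i] + alpha * v
    return fg

bg = [10.0, 10.0, 10.0]   # learned background intensities
R = [2.0, 2.0, 2.0]       # per-pixel thresholds
# Pixel 2 sees a persistent foreground object (intensity 30).
for _ in range(5):
    fg = update([10.0, 10.0, 30.0], bg, R)
```

After a few frames, the foreground pixel's threshold has grown while the static pixels' thresholds have shrunk, mirroring the self-adjusting sensitivity described above.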
GPU-based relative fuzzy connectedness image segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhuge Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.
2013-01-15
Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run times on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), implemented using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above-mentioned CPU-based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super size achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, respectively, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match the IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on NVIDIA GPUs. Interactive segmentation speed has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.
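Relative fuzzy connectedness can be illustrated sequentially on a tiny 2D image: the connectivity of a pixel to a seed is the best max-min path strength, and a pixel joins the object when its connectivity to the object seed beats its connectivity to the background seed. The affinity function below is an invented toy choice, and the sketch omits the paper's actual contribution, the CUDA parallelization.

```python
import heapq

def connectivity(img, seed):
    """Max-min path strength ('fuzzy connectedness') from one seed on a
    2D grid, computed with a Dijkstra-like best-first propagation.
    Affinity between neighbors decays with intensity difference."""
    rows, cols = len(img), len(img[0])
    conn = [[0.0] * cols for _ in range(rows)]
    sr, sc = seed
    conn[sr][sc] = 1.0
    heap = [(-1.0, sr, sc)]
    while heap:
        neg, r, c = heapq.heappop(heap)
        if -neg < conn[r][c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                aff = 1.0 / (1.0 + abs(img[r][c] - img[nr][nc]))
                strength = min(conn[r][c], aff)  # path strength = weakest link
                if strength > conn[nr][nc]:
                    conn[nr][nc] = strength
                    heapq.heappush(heap, (-strength, nr, nc))
    return conn

def rfc_label(img, seed_obj, seed_bg):
    """Relative FC: a pixel belongs to the object iff it is more strongly
    connected to the object seed than to the background seed."""
    co = connectivity(img, seed_obj)
    cb = connectivity(img, seed_bg)
    return [[1 if co[r][c] > cb[r][c] else 0
             for c in range(len(img[0]))] for r in range(len(img))]

# Bright 2x2 object (intensity 9) on a dark background (intensity 1).
img = [[1, 1, 1, 1],
       [1, 9, 9, 1],
       [1, 9, 9, 1],
       [1, 1, 1, 1]]
mask = rfc_label(img, seed_obj=(1, 1), seed_bg=(0, 0))
```

P-ORFC replaces this sequential propagation with massively parallel per-pixel updates on the GPU; the competition between seeds is the same.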
NASA Astrophysics Data System (ADS)
Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Shimada, Mitsuo; Iinuma, Gen
2017-03-01
In abdominal disease diagnosis and the planning of various abdominal surgeries, segmentation of abdominal blood vessels (ABVs) is an essential task, and automatic segmentation enables fast and accurate processing of ABVs. We propose a fully automatic approach for segmenting ABVs in contrast-enhanced CT images through a hybrid of 3D region growing and 4D curvature analysis. The proposed method comprises three stages. First, candidates for bone, kidneys, ABVs and heart are segmented by an auto-adapted threshold. Second, bone is automatically segmented and classified into spine, ribs and pelvis. Third, ABVs are automatically segmented in two sub-steps: (1) the kidneys and the abdominal part of the heart are segmented; (2) ABVs are segmented by a hybrid approach that integrates 3D region growing and 4D curvature analysis. Results are compared with those of two conventional methods and show that the proposed method is very promising for segmenting and classifying bone and segmenting whole ABVs, and that it may have utility in clinical use.
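The first stage, an auto-adapted threshold separating bright candidates (bone, contrast-enhanced vessels, kidneys, heart) from soft tissue, can be illustrated with an ISODATA-style iterative threshold. This is an assumed stand-in, since the abstract does not specify the exact adaptation rule, and the intensity values below are toy numbers rather than real CT data.

```python
def auto_threshold(values, tol=0.5):
    """ISODATA-style auto-adapted threshold: start at the global mean,
    then repeatedly move the threshold to the midpoint of the two class
    means until it stabilizes."""
    t = sum(values) / len(values)
    while True:
        low = [v for v in values if v <= t]
        high = [v for v in values if v > t]
        if not low or not high:
            return t
        t_new = (sum(low) / len(low) + sum(high) / len(high)) / 2
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

# Toy intensities: soft tissue around 40, contrast-enhanced structures around 300.
values = [35, 40, 45, 38, 42, 290, 300, 310, 295]
t = auto_threshold(values)  # lands between the two intensity populations
```

Voxels above `t` would then feed the later region-growing and classification stages.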
NASA Astrophysics Data System (ADS)
Abolhasani, Milad
Flowing trains of uniformly sized bubbles/droplets (i.e., segmented flows) and the associated mass transfer enhancement over their single-phase counterparts have been studied extensively during the past fifty years. Although the scaling behaviour of segmented flow formation is increasingly well understood, the predictive adjustment of the desired flow characteristics that influence the mixing and residence times remains a challenge. Currently, a time-consuming, slow and often inconsistent manual manipulation of experimental conditions is required to address this task. In my thesis, I have overcome the above-mentioned challenges and developed an experimental strategy that for the first time provides predictive control over segmented flows in a hands-off manner. A computer-controlled platform consisting of a real-time image processing module within an integral controller, a silicon-based microreactor and an automated fluid delivery technique was designed, implemented and validated. In the first part of my thesis I utilized this approach for the automated screening of physical mass transfer and solubility characteristics of carbon dioxide (CO2) in a physical solvent at a well-defined temperature and pressure and a throughput of 12 conditions per hour. Second, by applying the segmented flow approach to a recently discovered class of CO2 chemical absorbents, frustrated Lewis pairs (FLPs), I determined the thermodynamic characteristics of the CO2-FLP reaction. Finally, the segmented flow approach was employed for characterization and investigation of the CO2-governed liquid-liquid phase separation process. The second part of my thesis utilized the segmented flow platform for the preparation and shape control of high-quality colloidal nanomaterials (e.g., CdSe/CdS) via the automated control of residence times up to approximately 5 minutes. By introducing a novel oscillatory segmented flow concept, I was able to further extend the residence time limit to 24 hours. 
A case study of a slow candidate reaction, the etching of gold nanorods over up to five hours, illustrated the utility of oscillatory segmented flows in assessing the shape evolution of colloidal nanomaterials on-chip via continuous optical interrogation at a single sensing location. The developed cruise-control strategy will enable plug-and-play operation of segmented flows in applications that include flow chemistry, materials synthesis, and in-flow analysis and screening.
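The hands-off control idea described above can be sketched as a plain integral feedback loop: an image-derived measurement is compared with a setpoint, and the accumulated error drives the pump command. The plant model, gain, and names below are invented for illustration and are not from the thesis.

```python
def run_integral_controller(setpoint, plant, ki=0.2, steps=200, dt=1.0):
    """Drive the plant output toward the setpoint using pure integral action."""
    command, integral = 0.0, 0.0
    measurement = plant(command)
    for _ in range(steps):
        error = setpoint - measurement      # image-derived error signal
        integral += error * dt
        command = ki * integral             # integral-only pump command
        measurement = plant(command)
    return measurement

# Hypothetical linear response of plug length to pump command:
plant = lambda u: 0.5 * u + 1.0
final = run_integral_controller(setpoint=4.0, plant=plant)  # converges near 4.0
```

With this toy linear plant the error shrinks by a constant factor per step, so the loop settles on the setpoint without manual tuning of the flow.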
NASA Astrophysics Data System (ADS)
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator and demonstrated quantitative birefringence imaging [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator that accounts for the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed before the measurement by a Monte Carlo (MC) simulation based on the mathematical model of JM-OCT. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented.
The comparisons are performed both by numerical simulation and by in vivo measurements of the anterior and posterior eye segments as well as skin. The new estimator shows superior performance and clearer image contrast.
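The estimator described above selects, for a given measurement, the "true" retardation value that maximizes a pre-tabulated likelihood. The following is a minimal sketch of that MAP selection step; the toy Gaussian-shaped likelihood stands in for the Monte-Carlo-tabulated PDF and is not the paper's model.

```python
import math

def map_estimate(measured, candidates, likelihood, prior=None):
    """Return the candidate true value maximizing likelihood (times prior)."""
    best, best_score = None, float("-inf")
    for true_val in candidates:
        score = likelihood(measured, true_val)
        if prior is not None:
            score *= prior(true_val)
        if score > best_score:
            best, best_score = true_val, score
    return best

def toy_likelihood(measured, true_val):
    # Gaussian-shaped stand-in for the MC-tabulated PDF P(measured | true)
    return math.exp(-(measured - true_val) ** 2 / 0.02)

candidates = [i / 100 for i in range(101)]   # local retardation grid
estimate = map_estimate(0.37, candidates, toy_likelihood)  # → 0.37
```

In the paper's setting the likelihood would additionally depend on the measured SNR, with one tabulated PDF per (retardation, SNR) cell.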
Understanding profitability: Why some customers are hot and others are not
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sioshansi, F.P.
Gone are the days when utilities would boast how many new customers were being added to their system annually, regardless of whether they were in fact profitable to serve or not, as if bigger was always better. In a not too distant future, and with the liberalization of the business environment, some utilities may no longer wish to serve certain customers on their systems, while at the same time aggressively wooing other customers. With the anticipated arrival of competition and erosion of utility franchise service areas, the electric power industry will gradually evolve into a mode where customers will be segmented into finer groups and evaluated based on their expected profit margins, theoretically the difference between the revenues expected from them and the cost of serving them. Understanding this basic concept, and mastery of the art of arriving at the correct profit margin for each market segment, will be essential to overall business profitability and survival in the future. In practice, however, many utilities are ill-prepared to accomplish such fundamental analyses correctly and consistently because they do not have the correct analytical framework, the right information, or the right tools to perform the analysis. This paper will outline the fundamentals of market segmentation and evaluating customer profitability. It will also illustrate how to balance the cost of serving a customer with the revenues derived to produce a "reasonable" profit margin in each market segment. EPRI has developed a software tool specifically designed to assist utility analysts perform this type of work. Other ongoing research in the area of profitability analysis is also described.
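The profit-margin arithmetic the paper builds on, revenue expected from a segment minus the cost of serving it, can be made concrete with a toy calculation; the segment names and dollar figures below are invented.

```python
# Illustrative per-segment profit margins (figures are made up):
segments = {
    "residential": {"revenue": 120.0, "cost_to_serve": 95.0},
    "commercial":  {"revenue": 300.0, "cost_to_serve": 240.0},
    "industrial":  {"revenue": 500.0, "cost_to_serve": 510.0},
}

# Margin = expected revenue - cost of serving the segment
margins = {name: d["revenue"] - d["cost_to_serve"] for name, d in segments.items()}
unprofitable = [name for name, m in margins.items() if m < 0]
print(unprofitable)  # → ['industrial']
```

Even this toy version shows the paper's point: once segments are evaluated individually, some customers a utility once boasted about turn out to cost more to serve than they bring in.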
Battery energy storage market feasibility study -- Expanded report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kraft, S.; Akhil, A.
1997-09-01
Under the sponsorship of the US Department of Energy's Office of Utility Technologies, the Energy Storage Systems Analysis and Development Department at Sandia National Laboratories (SNL) contracted Frost and Sullivan to conduct a market feasibility study of energy storage systems. The study was designed specifically to quantify the battery energy storage market for utility applications. This study was based on the SNL Opportunities Analysis performed earlier. Many of the groups surveyed, which included electricity providers, battery energy storage vendors, regulators, consultants, and technology advocates, viewed battery storage as an important technology to enable increased use of renewable energy and as a means to solve power quality and asset utilization issues. There are two versions of the document available, an expanded version (approximately 200 pages, SAND97-1275/2) and a short version (approximately 25 pages, SAND97-1275/1).
Lim, Ik Soo; Leek, E Charles
2012-07-01
Previous empirical studies have shown that information along visual contours is concentrated in regions of high curvature magnitude and that, for closed contours, segments of negative curvature (i.e., concave segments) carry greater perceptual relevance than corresponding regions of positive curvature (i.e., convex segments). Recently, Feldman and Singh (2005, Psychological Review, 112, 243-252) proposed a mathematical derivation to yield information content as a function of curvature along a contour. Here, we highlight several fundamental errors in their derivation and in its associated implementation, which are problematic in both the mathematical and psychological senses. Instead, we propose an alternative mathematical formulation for the information measure of contour curvature that addresses these issues. Additionally, unlike previous work, we extend this approach to 3-dimensional (3D) shape by providing a formal measure of information content for surface curvature, and we outline a modified version of the minima rule relating to part segmentation using curvature in 3D shape. Copyright 2012 APA, all rights reserved.
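The common core of both formulations is surprisal: the information carried at a contour point is the negative log-probability of its turning angle under a prior peaked at zero curvature. The sketch below uses a toy cosine-shaped prior purely for illustration, not the corrected measure the paper derives.

```python
import math

def surprisal(turning_angle, density):
    """Information (in nats) of a turning angle under the given prior."""
    return -math.log(density(turning_angle))

def toy_density(angle):
    # Toy prior on [-pi, pi], peaked at zero curvature, integrating to 1
    return (1 + math.cos(angle)) / (2 * math.pi)

# Sharper turns carry more information under any prior peaked at zero:
low  = surprisal(0.1, toy_density)
high = surprisal(1.0, toy_density)
```

Whatever the exact prior, the qualitative prediction that high-magnitude curvature regions are the most informative follows directly from this surprisal form.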
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The model is designed to enable decision makers to compare the economics of geothermal projects with the economics of alternative energy systems at an early stage in the decision process. The geothermal engineering and economic feasibility computer model (GEEF) is written in FORTRAN IV language and can be run on a mainframe or a mini-computer system. An abbreviated version of the model is being developed for usage in conjunction with a programmable desk calculator. The GEEF model has two main segments, namely (i) the engineering design/cost segment and (ii) the economic analysis segment. In the engineering segment, the model determines the numbers of production and injection wells, heat exchanger design, operating parameters for the system, requirement of supplementary system (to augment the working fluid temperature if the resource temperature is not sufficiently high), and the fluid flow rates. The model can handle single stage systems as well as two stage cascaded systems in which the second stage may involve a space heating application after a process heat application in the first stage.
NASA Astrophysics Data System (ADS)
Glass, John O.; Reddick, Wilburn E.; Reeves, Cara; Pui, Ching-Hon
2004-05-01
Reliably quantifying therapy-induced leukoencephalopathy in children treated for cancer is a challenging task due to its varying MR properties and similarity to normal tissues and imaging artifacts. T1, T2, PD, and FLAIR images were analyzed for a subset of 15 children from an institutional protocol for the treatment of acute lymphoblastic leukemia. Three different analysis techniques were compared to examine improvements in the segmentation accuracy of leukoencephalopathy versus manual tracings by two expert observers. The first technique utilized no a priori information, relying on a white matter mask based on the segmentation of the first serial examination of each patient; MR images were then segmented with a Kohonen Self-Organizing Map. The other two techniques combine a priori maps from the ICBM atlas, spatially normalized to each patient and resliced using SPM99 software. The a priori maps were included as input, and a gradient magnitude threshold calculated on the FLAIR images was also utilized. The second technique used a 2-dimensional threshold, while the third algorithm utilized a 3-dimensional threshold. Kappa values comparing the three techniques to each observer showed improvements with each addition to the original algorithm (Observer 1: 0.651, 0.653, 0.744; Observer 2: 0.603, 0.615, 0.699).
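Agreement with the expert tracings is reported above as kappa. For reference, a minimal Cohen's kappa for two label sequences might look like the following; the labels are toy data, not the study's.

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length label sequences."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n                  # observed agreement
    labels = set(a) | set(b)
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Toy lesion (1) / non-lesion (0) labels for a method vs. an observer:
auto     = [1, 1, 0, 0]
observer = [1, 0, 0, 0]
kappa = cohens_kappa(auto, observer)  # → 0.5
```

Unlike raw percent agreement, kappa discounts the agreement expected by chance, which is why it is the preferred statistic for comparing segmentations against expert tracings.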
Accurate positioning of long, flexible ARMs (Articulated Robotic Manipulators)
NASA Technical Reports Server (NTRS)
Malachowski, Michael J.
1988-01-01
An articulated robotic manipulator (ARM) system is being designed for space applications. Work being done on a concept utilizing an infinitely stiff laser beam for position reference is summarized. The laser beam is projected along the segments of the ARM, and the position is sensed by the beam rider modules (BRM) mounted on the distal ends of the segments. The BRM concept is the heart of the system. It utilizes a combination of lateral displacements and rotational and distance measurement sensors. These determine the relative position of the two ends of the segments with respect to each other in six degrees of freedom. The BRM measurement devices contain microprocessor controlled data acquisition and active positioning components. An indirect adaptive controller is used to accurately control the position of the ARM.
Dynamics of uniaxially oriented elastomers using dielectric spectroscopy
NASA Astrophysics Data System (ADS)
Lee, Hyungki; Fragiadakis, Daniel; Martin, Darren; Runt, James
2009-03-01
We summarize our initial dielectric spectroscopy investigation of the dynamics of oriented segmented polyurethanes and crosslinked polyisoprene elastomers. A specially designed uniaxial stretching rig is used to control the draw ratio, and the electric field is applied normal to the draw direction. For the segmented PUs, we observe a dramatic reduction in the relaxation strength of the soft-phase segmental process with increasing extension ratio, accompanied by a modest decrease in relaxation frequency. Crosslinking of the polyisoprene was accomplished with dicumyl peroxide, and the dynamics of the uncrosslinked and crosslinked versions are investigated in the undrawn state and at different extension ratios. Complementary analysis of the crosslinked PI is conducted with wide-angle X-ray diffraction to examine possible strain-induced crystallization, DSC, and swelling experiments. Quantitative analysis of relaxation strengths and shapes as a function of draw ratio will be discussed.
Reconceptualizing Social Work Behaviors from a Human Rights Perspective
ERIC Educational Resources Information Center
Steen, Julie A.
2018-01-01
Although the human rights philosophy has relevance for many segments of the social work curriculum, the latest version of accreditation standards only includes a few behaviors specific to human rights. This deficit can be remedied by incorporating innovations found in the social work literature, which provides a wealth of material for…
MILE Curriculum [and Nine CD-ROM Lessons].
ERIC Educational Resources Information Center
Reiman, John
This curriculum on money management skills for deaf adolescent and young adult students is presented on nine video CD-ROMs as well as in a print version. The curriculum was developed following a survey of the needs of school and rehabilitation programs. It was also piloted and subsequently revised. Each teaching segment is presented in sign…
Automated tissue segmentation of MR brain images in the presence of white matter lesions.
Valverde, Sergi; Oliver, Arnau; Roura, Eloy; González-Villà, Sandra; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Rovira, Àlex; Lladó, Xavier
2017-01-01
Over the last few years, increasing interest in brain tissue volume measurements in clinical settings has led to the development of a wide number of automated tissue segmentation methods. However, white matter (WM) lesions are known to reduce the performance of these methods, which consequently require manual annotation of the lesions and refilling them before segmentation, a tedious and time-consuming process. Here, we propose a new, fully automated T1-w/FLAIR tissue segmentation approach designed to deal with images in the presence of WM lesions. This approach integrates a robust partial volume tissue segmentation with WM outlier rejection and filling, combining intensity with probabilistic and morphological prior maps. We evaluate the performance of this method on the MRBrainS13 tissue segmentation challenge database, which contains images with vascular WM lesions, and also on a set of multiple sclerosis (MS) patient images. On both databases, we compare the performance of our method with other state-of-the-art techniques. On the MRBrainS13 data, the presented approach was, at the time of submission, the best-ranked unsupervised intensity model method of the challenge (7th position overall) and clearly outperformed other unsupervised pipelines such as FAST and SPM12. On MS data, the differences in tissue segmentation between images segmented with our method and the same images where manual expert annotations were used to refill lesions on T1-w images before segmentation were lower than or similar to those of the best state-of-the-art pipeline incorporating automated lesion segmentation and filling. Our results show that the proposed pipeline achieves very competitive results on both vascular and MS lesions. A public version of this approach is available for download by the neuro-imaging community. Copyright © 2016 Elsevier B.V. All rights reserved.
Risk Assessment Update: Russian Segment
NASA Technical Reports Server (NTRS)
Christiansen, Eric; Lear, Dana; Hyde, James; Bjorkman, Michael; Hoffman, Kevin
2012-01-01
BUMPER-II version 1.95j source code was provided to RSC-E and Khrunichev at the January 2012 MMOD TIM in Moscow. The MEMCxP and ORDEM 3.0 environments are implemented as external data files. NASA provided a sample ORDEM 3.0 ".key" & ".daf" environment file set for demonstrating and benchmarking the BUMPER-II v1.95j installation at the Jan-12 TIM. ORDEM 3.0 has been completed and is currently in beta testing. NASA will provide a preliminary set of ORDEM 3.0 ".key" & ".daf" environment files for the years 2012 through 2028. Bumper output files produced using the new ORDEM 3.0 data files are intended for internal use only, not for requirements verification; output files will contain the words "ORDEM FILE DESCRIPTION = PRELIMINARY VERSION: not for production". The projectile density term in many BUMPER-II ballistic limit equations will need to be updated. Cube demo scripts and output files delivered at the Jan-12 TIM have been updated for the new ORDEM 3.0 data files. Risk assessment results based on ORDEM 3.0 and MEM will be presented for the Russian Segment (RS) of ISS.
Fotopoulos, Christos; Krystallis, Athanasios; Vassallo, Marco; Pagiaslis, Anastasios
2009-02-01
Recognising the need for a more statistically robust instrument to investigate general food selection determinants, this research validates and confirms the factorial design of the Food Choice Questionnaire (FCQ), develops ad hoc a more robust FCQ version, and tests its ability to discriminate between consumer segments in terms of the importance they assign to the FCQ motivational factors. The original FCQ appears to represent a comprehensive and reliable research instrument; however, the empirical data do not support the robustness of its 9-factor design. On the other hand, segmentation results at the subpopulation level based on the enhanced FCQ version convey an optimistic message about the FCQ's ability to predict food selection behaviour. The paper concludes that some of the basic components of the original FCQ can be used as a basis for a new general food motivation typology. The development of such a new instrument, with fewer, higher-abstraction FCQ-based dimensions and fewer items per dimension, is a right step forward; yet such a step should be theory-driven, and rigorous statistical testing across and within populations would be necessary.
Zhou, Yongxin; Bai, Jing
2007-01-01
A framework that combines atlas registration, fuzzy connectedness (FC) segmentation, and parametric bias field correction (PABIC) is proposed for the automatic segmentation of brain magnetic resonance imaging (MRI). First, the atlas is registered onto the MRI to initialize the subsequent FC segmentation, and original techniques are proposed to estimate the necessary initial parameters of the FC segmentation. The result of the FC segmentation is then used to initialize the following PABIC algorithm. Finally, we re-apply the FC technique on the PABIC-corrected MRI to obtain the final segmentation. Thus, we avoid expert human intervention and provide a fully automatic method for brain MRI segmentation. Experiments on both simulated and real MRI images demonstrate the validity of the method, as well as its limitations. Being fully automatic, the method is expected to find wide application in areas such as three-dimensional visualization, radiation therapy planning, and medical database construction.
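At the heart of FC segmentation, the connectedness of a voxel to a seed is the strength of the best path between them, where a path is only as strong as its weakest affinity link. A minimal sketch on a toy graph, using a max-min variant of Dijkstra's algorithm (the graph and affinity values are invented, not the paper's affinity function):

```python
import heapq

def fuzzy_connectedness(affinity, seed):
    """affinity: dict mapping node -> list of (neighbor, affinity in [0, 1]).

    Returns the best path strength from the seed to every reachable node,
    where path strength = the minimum affinity along the path.
    """
    strength = {seed: 1.0}
    heap = [(-1.0, seed)]                     # max-heap via negated strengths
    while heap:
        neg_s, node = heapq.heappop(heap)
        s = -neg_s
        if s < strength.get(node, 0.0):
            continue                          # stale heap entry
        for nbr, aff in affinity[node]:
            cand = min(s, aff)                # weakest link on this path
            if cand > strength.get(nbr, 0.0):
                strength[nbr] = cand
                heapq.heappush(heap, (-cand, nbr))
    return strength

graph = {
    "seed": [("a", 0.9), ("b", 0.2)],
    "a": [("b", 0.8), ("seed", 0.9)],
    "b": [("a", 0.8), ("seed", 0.2)],
}
strengths = fuzzy_connectedness(graph, "seed")
```

Note that node "b" ends up with strength 0.8 through "a" rather than 0.2 through its direct link, which is exactly the path-strength behavior that lets FC segmentation bridge noisy voxels.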
Hemorrhage Detection and Segmentation in Traumatic Pelvic Injuries
Davuluri, Pavani; Wu, Jie; Tang, Yang; Cockrell, Charles H.; Ward, Kevin R.; Najarian, Kayvan; Hargraves, Rosalyn H.
2012-01-01
Automated hemorrhage detection and segmentation in traumatic pelvic injuries is vital for fast and accurate treatment decision making. Hemorrhage is the main cause of death in these patients within the first 24 hours after injury. It is very time consuming for physicians to analyze all Computed Tomography (CT) images manually, and as time is crucial in emergency medicine, manual analysis delays the decision-making process. Automated hemorrhage detection and segmentation can significantly help physicians analyze these images and make fast and accurate decisions. Hemorrhage segmentation is a crucial step in the accurate diagnosis and treatment decision-making process. This paper presents a novel rule-based hemorrhage segmentation technique that utilizes pelvic anatomical information to segment hemorrhage accurately. An evaluation measure is used to quantify the accuracy of the segmentation. The results show that the proposed method segments hemorrhage very well, and the results are promising. PMID:22919433
NASA Astrophysics Data System (ADS)
Labovitch, Andrew
This dissertation examined electric utility CEO compensation during the years 2000 through 2011 for companies owned and operated in the United States. To determine the extent to which agency theory may apply to electric utility CEO compensation, this examination segmented the industry by four types of company financial metrics: revenue, earnings, stock price, and the Dow Jones Utility Average; by five categories of CEO compensation: base salary, bonus, stock grants, all other compensation, and total compensation; and by four categories of company size as measured by revenue: large, medium, small, and the industry as a whole. Electric utility CEO compensation data were analyzed against the financial metrics to determine correlations. No type of compensation was highly correlated with any of the financial metrics for any size industry segment, indicating little support for agency theory. CEO compensation in large electric utility companies was higher than compensation in medium and smaller companies, even though the CEOs at larger companies earned less per dollar of revenue and per dollar of earnings than their counterparts at smaller companies.
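The correlation analysis described amounts to computing Pearson's r between each compensation category and each financial metric. A minimal version, with invented data standing in for the revenue and pay series:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

revenue = [1.0, 2.0, 3.0, 4.0]   # hypothetical utility revenues
pay     = [2.0, 4.0, 6.0, 8.0]   # compensation tracking revenue perfectly
r = pearson_r(revenue, pay)      # → 1.0
```

A dissertation finding of "no high correlation" corresponds to |r| staying well below 1 across all metric/compensation pairs.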
Ensemble Semi-supervised Frame-work for Brain Magnetic Resonance Imaging Tissue Segmentation
Azmi, Reza; Pishgoo, Boshra; Norozi, Narges; Yeganeh, Samira
2013-01-01
Brain magnetic resonance image (MRI) tissue segmentation is one of the most important parts of clinical diagnostic tools. Pixel classification methods have frequently been used in image segmentation with both supervised and unsupervised approaches. Supervised segmentation methods lead to high accuracy, but they need a large amount of labeled data, which is hard, expensive, and slow to obtain; moreover, they cannot use unlabeled data to train classifiers. On the other hand, unsupervised segmentation methods have no prior knowledge and lead to a low level of performance. However, semi-supervised learning, which uses a few labeled data together with a large amount of unlabeled data, achieves higher accuracy with less trouble. In this paper, we propose an ensemble semi-supervised framework for segmenting brain MRI tissues that uses the results of several semi-supervised classifiers simultaneously. Selecting appropriate classifiers has a significant role in the performance of this framework. Hence, we present two semi-supervised algorithms, expectation filtering maximization and MCo_Training, which are improved versions of the semi-supervised methods expectation maximization and Co_Training and increase segmentation accuracy. We then use these improved classifiers together with a graph-based semi-supervised classifier as components of the ensemble framework. Experimental results show that the segmentation performance of this approach is higher than that of both supervised methods and the individual semi-supervised classifiers. PMID:24098863
Bayesian inference of stress release models applied to some Italian seismogenic zones
NASA Astrophysics Data System (ADS)
Rotondi, R.; Varini, E.
2007-04-01
In this paper, we evaluate the seismic hazard of a region in southern Italy by analysing stress release models from the Bayesian viewpoint; the data are drawn from the most recent version of the parametric catalogue of Italian earthquakes. For estimation we use only the events up to 1992; we then forecast the date of the next event through a stochastic simulation method and compare the result with the shocks that actually occurred in the span 1993-2002. The original version of the stress release model, proposed by Vere-Jones in 1978, transposes Reid's elastic rebound theory into the framework of stochastic point processes. Since the 1990s, enriched versions of this model have appeared in the literature, applied to historical catalogues from China, Iran, and Japan; they envisage the identification of independent or interacting tectonic subunits constituting the region under examination. It follows that stress release models, designed for regional analyses, are evolving towards studies on fault segments, converging to some degree with models that start from an individual fault and, by considering interactions with nearby segments, are driven towards studies on the regional scale. The optimal performance of the models we consider depends on a set of choices, among which are the seismogenic region and possible subzones, the threshold magnitude, and the length of the time period. In this paper, we focus our attention on the influence of the subdivision of the region under examination into tectonic units; in the light of recent studies on the fault segmentation model of Italy, we propose a partition of Sannio-Matese-Ofanto-Irpinia, one of the most seismically active regions in southern Italy. The results show that the performance of the stress release models improves in terms of both fitting and forecasting when the region is split up into parts including new information about potential seismogenic sources.
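The elastic-rebound idea behind the stress release model can be summarized in a few lines: stress rises linearly between events, drops by a magnitude-dependent amount at each event, and the conditional intensity (hazard) is exponential in the current stress. The parameter values and event data below are invented for illustration, not fitted to any catalogue.

```python
import math

def stress_level(t, events, rho=1.0, x0=0.0):
    """X(t) = x0 + rho*t minus the stress released by events before t."""
    released = sum(drop for (te, drop) in events if te < t)
    return x0 + rho * t - released

def intensity(t, events, alpha=-2.0, beta=0.5):
    """Conditional intensity lambda(t) = exp(alpha + beta * X(t))."""
    return math.exp(alpha + beta * stress_level(t, events))

events = [(3.0, 2.0), (7.0, 4.0)]   # invented (time, stress-drop) pairs
# The hazard drops discontinuously right after each event and then
# rebuilds as tectonic loading accumulates again.
```

Forecasting the next event date, as in the paper, would then amount to simulating this intensity forward (e.g., by thinning) from the last catalogued shock.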
Assessment of acquired capability for suicide in clinical practice.
Rimkeviciene, Jurgita; Hawgood, Jacinta; O'Gorman, John; De Leo, Diego
2016-12-01
The Interpersonal Psychological Theory of suicide proposes that the interaction between Thwarted Belongingness, Perceived Burdensomeness, and Acquired Capability for Suicide (ACS) predicts proximal risk of death by suicide. Instruments to assess all three constructs are available. However, research on the validity of one of them, the Acquired Capability for Suicide Scale (ACSS), has been limited, especially in terms of its clinical relevance. This study aimed to explore the utility of different versions of the ACSS in clinical assessment. Three versions of the scale were investigated: the full 20-item version, a 7-item version, and a single-item version representing self-perceived capability for suicide. In a sample of patients recruited from a clinic specialising in the treatment of suicidality and in a community sample, all versions of the ACSS were found to show reasonable levels of reliability and to correlate as expected with reports of suicidal ideation, self-harm, and attempted suicide. The item assessing self-perceived acquired capability for suicide showed the highest correlations with all levels of suicidal behaviour. However, no version of the ACSS on its own showed a capacity to indicate suicide attempts in the combined sample. It is concluded that the versions of the scale have construct validity, but their clinical utility is limited. An assessment using a single item on self-perceived ACS outperforms the full and shortened versions of the ACSS in clinical settings and can be recommended, with caution, for clinicians interested in assessing this characteristic.
Segmentation of acute pyelonephritis area on kidney SPECT images using binary shape analysis
NASA Astrophysics Data System (ADS)
Wu, Chia-Hsiang; Sun, Yung-Nien; Chiu, Nan-Tsing
1999-05-01
Acute pyelonephritis is a serious disease in children that may result in irreversible renal scarring. The ability to localize the site of urinary tract infection and the extent of acute pyelonephritis has considerable clinical importance. In this paper, we address segmenting the acute pyelonephritis area from kidney SPECT images. A two-step algorithm is proposed: first, the original images are converted into binary versions by automatic thresholding; then, the acute pyelonephritis areas are located by finding convex deficiencies in the obtained binary images. This work provides important diagnostic information for physicians and improves the quality of medical care for children with acute pyelonephritis.
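Step one of the two-step algorithm, automatic thresholding, is commonly done with Otsu's method, which picks the threshold maximizing between-class variance (the abstract does not name the specific technique used). A self-contained sketch on a toy bimodal intensity list:

```python
def otsu_threshold(pixels, levels=256):
    """Automatic global threshold maximizing between-class variance (Otsu)."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total_sum = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]                     # background pixel count
        if w_bg == 0:
            continue
        w_fg = n - w_bg                     # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

pixels = [10] * 50 + [200] * 50             # toy bimodal "image"
t = otsu_threshold(pixels)
binary = [1 if p > t else 0 for p in pixels]
```

Step two would then take the convex hull of the binarized kidney region and flag its deficiencies (hull minus region) as candidate acute pyelonephritis areas.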
SRB Processing Facilities Media Event
2016-03-01
Members of the news media view the high bay inside the Rotation, Processing and Surge Facility (RPSF) at NASA’s Kennedy Space Center in Florida. Inside the RPSF, engineers and technicians with Jacobs Engineering on the Test and Operations Support Contract, explain the various test stands. In the far corner is one of two pathfinders, or test versions, of solid rocket booster segments for NASA’s Space Launch System rocket. The Ground Systems Development and Operations Program and Jacobs are preparing the booster segments, which are inert, for a series of lifts, moves and stacking operations to prepare for Exploration Mission-1, deep-space missions and the journey to Mars.
Griffith, Jennifer M; Fichter, Marlie; Fowler, Floyd J; Lewis, Carmen; Pignone, Michael P
2008-01-01
Background An important question in the development of decision aids about colon cancer (CRC) screening is whether to include an explicit discussion of the option of not being screened. We examined the effect of including or not including an explicit discussion of the option of deciding not to be screened in a CRC screening decision aid on subjective measures of decision aid content; interest in screening; and knowledge. Methods Adults ages 50–85 were assigned to view one of two versions of the decision aid. The two versions differed only in the inclusion of video segments of two men, one of whom decided against being screened. Participants completed questionnaires before and after viewing the decision aid to compare subjective measures of content, screening interest and intent, and knowledge between groups. Likert response categories (5-point) were used for subjective measures of content (e.g., clarity, balance in favor of/against screening, and overall rating) and screening interest. Knowledge was measured with a three-item index and individual questions. Higher scores indicated favorable responses for subjective measures, greater interest, and better knowledge. For the subjective balance, lower numbers were associated with the impression that the decision aid favored CRC screening. Results Fifty-seven participants viewed the "with" version, which included the two segments, and 49 viewed the "without" version. After viewing, participants found the "without" version to have better subjective clarity about the benefits of screening ("with" 3.4, "without" 4.1, p < 0.01) and greater clarity about the downsides of screening ("with" 3.2, "without" 3.6, p = 0.03). The "with" version was considered to be less strongly balanced in favor of screening ("with" 1.8, "without" 1.6, p = 0.05), but the "without" version received a better overall rating ("with" 3.5, "without" 3.8, p = 0.03). Groups did not differ in screening interest after viewing a decision aid or in knowledge.
Conclusion A decision aid with an explicit discussion of the option of deciding not to be screened appears to increase the impression that the program is not strongly in favor of screening, but it decreases the impression of clarity and results in a lower overall rating. We did not observe clinically important or statistically significant differences in interest in screening or knowledge. PMID:18321377
17 CFR 229.102 - (Item 102) Description of property.
Code of Federal Regulations, 2010 CFR
2010-04-01
... segment(s), as reported in the financial statements, that use the properties described. If any such... held. Instructions to Item 102: 1. What is required is such information as reasonably will inform investors as to the suitability, adequacy, productive capacity and extent of utilization of the facilities...
Health Lifestyles: Audience Segmentation Analysis for Public Health Interventions.
ERIC Educational Resources Information Center
Slater, Michael D.; Flora, June A.
This paper is concerned with the application of market research techniques to segment large populations into homogeneous units in order to improve the reach, utilization, and effectiveness of health programs. The paper identifies seven distinctive patterns of health attitudes, social influences, and behaviors using cluster analytic techniques in a…
Development of Morphophonemic Segments in Children's Mental Representations of Words.
ERIC Educational Resources Information Center
Jones, Noel K.
This study explores children's development of dual-level phonological processing posited by generative theory for adult language users. Evidence suggesting 6-year-olds' utilization of morphophonemic segments was obtained by asking children to imitate complex words, omit specified portions, and discuss the meaning of the resulting word-parts. The…
The Spectral Image Processing System (SIPS): Software for integrated analysis of AVIRIS data
NASA Technical Reports Server (NTRS)
Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.
1992-01-01
The Spectral Image Processing System (SIPS) is a software package developed by the Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, in response to a perceived need to provide integrated tools for analysis of imaging spectrometer data both spectrally and spatially. SIPS was specifically designed to deal with data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the High Resolution Imaging Spectrometer (HIRIS), but was tested with other datasets including the Geophysical and Environmental Research Imaging Spectrometer (GERIS), GEOSCAN images, and Landsat TM. SIPS was developed using the 'Interactive Data Language' (IDL). It takes advantage of high speed disk access and fast processors running under the UNIX operating system to provide rapid analysis of entire imaging spectrometer datasets. SIPS allows analysis of single or multiple imaging spectrometer data segments at full spatial and spectral resolution. It also allows visualization and interactive analysis of image cubes derived from quantitative analysis procedures such as absorption band characterization and spectral unmixing. SIPS consists of three modules: SIPS Utilities, SIPS_View, and SIPS Analysis. SIPS version 1.1 is described below.
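The abstract lists spectral unmixing among the quantitative analysis procedures SIPS supports. As a hedged illustration of the underlying idea (this is not SIPS code; the function name and the two-endmember setup are hypothetical), linear unmixing expresses each pixel spectrum as a least-squares mixture of known endmember spectra:

```python
def unmix_two_endmembers(pixel, e1, e2):
    """Least-squares abundances (a1, a2) such that pixel ~= a1*e1 + a2*e2,
    found by solving the 2x2 normal equations of the linear system."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    g11, g12, g22 = dot(e1, e1), dot(e1, e2), dot(e2, e2)
    b1, b2 = dot(e1, pixel), dot(e2, pixel)
    det = g11 * g22 - g12 * g12  # Gram determinant; nonzero for independent endmembers
    return ((b1 * g22 - b2 * g12) / det, (g11 * b2 - g12 * b1) / det)
```

For a pixel that is an exact 30/70 mixture of the two endmembers, the recovered abundances are 0.3 and 0.7; real systems additionally constrain abundances to be non-negative and sum to one.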
NASA Technical Reports Server (NTRS)
Gupta, K. K.; Akyuz, F. A.; Heer, E.
1972-01-01
This program, an extension of the linear equilibrium problem solver ELAS, is an updated and extended version of its earlier form (written in FORTRAN II for the IBM 7094 computer). A synchronized material property concept utilizing incremental time steps and the finite element matrix displacement approach has been adopted for the current analysis. A special option enables employment of constant time steps on the logarithmic scale, thereby reducing the computational effort resulting from accumulative material memory effects. A wide variety of structures with elastic or viscoelastic material properties can be analyzed by VISCEL. The program is written in the FORTRAN V language for the Univac 1108 computer operating under the EXEC 8 system. Dynamic storage allocation is automatically effected by the program, and the user may request up to 195K core memory in a 260K Univac 1108/EXEC 8 machine. The physical program VISCEL, consisting of about 7200 instructions, has four distinct links (segments), and the compiled program occupies a maximum of about 11700 decimal words of core storage.
On the Generation and Use of TCP Acknowledgments
NASA Technical Reports Server (NTRS)
Allman, Mark
1998-01-01
This paper presents a simulation study of various TCP acknowledgment generation and utilization techniques. We investigate the standard version of TCP and the two standard acknowledgment strategies employed by receivers: those that acknowledge each incoming segment and those that implement delayed acknowledgments. We show the delayed acknowledgment mechanism hurts TCP performance, especially during slow start. Next we examine three alternate mechanisms for generating and using acknowledgments designed to mitigate the negative impact of delayed acknowledgments. The first method is to generate delayed ACKs only when the sender is not using the slow start algorithm. The second mechanism, called byte counting, allows TCP senders to increase the amount of data being injected into the network based on the amount of data acknowledged rather than on the number of acknowledgments received. The last mechanism is a limited form of byte counting. Each of these mechanisms is evaluated in a simulated network with no competing traffic, as well as a dynamic environment with a varying amount of competing traffic. We study the costs and benefits of the alternate mechanisms when compared to the standard algorithm with delayed ACKs.
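The effect byte counting is meant to correct can be seen in a rough toy model (a hypothetical sketch, not the paper's simulator): a delayed-ACK receiver sends one ACK per two segments, so standard slow start, which grows cwnd by one segment per ACK, grows more slowly than byte counting, which credits all acknowledged data.

```python
def slow_start_rounds(target_cwnd, use_byte_counting, segs_per_ack=2):
    """Rounds (RTTs) of slow start needed to grow cwnd from 1 to
    target_cwnd (both in segments) when the receiver delays ACKs,
    sending one ACK per `segs_per_ack` segments."""
    cwnd, rounds = 1, 0
    while cwnd < target_cwnd:
        acks = max(1, cwnd // segs_per_ack)  # delayed ACKs halve the ACK stream
        if use_byte_counting:
            cwnd += cwnd        # credit all acknowledged data: cwnd doubles per RTT
        else:
            cwnd += acks        # standard: one segment of growth per ACK received
        rounds += 1
    return rounds
```

In this model, byte counting reaches a 64-segment window in 6 round trips versus 11 for per-ACK counting, illustrating why delayed ACKs hurt slow start.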
Glue detection based on teaching points constraint and tracking model of pixel convolution
NASA Astrophysics Data System (ADS)
Geng, Lei; Ma, Xiao; Xiao, Zhitao; Wang, Wen
2018-01-01
On-line glue detection based on machine vision is significant for rust protection and strengthening in car production. Shadow stripes caused by reflected light and the unevenness of the inside front cover of the car reduce the accuracy of glue detection. In this paper, we propose an effective algorithm to distinguish the edges of the glue from shadow stripes. Teaching points are utilized to calculate the slope between two adjacent points. A tracking model based on pixel convolution along the motion direction is then designed to segment several local rectangular regions, whose height is given by a distance parameter. Pixel convolution along the motion direction is used to extract the edges of the glue in each local rectangular region. A dataset with different illumination conditions and stripes of varying shape complexity, comprising 500 thousand images captured from the camera of the glue gun machine, is used to evaluate the proposed method. Experimental results demonstrate that the proposed method can detect the edges of glue accurately. The shadow stripes are distinguished and removed effectively. Our method achieves 99.9% accuracy on the image dataset.
CGP lil-gp 2.1;1.02 User's Manual
NASA Technical Reports Server (NTRS)
Janikow, Cezary Z.; DeWeese, Scott W.
1997-01-01
This document describes extensions provided to lil-gp that facilitate dealing with constraints. This document deals specifically with lil-gp 1.02, and the resulting extension is referred to as CGP lil-gp 2.1; 1.02 (the first version number refers to the extension, the second to the utilized lil-gp version). Unless explicitly needed to avoid confusion, version numbers are omitted.
In Situ 3D Segmentation of Individual Plant Leaves Using a RGB-D Camera for Agricultural Automation.
Xia, Chunlei; Wang, Longtan; Chung, Bu-Keun; Lee, Jang-Myung
2015-08-19
In this paper, we present a challenging task of 3D segmentation of individual plant leaves from occlusions in the complicated natural scene. Depth data of plant leaves is introduced to improve the robustness of plant leaf segmentation. The low cost RGB-D camera is utilized to capture depth and color image in fields. Mean shift clustering is applied to segment plant leaves in depth image. Plant leaves are extracted from the natural background by examining vegetation of the candidate segments produced by mean shift. Subsequently, individual leaves are segmented from occlusions by active contour models. Automatic initialization of the active contour models is implemented by calculating the center of divergence from the gradient vector field of depth image. The proposed segmentation scheme is tested through experiments under greenhouse conditions. The overall segmentation rate is 87.97% while segmentation rates for single and occluded leaves are 92.10% and 86.67%, respectively. Approximately half of the experimental results show segmentation rates of individual leaves higher than 90%. Nevertheless, the proposed method is able to segment individual leaves from heavy occlusions.
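The mean shift step used here to cluster depth data can be illustrated with a minimal one-dimensional sketch (hypothetical code, not the authors' implementation, which operates on full depth images with a bandwidth tuned to leaf separation): each sample is iteratively shifted to the mean of its neighbors, and samples that converge to the same mode share a cluster.

```python
def mean_shift_1d(points, bandwidth, iters=20):
    """Cluster 1-D samples (e.g. depth values) by shifting each point to the
    mean of the samples within `bandwidth`; points converging to the same
    mode receive the same cluster label."""
    modes = list(points)
    for _ in range(iters):
        new_modes = []
        for m in modes:
            neighbors = [p for p in points if abs(p - m) <= bandwidth]
            new_modes.append(sum(neighbors) / len(neighbors))
        modes = new_modes
    # group modes that converged close together into cluster labels
    labels, found = [], []
    for m in modes:
        for i, f in enumerate(found):
            if abs(m - f) <= bandwidth / 2:
                labels.append(i)
                break
        else:
            found.append(m)
            labels.append(len(found) - 1)
    return labels
```

Depth samples from two leaves at different distances fall into two clusters without specifying the cluster count in advance, which is the property that makes mean shift attractive for this task.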
Hybrid Active/Passive Jet Engine Noise Suppression System
NASA Technical Reports Server (NTRS)
Parente, C. A.; Arcas, N.; Walker, B. E.; Hersh, A. S.; Rice, E. J.
1999-01-01
A novel adaptive segmented liner concept has been developed that employs active control elements to modify the in-duct sound field to enhance the tone-suppressing performance of passive liner elements. This could potentially allow engine designs that inherently produce more tone noise but less broadband noise, or could allow passive liner designs to more optimally address high frequency broadband noise. A proof-of-concept validation program was undertaken, consisting of the development of an adaptive segmented liner that would maximize attenuation of two radial modes in a circular or annular duct. The liner consisted of a leading active segment with dual annuli of axially spaced active Helmholtz resonators, followed by an optimized passive liner and then an array of sensing microphones. Three successively complex versions of the adaptive liner were constructed and their performances tested relative to the performance of optimized uniform passive and segmented passive liners. The salient results of the tests were: The adaptive segmented liner performed well in a high flow speed model fan inlet environment, was successfully scaled to a high sound frequency and successfully attenuated three radial modes using sensor and active resonator arrays that were designed for a two mode, lower frequency environment.
NASA Technical Reports Server (NTRS)
Montgomery, Edward E., IV; Smith, W. Scott (Technical Monitor)
2002-01-01
This paper explores the history and results of the last two years' efforts to transition inductive edge sensor technology from Technology Readiness Level 2 to Technology Readiness Level 6. Both technical and programmatic challenges were overcome in the design, fabrication, test, and installation of over a thousand sensors making up the Segment Alignment Maintenance System (SAMS) for the 91-segment, 9.2-meter Hobby-Eberly Telescope (HET). The integration of these sensors with the control system will be discussed, along with the serendipitous leverage they provided for both initial alignment and operational maintenance. The experience yielded important insights into the fundamental motion mechanics of large segmented mirrors, the relative importance of the various sources of misalignment errors, and the efficient conduct of a program to mature the technology to the higher readiness levels. Unanticipated factors required the team to develop new implementation strategies for the edge sensor information, which enabled major simplifications of the segmented mirror controller design. The resulting increase in the science efficiency of HET will be shown. Finally, the on-going effort to complete the maturation of inductive edge sensors by delivering space-qualified versions for future infrared (IR) space telescopes is described.
A Scalable Framework For Segmenting Magnetic Resonance Images
Hore, Prodip; Goldgof, Dmitry B.; Gu, Yuhua; Maudsley, Andrew A.; Darkazanli, Ammar
2009-01-01
A fast, accurate and fully automatic method of segmenting magnetic resonance images of the human brain is introduced. The approach scales well allowing fast segmentations of fine resolution images. The approach is based on modifications of the soft clustering algorithm, fuzzy c-means, that enable it to scale to large data sets. Two types of modifications to create incremental versions of fuzzy c-means are discussed. They are much faster when compared to fuzzy c-means for medium to extremely large data sets because they work on successive subsets of the data. They are comparable in quality to application of fuzzy c-means to all of the data. The clustering algorithms coupled with inhomogeneity correction and smoothing are used to create a framework for automatically segmenting magnetic resonance images of the human brain. The framework is applied to a set of normal human brain volumes acquired from different magnetic resonance scanners using different head coils, acquisition parameters and field strengths. Results are compared to those from two widely used magnetic resonance image segmentation programs, Statistical Parametric Mapping and the FMRIB Software Library (FSL). The results are comparable to FSL while providing significant speed-up and better scalability to larger volumes of data. PMID:20046893
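The base algorithm being scaled here, fuzzy c-means, alternates between soft membership updates and membership-weighted center updates. The following is a minimal one-dimensional sketch (a hypothetical illustration, not the paper's incremental variant, which additionally works on successive subsets of the data):

```python
def fuzzy_cmeans(xs, c, m=2.0, iters=30):
    """Plain fuzzy c-means on 1-D data: returns cluster centers and the
    soft membership matrix u[i][k] of point k in cluster i."""
    lo, hi = min(xs), max(xs)
    centers = [lo + i * (hi - lo) / (c - 1) for i in range(c)]  # spread initial centers
    for _ in range(iters):
        # membership update: closer centers receive (softly) higher membership
        u = [[0.0] * len(xs) for _ in range(c)]
        for k, x in enumerate(xs):
            dists = [abs(x - ctr) or 1e-12 for ctr in centers]  # guard zero distance
            for i in range(c):
                u[i][k] = 1.0 / sum((dists[i] / d) ** (2 / (m - 1)) for d in dists)
        # center update: membership-weighted mean of the data
        centers = [
            sum(u[i][k] ** m * xs[k] for k in range(len(xs))) /
            sum(u[i][k] ** m for k in range(len(xs)))
            for i in range(c)
        ]
    return centers, u
```

For each point the memberships across clusters sum to one, which is the "soft" assignment that distinguishes fuzzy c-means from hard k-means; the incremental versions discussed in the paper keep these two update steps but bound how much data is in memory at once.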
Shirazinodeh, Alireza; Noubari, Hossein Ahmadi; Rabbani, Hossein; Dehnavi, Alireza Mehri
2015-01-01
Recent studies applying wavelet transforms and fractal modeling to mammograms for the detection of cancerous tissues indicate that microcalcifications and masses can be utilized for the study of the morphology and diagnosis of cancerous cases. It has been shown that fractal modeling, as applied to a given image, can clearly discern cancerous zones from noncancerous areas. For fractal modeling, the original image is first segmented into appropriate fractal boxes, followed by identifying the fractal dimension of each windowed section using a computationally efficient two-dimensional box-counting algorithm. Furthermore, using appropriate wavelet sub-bands and image reconstruction based on modified wavelet coefficients, it is shown that it is possible to arrive at enhanced features for the detection of cancerous zones. In this paper, we have attempted to benefit from the advantages of both fractals and wavelets by introducing a new algorithm, named F1W2. The original image is first segmented into appropriate fractal boxes, and the fractal dimension of each windowed section is extracted. By applying a maximum-level threshold on the matrix of fractal dimensions, the best-segmented boxes are selected. In the next step, the candidate cancerous zones are decomposed using a standard orthogonal wavelet transform with the db2 wavelet at three resolution levels; after nullifying the wavelet coefficients of the image at the first scale and the low-frequency band of the third scale, the modified reconstructed image is utilized for the detection of breast cancer regions by applying an appropriate threshold. Our simulations indicate an accuracy of 90.9% for masses and 88.99% for microcalcifications using the F1W2 method.
For classification of detected microcalcifications into benign and malignant cases, eight features are identified and utilized in a radial basis function neural network. Our simulation results indicate a classification accuracy of 92% using the F1W2 method.
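The two-dimensional box-counting estimate of fractal dimension used above can be sketched as follows (a hedged toy version on point sets, not the authors' implementation): cover the set with boxes of increasing size s, count the occupied boxes N(s), and fit the slope of log N(s) against log(1/s) by least squares.

```python
import math

def box_counting_dimension(points, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a set of (x, y) pixel coordinates
    by counting occupied boxes at several box sizes and fitting the slope
    of log N(s) versus log(1/s)."""
    logs = []
    for s in sizes:
        boxes = {(x // s, y // s) for x, y in points}  # occupied boxes at scale s
        logs.append((math.log(1.0 / s), math.log(len(boxes))))
    # least-squares slope through the (log(1/s), log N) pairs
    n = len(logs)
    mx = sum(a for a, _ in logs) / n
    my = sum(b for _, b in logs) / n
    return (sum((a - mx) * (b - my) for a, b in logs) /
            sum((a - mx) ** 2 for a, _ in logs))
```

A straight line of pixels yields a dimension near 1 and a filled square near 2; textured tumor boundaries fall in between, which is what makes the dimension a usable feature.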
Evaluation of a segment-based LANDSAT full-frame approach to crop area estimation
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Hixson, M. M.; Davis, S. M.
1981-01-01
As the registration of LANDSAT full frames enters the realm of current technology, sampling methods should be examined which utilize other than the segment data used for LACIE. The effect of separating the functions of sampling for training and sampling for area estimation was examined. The frame selected for analysis was acquired over north central Iowa on August 9, 1978. A stratification of the full frame was defined. Training data came from segments within the frame. Two classification and estimation procedures were compared: statistics developed on one segment were used to classify that segment, and pooled statistics from the segments were used to classify a systematic sample of pixels. Comparisons to USDA/ESCS estimates illustrate that the full-frame sampling approach can provide accurate and precise area estimates.
Estimation and enhancement of real-time software reliability through mutation analysis
NASA Technical Reports Server (NTRS)
Geist, Robert; Offutt, A. J.; Harris, Frederick C., Jr.
1992-01-01
A simulation-based technique for obtaining numerical estimates of the reliability of N-version, real-time software is presented. An extended stochastic Petri net is employed to represent the synchronization structure of N versions of the software, where dependencies among versions are modeled through correlated sampling of module execution times. Test results utilizing specifications for NASA's planetary lander control software indicate that mutation-based testing could hold greater potential for enhancing reliability than the desirable but perhaps unachievable goal of independence among N versions.
Impact of Noise Reduction Algorithm in Cochlear Implant Processing on Music Enjoyment.
Kohlberg, Gavriel D; Mancuso, Dean M; Griffin, Brianna M; Spitzer, Jaclyn B; Lalwani, Anil K
2016-06-01
A noise reduction algorithm (NRA) in the speech processing strategy has a positive impact on speech perception among cochlear implant (CI) listeners. We sought to evaluate the effect of NRA on music enjoyment. Prospective analysis of music enjoyment. Academic medical center. Normal-hearing (NH) adults (N = 16) and CI listeners (N = 9). Subjective rating of music excerpts. NH and CI listeners evaluated a country music piece on three enjoyment modalities: pleasantness, musicality, and naturalness. Participants listened to the original version and 20 modified, less complex versions created by including subsets of musical instruments from the original song. NH participants listened to the segments through CI simulation, and CI listeners listened to the segments with their usual speech processing strategy, with and without NRA. Decreasing the number of instruments was significantly associated with an increase in pleasantness and naturalness in both NH and CI subjects (p < 0.05). However, there was no difference in music enjoyment with or without NRA for either NH listeners with CI simulation or CI listeners across all three modalities of pleasantness, musicality, and naturalness (p > 0.05); this was true for the original and the modified music segments with one to three instruments (p > 0.05). NRA does not affect music enjoyment in CI listeners or NH individuals with CI simulation. This suggests that strategies to enhance speech processing will not necessarily have a positive impact on music enjoyment. However, reducing the complexity of music shows promise in enhancing music enjoyment and should be further explored.
Russian Earth Science Research Program on ISS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armand, N. A.; Tishchenko, Yu. G.
1999-01-22
A version of the Russian Earth Science Research Program for the Russian segment of the ISS is proposed. The favored tasks are selected which may be solved with the use of space remote sensing methods and tools and which are worthwhile to realize. For solving these tasks, specialized device sets (submodules) corresponding to the specifics of the tasks are being developed. They would be specialized modules transported to the ISS. Earth remote sensing research and ecological monitoring (high rates and large volumes of transmitted spaceborne information, comparatively stringent requirements on the period of its processing, etc.) place rather high requirements on the ground segment for receiving, processing, storing, and distributing space information in the interests of investigating the Earth's natural resources. Creation of the ground segment has required the development of an interdepartmental data receiving and processing center. Main directions of work within the framework of the ISS program are determined.
Aging and perceived event structure as a function of modality
Magliano, Joseph; Kopp, Kristopher; McNerney, M. Windy; Radvansky, Gabriel A.; Zacks, Jeffrey M.
2012-01-01
The majority of research on situation model processing in older adults has focused on narrative texts. Much of this research has shown that many important aspects of constructing a situation model for a text are preserved and may even improve with age. However, narratives need not be text-based, and little is known as to whether these findings generalize to visually based narratives. The present study assessed the impact of story modality on event segmentation, which is a basic component of event comprehension. Older and younger adults viewed picture stories or read text versions of them and segmented them into events. There was comparable alignment between the segmentation judgments and a theoretically guided analysis of shifts in situational features across modalities for both populations. These results suggest that situation models provide older adults with a stable basis for event comprehension across different modalities of experience. PMID:22182344
Sharma, Shilpa; Mehta, Puja K; Arsanjani, Reza; Sedlak, Tara; Hobel, Zachary; Shufelt, Chrisandra; Jones, Erika; Kligfield, Paul; Mortara, David; Laks, Michael; Diniz, Marcio; Bairey Merz, C Noel
2018-06-19
The utility of exercise-induced ST-segment depression for diagnosing ischemic heart disease (IHD) in women is unclear. Based on evidence that IHD pathophysiology in women involves coronary vascular dysfunction, we hypothesized that coronary vascular dysfunction contributes to exercise electrocardiography (Ex-ECG) ST-segment depression in the absence of obstructive CAD, so-called "false positive" results. We tested our hypothesis in a pilot study evaluating the relationship between peripheral vascular endothelial function and Ex-ECG. Twenty-nine asymptomatic women without cardiac risk factors underwent maximal Bruce protocol exercise treadmill testing and peripheral endothelial function assessment using peripheral arterial tonometry (Itamar EndoPAT 2000) to measure the reactive hyperemia index (RHI). The relationship between RHI and Ex-ECG ST-segment depression was evaluated using logistic regression, and differences in subgroups using two-tailed t-tests. Mean age was 54 ± 7 years, body mass index 25 ± 4 kg/m2, and RHI 2.51 ± 0.66. Three women (10%) had RHI less than 1.68, consistent with abnormal peripheral endothelial function, while 18 women (62%) met criteria for a positive Ex-ECG based on ST-segment depression in contiguous leads. Women with and without ST-segment depression had similar baseline and exercise vital signs, metabolic equivalents (METS) achieved, and RHI (all p > 0.05). RHI did not predict ST-segment depression. Our pilot study demonstrates a high prevalence of exercise-induced ST-segment depression in asymptomatic, middle-aged, overweight women. Peripheral vascular endothelial dysfunction did not predict Ex-ECG ST-segment depression. Further work is needed to investigate the utility of vascular endothelial testing and Ex-ECG for IHD diagnostic and management purposes in women. This article is protected by copyright. All rights reserved.
ERIC Educational Resources Information Center
Gerber, Ben; Smith, Everett V., Jr.; Girotti, Mariela; Pelaez, Lourdes; Lawless, Kimberly; Smolin, Louanne; Brodsky, Irwin; Eiser, Arnold
2002-01-01
Used Rasch measurement to study the psychometric properties of data obtained from a newly developed Diabetes Questionnaire designed to measure diabetes knowledge, attitudes, and self-care. Responses of 26 diabetes patients to the English version of the questionnaire and 24 patients to the Spanish version support the cross-form equivalence and…
Design and Optimization of the SPOT Primary Mirror Segment
NASA Technical Reports Server (NTRS)
Budinoff, Jason G.; Michaels, Gregory J.
2005-01-01
The 3m Spherical Primary Optical Telescope (SPOT) will utilize a single ring of 0.86111 point-to-point hexagonal mirror segments. The f/2.85 spherical mirror blanks will be fabricated by the same replication process used for mass-produced commercial telescope mirrors. Diffraction-limited phasing will require segment-to-segment radius of curvature (ROC) variation of approximately 1 micron. Low-cost, replicated segment ROC variations are estimated to be almost 1 mm, necessitating a method for segment ROC adjustment and matching. A mechanical architecture has been designed that allows segment ROC to be adjusted up to 400 microns while introducing minimal figure error, allowing segment-to-segment ROC matching. A key feature of the architecture is the unique back profile of the mirror segments. The back profile of the mirror was developed with shape optimization in MSC.Nastran using optical performance response equations written with SigFit. A candidate back profile was generated which minimized ROC-adjustment-induced surface error while meeting the constraints imposed by the fabrication method. Keywords: optimization, radius of curvature, Pyrex spherical mirror, SigFit
Modelling noise propagation using Grid Resources. Progress within GDI-Grid
NASA Astrophysics Data System (ADS)
Kiehle, Christian; Mayer, Christian; Padberg, Alexander; Stapelfeld, Hartmut
2010-05-01
GDI-Grid (English: SDI-Grid) is a research project funded by the German Ministry for Science and Education (BMBF). It aims at bridging the gaps between OGC Web Services (OWS) and Grid infrastructures and at identifying the potential of utilizing the superior storage capacity and computational power of Grid infrastructures for geospatial applications, while keeping the well-known service interfaces specified by the OGC. The project considers all major OGC web service interfaces for web mapping (Web Map Service), feature access (Web Feature Service), coverage access (Web Coverage Service), and processing (Web Processing Service). The major challenge within GDI-Grid is the harmonization of diverging standards as defined by standardization bodies for Grid computing and spatial information exchange. The project started in 2007 and will continue until June 2010. The concept for the gridification of OWS developed by lat/lon GmbH and the Department of Geography of the University of Bonn is applied to three real-world scenarios in order to check its practicability: a flood simulation, a scenario for emergency routing, and a noise propagation simulation. The latter scenario is addressed by Stapelfeldt Ingenieurgesellschaft mbH, located in Dortmund, which is adapting its LimA software to utilize Grid resources. Noise mapping of, e.g., traffic noise in urban agglomerations and along major trunk roads is a recurring demand of the EU Noise Directive. Required input data include the road network and traffic, terrain, buildings, and noise protection screens, as well as the population distribution. Noise impact levels are generally calculated on a 10 m grid and along relevant building facades. For each receiver position, sources within a typical range of 2000 m are split into small segments, depending on the local geometry. For each segment, the propagation analysis includes diffraction effects caused by all obstacles on the path of sound propagation.
This immensely computation-intensive calculation needs to be performed for a major part of the European landscape. A Linux version of the commercial LimA software for noise mapping analysis has been implemented on a test cluster within the German D-Grid computer network. Results and performance indicators will be presented. The presentation is an extension of last year's presentation "Spatial Data Infrastructures and Grid Computing: the GDI-Grid project", which described the gridification concept developed in the GDI-Grid project and provided an overview of the conceptual gaps between Grid computing and Spatial Data Infrastructures. Results from the GDI-Grid project are incorporated into the OGC-OGF (Open Grid Forum) collaboration efforts, as well as into the OGC WPS 2.0 standards working group, which is developing the next major version of the WPS specification.
Finding a good segmentation strategy for tree crown transparency estimation
Neil A. Clark; Sang-Mook Lee; Philip A. Araman
2003-01-01
Image segmentation is a general term for delineating image areas into informational categories. A wide variety of general techniques exist depending on application and the image data specifications. Specialized algorithms, utilizing components of several techniques, usually are needed to meet the rigors for a specific application. This paper considers automated color...
Market segmentation and positioning: matching creativity with fiscal responsibility.
Kiener, M E
1989-01-01
This paper describes an approach to continuing professional education (CPE) program development in nursing within a university environment that utilizes the concepts of market segmentation and positioning. Use of these strategies enables the academic CPE enterprise to move beyond traditional needs assessment practices to create more successful and better-managed CPE programs.
PTBS segmentation scheme for synthetic aperture radar
NASA Astrophysics Data System (ADS)
Friedland, Noah S.; Rothwell, Brian J.
1995-07-01
The Image Understanding Group at Martin Marietta Technologies in Denver, Colorado has developed a model-based synthetic aperture radar (SAR) automatic target recognition (ATR) system using an integrated resource architecture (IRA). IRA, an adaptive Markov random field (MRF) environment, utilizes information from image, model, and neighborhood resources to create a discrete, 2D feature-based world description (FBWD). The IRA FBWD features are peak, target, background and shadow (PTBS). These features have been shown to be very useful for target discrimination. The FBWD is used to accrue evidence over a model hypothesis set. This paper presents the PTBS segmentation process utilizing two IRA resources. The image resource (IR) provides generic (the physics of image formation) and specific (the given image input) information. The neighborhood resource (NR) provides domain knowledge of localized FBWD site behaviors. A simulated annealing optimization algorithm is used to construct a `most likely' PTBS state. Results on simulated imagery illustrate the power of this technique to correctly segment PTBS features, even when vehicle signatures are immersed in heavy background clutter. These segmentations also suppress sidelobe effects and delineate shadows.
Modeling and clustering water demand patterns from real-world smart meter data
NASA Astrophysics Data System (ADS)
Cheifetz, Nicolas; Noumir, Zineb; Samé, Allou; Sandraz, Anne-Claire; Féliers, Cédric; Heim, Véronique
2017-08-01
Nowadays, drinking water utilities need a detailed understanding of the water demand on their distribution networks in order to efficiently optimize resources, manage billing, and propose new customer services. With the emergence of smart grids based on automated meter reading (AMR), a better understanding of consumption modes is now accessible for smart cities at a finer granularity. In this context, this paper evaluates a novel methodology for identifying relevant usage profiles from the water consumption data produced by smart meters. The methodology is fully data-driven, using the consumption time series, which are seen as functions or curves observed with an hourly time step. First, a Fourier-based additive time series decomposition model is introduced to extract seasonal patterns from the time series. These patterns are intended to represent customer habits in terms of water consumption. Two functional clustering approaches are then used to classify the extracted seasonal patterns: the functional version of K-means, and the Fourier REgression Mixture (FReMix) model. The K-means approach produces a hard segmentation and K representative prototypes. The FReMix model, by contrast, is generative and also produces K profiles as well as a soft segmentation based on posterior probabilities. The proposed approach is applied to a smart grid deployed on the largest water distribution network (WDN) in France. The two clustering strategies are evaluated and compared. Finally, a realistic interpretation of the consumption habits is given for each cluster. The extensive experiments and the qualitative interpretation of the resulting clusters highlight the effectiveness of the proposed methodology.
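The Fourier-based extraction of a seasonal pattern can be sketched in simplified form (hypothetical code; the paper's additive decomposition model and the FReMix clustering are considerably richer): fold the hourly series onto one period, average the folded values, and keep only the first few harmonics of the averaged profile.

```python
import cmath
import math

def seasonal_pattern(series, period, n_harmonics=2):
    """Fold the series onto one period, average, then low-pass the average
    by keeping only the mean and the first `n_harmonics` Fourier harmonics."""
    folded = [0.0] * period
    counts = [0] * period
    for t, v in enumerate(series):
        folded[t % period] += v
        counts[t % period] += 1
    mean = [folded[i] / counts[i] for i in range(period)]
    # discrete Fourier coefficients of the one-period average
    coeffs = [sum(mean[t] * cmath.exp(-2j * math.pi * k * t / period)
                  for t in range(period)) / period
              for k in range(period)]
    # reconstruct using the mean plus the first n_harmonics harmonics
    smooth = []
    for t in range(period):
        v = coeffs[0].real
        for k in range(1, n_harmonics + 1):
            v += 2 * (coeffs[k] * cmath.exp(2j * math.pi * k * t / period)).real
        smooth.append(v)
    return smooth
```

A clustering stage would then group these per-customer patterns, e.g. with K-means for hard labels or a mixture model for soft ones, as the abstract describes.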
Hessian-based quantitative image analysis of host-pathogen confrontation assays.
Cseresnyes, Zoltan; Kraibooj, Kaswara; Figge, Marc Thilo
2018-03-01
Host-fungus interactions have gained a lot of interest in the past few decades, mainly due to an increasing number of fungal infections that are often associated with a high mortality rate in the absence of effective therapies. These interactions can be studied at the genetic level or at the functional level via imaging. Here, we introduce a new image processing method that quantifies the interaction between host cells and fungal invaders, for example, alveolar macrophages and the conidia of Aspergillus fumigatus. The new technique relies on the information content of transmitted light bright field microscopy images, utilizing the Hessian matrix eigenvalues to distinguish between unstained macrophages and the background, as well as between macrophages and fungal conidia. The performance of the new algorithm was measured by comparing the results of our method with that of an alternative approach that was based on fluorescence images from the same dataset. The comparison shows that the new algorithm performs very similarly to the fluorescence-based version. Consequently, the new algorithm is able to segment and characterize unlabeled cells, thus reducing the time and expense that would be spent on the fluorescent labeling in preparation for phagocytosis assays. By extending the proposed method to the label-free segmentation of fungal conidia, we will be able to reduce the need for fluorescence-based imaging even further. Our approach should thus help to minimize the possible side effects of fluorescence labeling on biological functions. © 2017 International Society for Advancement of Cytometry.
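The core of such a Hessian-based approach can be stated concretely: estimate second derivatives with finite differences and take the closed-form eigenvalues of the 2x2 Hessian at every pixel. Bright blob-like objects give two negative eigenvalues, while flat background gives values near zero. This is a generic sketch, not the authors' implementation:

```python
import numpy as np

def hessian_eigenvalues(img):
    """Per-pixel eigenvalues of the 2x2 Hessian of a grayscale image,
    estimated with finite differences (np.gradient applied twice)."""
    gy, gx = np.gradient(img.astype(float))
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # closed-form eigenvalues of the symmetric matrix [[gxx, gxy], [gxy, gyy]]
    tr = gxx + gyy
    det = gxx * gyy - gxy * gyx
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    return tr / 2 - disc, tr / 2 + disc   # lam1 <= lam2, per pixel
```

Thresholding the eigenvalue maps then separates structured foreground (cells, conidia) from flat, unstained background.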
Zhang, Junhua; Wang, Yuanyuan; Shi, Xinling
2009-12-01
A modified graph cut was proposed under the elliptical shape constraint to segment cervical lymph nodes on sonograms, and its effect on the measurement of the short axis to long axis ratio (S/L) was investigated by using the relative ultimate measurement accuracy (RUMA). Under the same user inputs, the proposed algorithm successfully segmented all 60 sonograms tested, while the traditional graph cut failed. The mean RUMA obtained with the developed method was comparable to that of manual segmentation. Results indicated that utilizing the elliptical shape prior could appreciably improve the graph cut for node segmentation, and that the proposed method satisfied the accuracy requirement of S/L measurement.
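The S/L measurement itself is easy to make concrete: fit an ellipse to a binary node segmentation via second-order moments and take the square root of the eigenvalue ratio of the coordinate covariance. A minimal sketch for illustration, not the paper's pipeline:

```python
import numpy as np

def axis_ratio(mask):
    """Short-to-long axis ratio (S/L) of a binary mask via a moment-based
    ellipse fit: eigenvalues of the pixel-coordinate covariance matrix
    are proportional to the squared semi-axes."""
    ys, xs = np.nonzero(mask)
    evals = np.sort(np.linalg.eigvalsh(np.cov(np.stack([xs, ys]))))
    return float(np.sqrt(evals[0] / evals[1]))
```

For a well-segmented node, this ratio is insensitive to rotation and translation of the mask, which is what makes S/L a convenient clinical measurement.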
Segmentation of breast ultrasound images based on active contours using neutrosophic theory.
Lotfollahi, Mahsa; Gity, Masoumeh; Ye, Jing Yong; Mahlooji Far, A
2018-04-01
Ultrasound imaging is an effective approach for diagnosing breast cancer, but it is highly operator-dependent. Recent advances in computer-aided diagnosis suggest that it can assist physicians in diagnosis. Definition of the region of interest before computer analysis is still needed. Since manual outlining of the tumor contour is tedious and time-consuming for a physician, developing an automatic segmentation method is important for clinical application. The present paper presents a novel method to segment breast ultrasound images. It utilizes a combination of region-based active contours and neutrosophic theory to overcome the natural properties of ultrasound images, including speckle noise and tissue-related textures. First, due to the inherent speckle noise and low contrast of these images, we utilized a non-local means filter and a fuzzy logic method for denoising and image enhancement, respectively. This paper presents an improved weighted region-scalable active contour to segment breast ultrasound images using a new feature derived from neutrosophic theory. The method has been applied to 36 breast ultrasound images. It achieves a true-positive rate, false-positive rate, and similarity of 95%, 6%, and 90%, respectively. The proposed method shows clear advantages over other conventional methods of active contour segmentation, i.e., region-scalable fitting energy and weighted region-scalable fitting energy.
Exploring Shakespeare through the Cinematic Image: Seeing "Hamlet."
ERIC Educational Resources Information Center
Felter, Douglas P.
1993-01-01
Describes an innovative approach to teaching William Shakespeare's "Hamlet" utilizing various film versions of the play. Outlines a method of showing several versions of the same scene from different film adaptations. Describes student reaction to the variations among the different films. (HB)
An improved method for pancreas segmentation using SLIC and interactive region merging
NASA Astrophysics Data System (ADS)
Zhang, Liyuan; Yang, Huamin; Shi, Weili; Miao, Yu; Li, Qingliang; He, Fei; He, Wei; Li, Yanfang; Zhang, Huimao; Mori, Kensaku; Jiang, Zhengang
2017-03-01
Considering the weak edges in pancreas segmentation, this paper proposes a new solution which integrates more features of CT images by combining SLIC superpixels and interactive region merging. In the proposed method, the Mahalanobis distance is first utilized in the SLIC method to generate better superpixel images. By extracting five texture features and one gray feature, the similarity measure between two superpixels becomes more reliable in interactive region merging. Furthermore, object edge blocks are accurately addressed by a re-segmentation merging process. Applying the proposed method to four cases of abdominal CT images, we segment pancreatic tissues to verify the feasibility and effectiveness. The experimental results show that the proposed method can raise segmentation accuracy to 92% on average. This study will boost the application process of pancreas segmentation for computer-aided diagnosis systems.
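The Mahalanobis distance that replaces the usual Euclidean term can be written down directly. Below is a generic sketch; the choice of feature covariance and its exact use inside SLIC are assumptions here, not the authors' formulation:

```python
import numpy as np

def mahalanobis(x, y, cov_inv):
    """Mahalanobis distance between feature vectors x and y, given the
    inverse covariance of the feature distribution; with cov_inv = I this
    reduces to the Euclidean distance used by standard SLIC."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ cov_inv @ d))
```

In a SLIC-style assignment step, `cov_inv` would typically be `np.linalg.inv(np.cov(F.T))` for a feature matrix `F` whose rows are pixel feature vectors, so that correlated or differently scaled features are weighted consistently.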
Yamamoto, Michiko; Doi, Hirohisa; Yamamoto, Ken; Watanabe, Kazuhiro; Sato, Tsugumichi; Suka, Machi; Nakayama, Takeo; Sugimori, Hiroki
2017-01-01
The safe use of drugs relies on providing accurate drug information to patients. In Japan, patient leaflets called Drug Guide for Patients are officially available; however, their utility has never been verified. This is the first attempt to improve the Drug Guide for Patients via user testing in Japan. To test and improve communication of drug information to minimize risk for patients via user testing of the current and revised versions of the Drug Guide for Patients, and to demonstrate that this method is effective for improving the Drug Guide for Patients in Japan. We prepared current and revised versions of the Drug Guide for Patients and performed user testing via semi-structured interviews with consumers to compare these versions for two guides, for Mercazole and Strattera. We evenly divided 54 participants into two groups with similar distributions of sex, age, and literacy level to test the differing versions of the Mercazole guide. Another group of 30 participants was divided evenly to test the versions of the Strattera guide. After completing user testing, the participants evaluated both guides in terms of amount of information, readability, usefulness of information, and layout and appearance. Participants were also asked for their opinions on the leaflets. Response rates were 100% for both Mercazole and Strattera. The revised versions of both guides were superior or equal to the current versions in terms of accessibility and understandability. The revised version of the Mercazole guide showed better ratings for readability, usefulness of information, and layout (p < 0.01) than did the current version, while that for Strattera showed superior readability and layout (p < 0.01). User testing was effective for evaluating the utility of the Drug Guide for Patients. Additionally, the revised version had superior accessibility and understandability.
SF-6D population norms for the Hong Kong Chinese general population.
Wong, Carlos K H; Mulhern, Brendan; Cheng, Garvin H L; Lam, Cindy L K
2018-05-24
To estimate population norms for the SF-6D health preference (utility) scores derived from the MOS SF-36 version 1 (SF-36v1), SF-36 version 2 (SF-36v2), and SF-12 version 2 (SF-12v2) health surveys collected from a representative adult sample in Hong Kong, and to assess differences in SF-6D scores across sociodemographic subgroups. A random telephone survey of 2410 Chinese adults was conducted. All respondents completed questionnaires on sociodemographics and presence of chronic diseases (hypertension, diabetes, chronic rheumatism, chronic lung diseases, stroke, and mental illness), the short-form 36-item health survey (SF-36) version 1, and selected items of the SF-36v2 that were different from those of SF-36v1. Responses to the short-form 12-item health survey (SF-12) were extracted from responses to the SF-36 items. SF-6D health utility scores were derived from SF-36 version 1 (SF-6D(SF-36v1)), SF-36 version 2 (SF-6D(SF-36v2)), and SF-12 version 2 (SF-6D(SF-12v2)) using the Hong Kong SF-6D value set. Population norms of SF-6D(SF-36v1), SF-6D(SF-36v2), and SF-6D(SF-12v2) for the Hong Kong Chinese were 0.7947 (± 0.0048), 0.7862 (± 0.0049), and 0.8147 (± 0.0050), respectively. The three SF-6D scores were highly correlated (0.861-0.954) and had a high degree of reliability and absolute agreement. Males had higher health utility scores (SF-6D(SF-36v1): 0.0025; SF-6D(SF-36v2): 0.025; SF-6D(SF-12v2): 0.018) and reported fewer problems in all dimensions than women. Respondents with a higher number of chronic diseases had lower SF-6D scores. Among all respondents with one or more chronic diseases, those with hypertension scored the highest whereas those with mental illness scored the lowest. The SF-6D utility scores derived from different SF-36 or SF-12 health surveys were different.
The population norms based on these three health surveys enable the normative comparisons of health utility scores from specific population or patient groups, and provide estimates of age-gender adjusted health utility scores for health economic evaluations.
Eliciting affect via immersive virtual reality: a tool for adolescent risk reduction.
Hadley, Wendy; Houck, Christopher D; Barker, David H; Garcia, Abbe Marrs; Spitalnick, Josh S; Curtis, Virginia; Roye, Scott; Brown, Larry K
2014-04-01
A virtual reality environment (VRE) was designed to expose participants to substance use and sexual risk-taking cues to examine the utility of VR in eliciting adolescent physiological arousal. 42 adolescents (55% male) with a mean age of 14.54 years (SD = 1.13) participated. Physiological arousal was examined through heart rate (HR), respiratory sinus arrhythmia (RSA), and self-reported somatic arousal. A within-subject design (neutral VRE, VR party, and neutral VRE) was utilized to examine changes in arousal. The VR party demonstrated an increase in physiological arousal relative to a neutral VRE. Examination of individual segments of the party (e.g., orientation, substance use, and sexual risk) demonstrated that HR was significantly elevated across all segments, whereas only the orientation and sexual risk segments demonstrated significant impact on RSA. This study provides preliminary evidence that VREs can be used to generate physiological arousal in response to substance use and sexual risk cues.
Hewner, Sharon; Casucci, Sabrina; Castner, Jessica
2016-08-01
Economically disadvantaged individuals with chronic disease have high rates of in-patient (IP) readmission and emergency department (ED) utilization following initial hospitalization. The purpose of this study was to explore the relationships between chronic disease complexity, health system integration (admission to an accountable care organization [ACO] hospital), availability of care management interventions (membership in a managed care organization [MCO]), and 90-day post-discharge healthcare utilization. We used de-identified Medicaid claims data from two counties in western New York. The study population was 114,295 individuals who met inclusion criteria, of whom 7,179 had index hospital admissions in the first 9 months of 2013. Individuals were assigned to three disease complexity segments based on the presence of 12 prevalent conditions. The 30-day IP readmission rates ranged from 6% in the non-chronic segment to 12% in the chronic disease complexity segment and 21% in the organ system failure complexity segment. Rehospitalization rates (both IP and ED) were lower for patients in MCOs and ACOs than for those in fee-for-service care. Complexity of chronic disease, initial hospitalization in a facility that was part of an ACO, MCO membership, female gender, and longer length of stay were associated with a significantly longer time to readmission in the first 90 days, that is, fewer readmissions. Our results add to evidence that high-value post-discharge utilization (fewer IP or ED rehospitalizations and early outpatient follow-up) requires population-based transitional care strategies that improve continuity between settings and take into account the illness complexity of the Medicaid population. © 2016 Wiley Periodicals, Inc.
The report gives results of activities relating to the Advanced Utility Simulation Model (AUSM): sensitivity testing. comparison with a mature electric utility model, and calibration to historical emissions. The activities were aimed at demonstrating AUSM's validity over input va...
Bragman, Felix J.S.; McClelland, Jamie R.; Jacob, Joseph; Hurst, John R.; Hawkes, David J.
2017-01-01
A fully automated, unsupervised lobe segmentation algorithm is presented based on a probabilistic segmentation of the fissures and the simultaneous construction of a population model of the fissures. A two-class probabilistic segmentation segments the lung into candidate fissure voxels and the surrounding parenchyma. This was combined with anatomical information and a groupwise fissure prior to drive non-parametric surface fitting to obtain the final segmentation. The performance of our fissure segmentation was validated on 30 patients from the COPDGene cohort, achieving a high median F1-score of 0.90 and showing general insensitivity to filter parameters. We evaluated our lobe segmentation algorithm on the LOLA11 dataset, which contains 55 cases at varying levels of pathology, and achieved the highest score (0.884) among the automated algorithms. Our method was further tested quantitatively and qualitatively on 80 patients from the COPDGene study at varying levels of functional impairment. Accurate segmentation of the lobes is shown at various degrees of fissure incompleteness for 96% of all cases. We also show the utility of including a groupwise prior in segmenting the lobes in regions of grossly incomplete fissures. PMID:28436850
Exploration of sequence space as the basis of viral RNA genome segmentation.
Moreno, Elena; Ojosnegros, Samuel; García-Arriaza, Juan; Escarmís, Cristina; Domingo, Esteban; Perales, Celia
2014-05-06
The mechanisms of viral RNA genome segmentation are unknown. On extensive passage of foot-and-mouth disease virus in baby hamster kidney-21 cells, the virus accumulated multiple point mutations and underwent a transition akin to genome segmentation. The standard single RNA genome molecule was replaced by genomes harboring internal in-frame deletions affecting the L- or capsid-coding region. These genomes were infectious and killed cells by complementation. Here we show that the point mutations in the nonstructural protein-coding region (P2, P3) that accumulated in the standard genome before segmentation increased the fitness of the segmented version relative to the standard genome. The fitness increase was documented by intracellular expression of virus-coded proteins and infectious progeny production by RNAs with the internal deletions placed in the sequence context of the parental and evolved genomes. The complementation activity involved several viral proteins, one of them being the leader proteinase L. Thus, a history of genetic drift with accumulation of point mutations was needed to allow a major variation in the structure of a viral genome. Hence, exploration of sequence space by a viral genome (in this case an unsegmented RNA) can reach a point of the space in which a totally different genome structure (in this case, a segmented RNA) is favored over the form that performed the exploration.
Iglesias, Juan Eugenio; Augustinack, Jean C; Nguyen, Khoa; Player, Christopher M; Player, Allison; Wright, Michelle; Roy, Nicole; Frosch, Matthew P; McKee, Ann C; Wald, Lawrence L; Fischl, Bruce; Van Leemput, Koen
2015-07-15
Automated analysis of MRI data of the subregions of the hippocampus requires computational atlases built at a higher resolution than those that are typically used in current neuroimaging studies. Here we describe the construction of a statistical atlas of the hippocampal formation at the subregion level using ultra-high resolution, ex vivo MRI. Fifteen autopsy samples were scanned at 0.13 mm isotropic resolution (on average) using customized hardware. The images were manually segmented into 13 different hippocampal substructures using a protocol specifically designed for this study; precise delineations were made possible by the extraordinary resolution of the scans. In addition to the subregions, manual annotations for neighboring structures (e.g., amygdala, cortex) were obtained from a separate dataset of in vivo, T1-weighted MRI scans of the whole brain (1mm resolution). The manual labels from the in vivo and ex vivo data were combined into a single computational atlas of the hippocampal formation with a novel atlas building algorithm based on Bayesian inference. The resulting atlas can be used to automatically segment the hippocampal subregions in structural MRI images, using an algorithm that can analyze multimodal data and adapt to variations in MRI contrast due to differences in acquisition hardware or pulse sequences. The applicability of the atlas, which we are releasing as part of FreeSurfer (version 6.0), is demonstrated with experiments on three different publicly available datasets with different types of MRI contrast. 
The results show that the atlas and companion segmentation method: 1) can segment T1 and T2 images, as well as their combination, 2) can replicate findings on mild cognitive impairment based on high-resolution T2 data, and 3) can discriminate between Alzheimer's disease subjects and elderly controls with 88% accuracy in standard resolution (1 mm) T1 data, significantly outperforming the atlas in FreeSurfer version 5.3 (86% accuracy) and classification based on whole hippocampal volume (82% accuracy). Copyright © 2015. Published by Elsevier Inc.
Placental fetal stem segmentation in a sequence of histology images
NASA Astrophysics Data System (ADS)
Athavale, Prashant; Vese, Luminita A.
2012-02-01
Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental fetal stems. Analysis of the fetal stems in a placenta could be useful in the study and diagnosis of some diseases like autism. To study the fetal stem structure effectively, we need to automatically and accurately track fetal stems through a sequence of digitized hematoxylin and eosin (H&E) stained histology slides. There are many problems in successfully achieving this goal. Among these problems are the large size of the images, misalignment of consecutive H&E slides, unpredictable inaccuracies of manual tracing, and very complicated texture patterns of various tissue types without clear characteristics. In this paper we propose a novel algorithm to achieve automatic tracing of the fetal stem in a sequence of H&E images, based on an inaccurate manual segmentation of a fetal stem in one of the images. This algorithm combines global affine registration, local non-affine registration and a novel 'dynamic' version of the active contours model without edges. We first use global affine image registration of all the images based on displacement, scaling and rotation. This gives us the approximate location of the corresponding fetal stem in the image that needs to be traced. We then use the affine registration algorithm "locally" near this location. At this point, we use a fast non-affine registration based on an L2-similarity measure and diffusion regularization to get a better location of the fetal stem. Finally, we have to take into account inaccuracies in the initial tracing. This is achieved through a novel dynamic version of the active contours model without edges where the coefficients of the fitting terms are computed iteratively to ensure that we obtain a unique stem in the segmentation.
The segmentation thus obtained can then be used as an initial guess to obtain segmentation in the rest of the images in the sequence. This constitutes an important step in the extraction and understanding of the fetal stem vasculature.
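The fitting step of the "active contours without edges" model can be sketched without the level-set machinery: alternate between updating the two region means and reassigning each pixel to the region with the lower weighted fitting cost. The sketch below omits the curve-length regularization and the paper's dynamic coefficient update; it is an illustration, not the authors' algorithm:

```python
import numpy as np

def two_phase_fit(img, init_mask, n_iter=20, lam1=1.0, lam2=1.0):
    """Minimal piecewise-constant two-phase fitting: alternate between
    updating the region means c1, c2 and reassigning each pixel to the
    region with the lower weighted fitting cost lam_i * (I - c_i)^2."""
    mask = init_mask.astype(bool)
    for _ in range(n_iter):
        c1 = img[mask].mean() if mask.any() else 0.0
        c2 = img[~mask].mean() if (~mask).any() else 0.0
        new = lam1 * (img - c1) ** 2 < lam2 * (img - c2) ** 2
        if np.array_equal(new, mask):    # converged: assignments stable
            break
        mask = new
    return mask
```

In the full model, making `lam1` and `lam2` evolve across iterations (as the paper's "dynamic" version does) biases the fit toward a single connected stem rather than all similarly bright tissue.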
Rotation invariant eigenvessels and auto-context for retinal vessel detection
NASA Astrophysics Data System (ADS)
Montuoro, Alessio; Simader, Christian; Langs, Georg; Schmidt-Erfurth, Ursula
2015-03-01
Retinal vessels are one of the few anatomical landmarks that are clearly visible in various imaging modalities of the eye. As they are also relatively invariant to disease progression, retinal vessel segmentation allows cross-modal and temporal registration enabling exact diagnosis for various eye diseases like diabetic retinopathy, hypertensive retinopathy or age-related macular degeneration (AMD). Due to the clinical significance of retinal vessels many different approaches for segmentation have been published in the literature. In contrast to other segmentation approaches our method is not specifically tailored to the task of retinal vessel segmentation. Instead we utilize a more general image classification approach and show that this can achieve comparable results. In the proposed method we utilize the concepts of eigenfaces and auto-context. Eigenfaces have been described quite extensively in the literature and their performance is well known. They are however quite sensitive to translation and rotation. The former was addressed by computing the eigenvessels in local image windows of different scales, the latter by estimating and correcting the local orientation. Auto-context aims to incorporate automatically generated context information into the training phase of classification approaches. It has been shown to improve the performance of spinal cord segmentation and 3D brain image segmentation. The proposed method achieves an area under the receiver operating characteristic (ROC) curve of Az = 0.941 on the DRIVE data set, being comparable to current state-of-the-art approaches.
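The eigenfaces construction transfers to vessel patches unchanged: stack flattened training windows, subtract the mean, and keep the top right-singular vectors as the basis. A minimal, generic sketch (not the paper's rotation-corrected, multi-scale version):

```python
import numpy as np

def eigenpatches(patches, n_components):
    """'Eigenfaces'-style basis: top principal components of a stack of
    flattened image patches, via SVD of the mean-centred data matrix."""
    X = patches.reshape(len(patches), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]          # each row is one eigenpatch

def project(patch, mean, basis):
    """Coordinates of a patch in the eigenpatch basis: the compact
    feature vector fed to a downstream classifier."""
    return basis @ (patch.ravel().astype(float) - mean)
```

Computing this basis separately per window scale, after orientation correction, is what turns generic eigenfaces into the "eigenvessels" features described above.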
NETPATH-WIN: an interactive user version of the mass-balance model, NETPATH
El-Kadi, A. I.; Plummer, Niel; Aggarwal, P.
2011-01-01
NETPATH-WIN is an interactive user version of NETPATH, an inverse geochemical modeling code used to find mass-balance reaction models that are consistent with the observed chemical and isotopic composition of waters from aquatic systems. NETPATH-WIN was constructed to migrate NETPATH applications into the Microsoft WINDOWS® environment. The new version facilitates model utilization by eliminating difficulties in data preparation and results analysis of the DOS version of NETPATH, while preserving all of the capabilities of the original version. Through example applications, the note describes some of the features of NETPATH-WIN as applied to adjustment of radiocarbon data for geochemical reactions in groundwater systems.
ERIC Educational Resources Information Center
Fong, Soon Fook
2013-01-01
This study investigated the effects of segmented animated graphics utilized to facilitate learning of electrolysis of aqueous solution. A total of 171 Secondary Four chemistry students with two different spatial ability levels were randomly assigned to one of the experimental conditions: (a) text with multiple static graphics (MSG), (b) text with…
ERIC Educational Resources Information Center
Van Epps, Daniel L.
2013-01-01
Expanded telecommunications was deemed a serious need for end users. The "Local Market" and "Last Mile" market segments have largely consolidated into "natural utilities". Competition and access problems occur if new providers enter the local market and desire competitive access and service to end users. Local and…
Market segmentation and analysis of Japan's residential post and beam construction market.
Joseph A. Roos; Ivan L. Eastin; Hisaaki Matsuguma
2005-01-01
A mail survey of Japanese post and beam builders was conducted to measure their level of ethnocentrism, market orientation, risk aversion, and price consciousness. The data were analyzed utilizing factor and cluster analysis. The results showed that Japanese post and beam builders can be divided into three distinct market segments: open to import...
ERIC Educational Resources Information Center
Jover, Julio Lillo; Moreira, Humberto
2005-01-01
Four experiments evaluated AMLA temporal version accuracy to measure relative luminosity in people with and without color blindness and, consequently, to provide the essential information to avoid poor figure-background combinations in any possible "specific screen-specific observer" pair. Experiment 1 showed that two very different apparatus, a…
Geremew, Kumlachew; Gedefaw, Molla; Dagnew, Zewdu; Jara, Dube
2014-01-01
Traditional biomass has been the major source of cooking energy for a major segment of the Ethiopian population for thousands of years. Cognizant of this energy poverty, the Government of Ethiopia has been spending huge sums of money to increase hydroelectric power generating stations. To assess current levels and correlates of traditional cooking energy source utilization. A community-based cross-sectional study was conducted employing both quantitative and qualitative approaches on 423 systematically selected households for the quantitative part and 20 purposively selected people for the qualitative part. SPSS version 16 for Windows was used to analyze the quantitative data. Logistic regression was fitted to assess possible associations, and their strength was measured using odds ratios at 95% CI. Qualitative data were analyzed thematically. The study indicated that 95% of households still use traditional biomass for cooking. Those who were less knowledgeable about the negative health and environmental effects of traditional cooking energy sources were seven and six times more likely to utilize them compared with those who were knowledgeable (AOR (95% CI) = 7.56 (1.635, 34.926), AOR (95% CI) = 6.68 (1.80, 24.385), resp.). The most outstanding finding of this study was that people use traditional energy for cooking mainly due to lack of knowledge and their beliefs about food prepared using traditional energy. That means "...people still believe that food cooked with charcoal is believed to taste delicious than cooked with other means." The majority of households use traditional biomass for cooking due to lack of knowledge and belief. Therefore, mechanisms should be designed to promote electric energy and to teach the public about the health effects of traditional cooking energy sources.
A distributed algorithm for demand-side management: Selling back to the grid.
Latifi, Milad; Khalili, Azam; Rastegarnia, Amir; Zandi, Sajad; Bazzi, Wael M
2017-11-01
Demand side energy consumption scheduling is a well-known issue in the smart grid research area. However, there is a lack of a comprehensive method to manage the demand side and consumer behavior in order to obtain an optimum solution. The method needs to address several aspects, including the scale-free requirement and distributed nature of the problem, consideration of renewable resources, allowing consumers to sell electricity back to the main grid, and adaptivity to a local change in the solution point. In addition, the model should allow compensation to consumers and ensure certain satisfaction levels. To tackle these issues, this paper proposes a novel autonomous demand side management technique which minimizes consumer utility costs and maximizes consumer comfort levels in a fully distributed manner. The technique uses a new logarithmic cost function and allows consumers to sell excess electricity (e.g. from renewable resources) back to the grid in order to reduce their electric utility bill. To develop the proposed scheme, we first formulate the problem as a constrained convex minimization problem. Then, it is converted to an unconstrained version using the segmentation-based penalty method. At each consumer location, we deploy an adaptive diffusion approach to obtain the solution in a distributed fashion. The use of adaptive diffusion makes it possible for consumers to find the optimum energy consumption schedule with a small number of information exchanges. Moreover, the proposed method is able to track drifts resulting from changes in the price parameters and consumer preferences. Simulations and numerical results show that our framework can reduce the total load demand peaks, lower the consumer utility bill, and improve the consumer comfort level.
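The penalty-method conversion can be illustrated on a stripped-down, single-consumer version of the problem. The sketch below is a toy with a linear price term and a quadratic penalty on the daily energy requirement, solved by projected gradient descent; it is not the paper's logarithmic cost function or its adaptive diffusion algorithm:

```python
import numpy as np

def schedule(price, energy_needed, rho=10.0, lr=0.005, n_iter=20000):
    """Projected gradient descent on the penalized objective
        sum_h price[h] * x[h] + rho * (sum(x) - energy_needed)**2,
    i.e. the unconstrained penalty form of the equality-constrained
    scheduling problem, with the nonnegativity bound x >= 0."""
    x = np.full(len(price), energy_needed / len(price))   # uniform start
    for _ in range(n_iter):
        grad = np.asarray(price, dtype=float) + 2.0 * rho * (x.sum() - energy_needed)
        x = np.maximum(x - lr * grad, 0.0)                # project onto x >= 0
    return x
```

Consumption migrates to the cheapest hours while the penalty weight `rho` trades off constraint violation against cost; larger `rho` enforces the energy requirement more tightly.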
Automated Instructional Management Systems (AIMS) Version III, Users Manual.
ERIC Educational Resources Information Center
New York Inst. of Tech., Old Westbury.
This document sets forth the procedures necessary to utilize and understand the operating characteristics of the Automated Instructional Management System - Version III, a computer-based system for management of educational processes. Directions for initialization, including internal and user files; system and operational input requirements;…
Active appearance model and deep learning for more accurate prostate segmentation on MRI
NASA Astrophysics Data System (ADS)
Cheng, Ruida; Roth, Holger R.; Lu, Le; Wang, Shijun; Turkbey, Baris; Gandler, William; McCreedy, Evan S.; Agarwal, Harsh K.; Choyke, Peter; Summers, Ronald M.; McAuliffe, Matthew J.
2016-03-01
Prostate segmentation on 3D MR images is a challenging task due to image artifacts, large inter-patient prostate shape and texture variability, and lack of a clear prostate boundary specifically at apex and base levels. We propose a supervised machine learning model that combines atlas based Active Appearance Model (AAM) with a Deep Learning model to segment the prostate on MR images. The performance of the segmentation method is evaluated on 20 unseen MR image datasets. The proposed method combining AAM and Deep Learning achieves a mean Dice Similarity Coefficient (DSC) of 0.925 for whole 3D MR images of the prostate using axial cross-sections. The proposed model utilizes the adaptive atlas-based AAM model and Deep Learning to achieve significant segmentation accuracy.
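The reported DSC is a standard overlap measure and is trivial to compute from two binary masks; a minimal helper for reference:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); 1.0 by convention when both are empty."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

A DSC of 0.925, as reported above, means the automated and reference prostate masks overlap in the large majority of their combined volume.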
Cunningham, Charles E; Zipursky, Robert B; Christensen, Bruce K; Bieling, Peter J; Madsen, Victoria; Rimas, Heather; Mielko, Stephanie; Wilson, Fiona; Furimsky, Ivana; Jeffs, Lisa; Munn, Catharine
2017-01-01
We modeled design factors influencing the intent to use a university mental health service. Between November 2012 and October 2014, 909 undergraduates participated. Using a discrete choice experiment, participants chose between hypothetical campus mental health services. Latent class analysis identified three segments. A Psychological/Psychiatric Service segment (45.5%) was most likely to contact campus health services delivered by psychologists or psychiatrists. An Alternative Service segment (39.3%) preferred to talk to peer-counselors who had experienced mental health problems. A Hesitant segment (15.2%) reported greater distress but seemed less intent on seeking help. They preferred services delivered by psychologists or psychiatrists. Simulations predicted that, rather than waiting for standard counseling, the Alternative Service segment would prefer immediate access to E-Mental health. The Usual Care and Hesitant segments would wait 6 months for standard counseling. E-Mental Health options could engage students who may not wait for standard services.
Distance-based over-segmentation for single-frame RGB-D images
NASA Astrophysics Data System (ADS)
Fang, Zhuoqun; Wu, Chengdong; Chen, Dongyue; Jia, Tong; Yu, Xiaosheng; Zhang, Shihong; Qi, Erzhao
2017-11-01
Over-segmentation, known as super-pixels, is a widely used preprocessing step in segmentation algorithms. An over-segmentation algorithm partitions an image into regions of perceptually similar pixels, but performs poorly in indoor environments when it relies on the color image alone. Fortunately, RGB-D images can improve performance on images of indoor scenes. In order to segment RGB-D images into super-pixels effectively, we propose a novel algorithm, DBOS (Distance-Based Over-Segmentation), which achieves full coverage of super-pixels on the image. DBOS fills the holes in depth images to fully utilize the depth information, and applies a SLIC-like framework for fast running. Additionally, depth features such as plane projection distance are extracted to compute the distance measure that is the core of SLIC-like frameworks. Experiments on RGB-D images of the NYU Depth V2 dataset demonstrate that DBOS outperforms state-of-the-art methods in quality while maintaining comparable speed.
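DBOS's exact distance measure is not reproduced here; the sketch below shows the generic shape of a SLIC-like distance extended with a depth term (the function name, weights, and the simple depth-difference term are illustrative assumptions standing in for the paper's plane-projection-distance feature):

```python
import numpy as np

def slic_like_distance(c1, c2, p1, p2, d1, d2, S=10.0, m=10.0, w_d=1.0):
    """Combined distance between a pixel and a superpixel center:
    color term + spatially normalized term + depth term.
    S is the grid interval, m the compactness weight, w_d the depth weight."""
    dc = np.linalg.norm(np.asarray(c1, float) - np.asarray(c2, float))  # color (e.g. Lab)
    ds = np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float))  # image-plane offset
    dd = abs(float(d1) - float(d2))                                     # depth difference
    return np.sqrt(dc ** 2 + (m * ds / S) ** 2 + (w_d * dd) ** 2)
```

Assigning each pixel to the center minimizing this distance, then updating centers, is the SLIC-like iteration the abstract refers to.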
A general system for automatic biomedical image segmentation using intensity neighborhoods.
Chen, Cheng; Ozolek, John A; Wang, Wei; Rohde, Gustavo K
2011-01-01
Image segmentation is important, with applications to several problems in biology and medicine. While extensively researched, current segmentation methods generally perform adequately in the applications for which they were designed, but often require extensive modification or calibration before being used in a different application. We describe an approach that, with few modifications, can be used in a variety of image segmentation problems. The approach is based on a supervised learning strategy that utilizes intensity neighborhoods to assign each pixel in a test image its correct class based on training data. We describe methods for modeling rotations and variations in scale, as well as subset selection for training the classifiers. We show that the performance of our approach in tissue segmentation tasks in magnetic resonance and histopathology microscopy images, as well as nuclei segmentation from fluorescence microscopy images, is similar to or better than several algorithms specifically designed for each of these applications.
NASA Technical Reports Server (NTRS)
Tilton, James C.; Lawrence, William T.; Plaza, Antonio J.
2006-01-01
The hierarchical segmentation (HSEG) algorithm is a hybrid of hierarchical step-wise optimization and constrained spectral clustering that produces a hierarchical set of image segmentations. This segmentation hierarchy organizes image data in a manner that makes the image's information content more accessible for analysis by enabling region-based analysis. This paper discusses data analysis with HSEG and describes several measures of region characteristics that may be useful in analyzing segmentation hierarchies for various applications. Segmentation hierarchy analysis for generating land/water and snow/ice masks from MODIS (Moderate Resolution Imaging Spectroradiometer) data was demonstrated and compared with the corresponding MODIS standard products. The masks based on HSEG segmentation hierarchies compare very favorably to the MODIS standard products. Further, the HSEG-based land/water mask was specifically tailored to the MODIS data, and the HSEG snow/ice mask did not require the setting of a critical threshold as required in the production of the corresponding MODIS standard product.
NASA Astrophysics Data System (ADS)
Zheng, Qiang; Li, Honglun; Fan, Baode; Wu, Shuanhu; Xu, Jindong
2017-12-01
The active contour model (ACM) has been one of the most widely utilized methods in magnetic resonance (MR) brain image segmentation because of its ability to capture topology changes. However, most existing ACMs consider only single-slice information in MR brain image data; that is, the information used in ACM-based segmentation is extracted from only one slice of the MR brain image, which cannot take full advantage of the information in adjacent slices and cannot satisfy the local segmentation of MR brain images. In this paper, a novel ACM is proposed to solve the problem discussed above, which is based on a multivariate local Gaussian distribution and combines the adjacent slices' information in MR brain image data to satisfy segmentation. The segmentation is finally achieved through maximizing the likelihood estimation. Experiments demonstrate the advantages of the proposed ACM over the single-slice ACM in local segmentation of MR brain image series.
PAM4 silicon photonic microring resonator-based transceiver circuits
NASA Astrophysics Data System (ADS)
Palermo, Samuel; Yu, Kunzhi; Roshan-Zamir, Ashkan; Wang, Binhao; Li, Cheng; Seyedi, M. Ashkan; Fiorentino, Marco; Beausoleil, Raymond
2017-02-01
Increased data rates have motivated the investigation of advanced modulation schemes, such as four-level pulse-amplitude modulation (PAM4), in optical interconnect systems in order to enable longer transmission distances and operation with reduced circuit bandwidth relative to non-return-to-zero (NRZ) modulation. Employing this modulation scheme in interconnect architectures based on high-Q silicon photonic microring resonator devices, which occupy small area and allow for inherent wavelength-division multiplexing (WDM), offers a promising solution to address the dramatic increase in datacenter and high-performance computing system I/O bandwidth demands. Two ring modulator device structures are proposed for PAM4 modulation, including a single phase shifter segment device driven with a multi-level PAM4 transmitter and a two-segment device driven by two simple NRZ (MSB/LSB) transmitters. Transmitter circuits which utilize segmented pulsed-cascode high swing output stages are presented for both device structures. Output stage segmentation is utilized in the single-segment device design for PAM4 voltage level control, while in the two-segment design it is used for both independent MSB/LSB voltage levels and impedance control for output eye skew compensation. The 65nm CMOS transmitters supply a 4.4Vppd output swing for 40Gb/s operation when driving depletion-mode microring modulators implemented in a 130nm SOI process, with the single- and two-segment designs achieving 3.04 and 4.38mW/Gb/s, respectively. A PAM4 optical receiver front-end is also described which employs a large input-stage feedback resistor transimpedance amplifier (TIA) cascaded with an adaptively-tuned continuous-time linear equalizer (CTLE) for improved sensitivity. Receiver linearity, critical in PAM4 systems, is achieved with a peak-detector-based automatic gain control (AGC) loop.
Boundary fitting based segmentation of fluorescence microscopy images
NASA Astrophysics Data System (ADS)
Lee, Soonam; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.
2015-03-01
Segmentation is a fundamental step in quantifying characteristics, such as the volume, shape, and orientation of cells and/or tissue. However, quantification of these characteristics still poses a challenge due to the unique properties of microscopy volumes. This paper proposes a 2D segmentation method that utilizes a combination of adaptive and global thresholding, potentials, z-direction refinement, branch pruning, end point matching, and boundary fitting to delineate tubular objects in microscopy volumes. Experimental results demonstrate that the proposed method achieves better performance than an active-contours-based scheme.
Segmentation of oil spills in SAR images by using discriminant cuts
NASA Astrophysics Data System (ADS)
Ding, Xianwen; Zou, Xiaolin
2018-02-01
The discriminant cut is used to segment oil spills in synthetic aperture radar (SAR) images. The proposed approach is region-based, which enables it to capture and utilize spatial information in SAR images. Real SAR images (ALOS-1 PALSAR and Sentinel-1) were collected and used to validate the accuracy of the proposed approach for oil spill segmentation. The accuracy of the proposed approach is higher than that of the fuzzy C-means classification method.
ERIC Educational Resources Information Center
Florida State Univ., Tallahassee. Program of Vocational Education.
Part of a system by which local education agency (LEA) personnel may evaluate secondary and postsecondary vocational education programs, this fifth of eight components focuses on an analysis of the utilization of community resources. Utilization of the component is designed to open communication channels among all segments of the community so that…
Robotic Vision-Based Localization in an Urban Environment
NASA Technical Reports Server (NTRS)
Mchenry, Michael; Cheng, Yang; Matthies
2007-01-01
A system of electronic hardware and software, now undergoing development, automatically estimates the location of a robotic land vehicle in an urban environment using a somewhat imprecise map, which has been generated in advance from aerial imagery. This system does not utilize the Global Positioning System and does not include any odometry, inertial measurement units, or any other sensors except a stereoscopic pair of black-and-white digital video cameras mounted on the vehicle. Of course, the system also includes a computer running software that processes the video image data. The software consists mostly of three components corresponding to the three major image-data-processing functions. Visual Odometry: This component automatically tracks point features in the imagery and computes the relative motion of the cameras between sequential image frames. This component incorporates a modified version of a visual-odometry algorithm originally published in 1989. The algorithm selects point features, performs multiresolution area-correlation computations to match the features in stereoscopic images, tracks the features through the sequence of images, and uses the tracking results to estimate the six-degree-of-freedom motion of the camera between consecutive stereoscopic pairs of images (see figure). Urban Feature Detection and Ranging: Using the same data as those processed by the visual-odometry component, this component strives to determine the three-dimensional (3D) coordinates of vertical and horizontal lines that are likely to be parts of, or close to, the exterior surfaces of buildings. The basic sequence of processes performed by this component is the following: 1. An edge-detection algorithm is applied, yielding a set of linked lists of edge pixels, a horizontal-gradient image, and a vertical-gradient image. 2. Straight-line segments of edges are extracted from the linked lists generated in step 1.
Any straight-line segments longer than an arbitrary threshold (e.g., 30 pixels) are assumed to belong to buildings or other artificial objects. 3. A gradient-filter algorithm is used to test straight-line segments longer than the threshold to determine whether they represent edges of natural or artificial objects. In somewhat oversimplified terms, the test is based on the assumption that the gradient of image intensity varies little along a segment that represents the edge of an artificial object.
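Step 3's gradient-filter test can be sketched as follows, under the stated assumption that the gradient of image intensity varies little along the edge of an artificial object (function name, threshold value, and data layout are illustrative, not the system's actual code):

```python
import numpy as np

def is_artificial_edge(grad_x, grad_y, segment_pixels, max_std_deg=10.0):
    """Gradient-filter test (sketch): a straight-line edge segment is judged
    'artificial' (e.g. a building edge) if the image-intensity gradient
    direction varies little along it. grad_x/grad_y are the horizontal- and
    vertical-gradient images from step 1; segment_pixels is a list of (row,
    col) pixels on the segment."""
    angles = []
    for (r, c) in segment_pixels:
        angles.append(np.arctan2(grad_y[r, c], grad_x[r, c]))
    angles = np.unwrap(np.asarray(angles))  # avoid wrap-around jumps at +/-pi
    return np.degrees(angles.std()) < max_std_deg
```

A segment crossing foliage or other natural texture would show a large spread of gradient directions and fail the test.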
Dynamic deformable models for 3D MRI heart segmentation
NASA Astrophysics Data System (ADS)
Zhukov, Leonid; Bao, Zhaosheng; Gusikov, Igor; Wood, John; Breen, David E.
2002-05-01
Automated or semiautomated segmentation of medical images decreases interstudy variation, observer bias, and postprocessing time, as well as providing clinically relevant quantitative data. In this paper we present a new dynamic deformable modeling approach to 3D segmentation. It utilizes recently developed dynamic remeshing techniques and curvature estimation methods to produce high-quality meshes. The approach has been implemented in an interactive environment that allows a user to specify an initial model and identify key features in the data. These features act as hard constraints that the model must not pass through as it deforms. We have employed the method to perform semi-automatic segmentation of heart structures from cine MRI data.
A Survey Version of Full-Profile Conjoint Analysis.
ERIC Educational Resources Information Center
Chrzan, Keith
Two studies were conducted to test the viability of a survey version of full-profile conjoint analysis. Conjoint analysis describes a variety of analytic techniques for measuring subjects'"utilities," or preferences for the individual attributes or levels of attributes that constitute objects under study. The first study compared the…
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Kuriakose, Jean W.; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Guo, Yanhui; Patel, Smita; Kazerooni, Ella A.
2012-03-01
Vessel segmentation is a fundamental step in an automated pulmonary embolism (PE) detection system. The purpose of this study is to improve the segmentation scheme for pulmonary vessels affected by PE and other lung diseases. We have developed a multiscale hierarchical vessel enhancement and segmentation (MHES) method for pulmonary vessel tree extraction based on the analysis of eigenvalues of Hessian matrices. However, it is difficult to segment the pulmonary vessels accurately under suboptimal conditions, such as vessels occluded by PEs, surrounded by lymphoid tissues or lung diseases, or crossing other vessels. In this study, we developed a new vessel refinement method utilizing the curved planar reformation (CPR) technique combined with an optimal path finding method (MHES-CROP). The MHES-segmented vessels, straightened in the CPR volume, were refined using adaptive gray-level thresholding, where the local threshold was obtained from least-squares estimation of a spline curve fitted to the gray levels of the vessel along the straightened volume. An optimal path finding method based on Dijkstra's algorithm was finally used to trace the correct path for the vessel of interest. Two and eight CTPA scans were randomly selected as training and test data sets, respectively. Forty volumes of interest (VOIs) containing "representative" vessels were manually segmented by a radiologist experienced in CTPA interpretation and used as the reference standard. The results show that, for the 32 test VOIs, the average percentage volume error relative to the reference standard was improved from 32.9+/-10.2% using the MHES method to 9.9+/-7.9% using the MHES-CROP method. The accuracy of vessel segmentation was improved significantly (p<0.05). The intraclass correlation coefficient (ICC) of the segmented vessel volume between the automated segmentation and the reference standard was improved from 0.919 to 0.988.
Quantitative comparison of the MHES method and the MHES-CROP method with the reference standard was also evaluated by the Bland-Altman plot. This preliminary study indicates that the MHES-CROP method has the potential to improve PE detection.
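The final tracing step above is based on Dijkstra's algorithm; a generic sketch follows (the toy graph and unit costs are illustrative, not the paper's vessel-tracing cost function):

```python
import heapq

def dijkstra_path(graph, start, goal):
    """Dijkstra's shortest path. graph maps node -> {neighbor: cost}.
    In the MHES-CROP context, nodes would be candidate vessel points and
    edge costs would penalize straying from the vessel of interest."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

# Toy graph: the direct edge a->c (cost 4) loses to a->b->c (cost 2)
graph = {"a": {"b": 1.0, "c": 4.0}, "b": {"c": 1.0}, "c": {}}
print(dijkstra_path(graph, "a", "c"))  # (['a', 'b', 'c'], 2.0)
```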
An Evaluation of Research Replication with Q Method and Its Utility in Market Segmentation.
ERIC Educational Resources Information Center
Adams, R. C.
Precipitated by questions of using Q methodology in television market segmentation and of the replicability of such research, this paper reports on both a reexamination of 1968 research by Joseph M. Foley and an attempt to replicate Foley's study. By undertaking a reanalysis of the Foley data, the question of replication in Q method is addressed.…
NASA Technical Reports Server (NTRS)
Ryaciotaki-Boussalis, Helen A.; Wang, Shyh Jong
1989-01-01
The problem of vibration suppression in segmented reflector telescopes is considered. The decomposition of the structure into smaller components is discussed, and control laws for vibration suppression as well as conditions for stability at the local level are derived. These conditions and the properties of the interconnecting patterns are then utilized to obtain sufficient conditions for global stability.
Thomas, Benjamin J.; Galor, Anat; Nanji, Afshan A.; Sayyad, Fouad El; Wang, Jianhua; Dubovy, Sander R.; Joag, Madhura G.; Karp, Carol L.
2014-01-01
The development of optical coherence tomography (OCT) technology has helped to usher in a new era of in vivo diagnostic imaging of the eye. The utilization of OCT for imaging of the anterior segment and ocular surface has evolved from time-domain devices to spectral-domain devices with greater penetrance and resolution, providing novel images of anterior segment pathology to assist in diagnosis and management of disease. Ocular surface squamous neoplasia (OSSN) is one such pathology that has proven demonstrable by certain anterior segment OCT machines, specifically the newer devices capable of performing ultra high-resolution OCT (UHR-OCT). Distinctive features of OSSN on high resolution OCT allow for diagnosis and differentiation from other ocular surface pathologies. Subtle findings on these images help to characterize the OSSN lesions beyond what is apparent with the clinical examination, providing guidance for clinical management. The purpose of this review is to examine the published literature on the utilization of UHR-OCT for the diagnosis and management of OSSN, as well as to report novel uses of this technology and potential directions for its future development. PMID:24439046
New approach for segmentation and recognition of handwritten numeral strings
NASA Astrophysics Data System (ADS)
Sadri, Javad; Suen, Ching Y.; Bui, Tien D.
2004-12-01
In this paper, we propose a new system for segmentation and recognition of unconstrained handwritten numeral strings. The system uses a combination of foreground and background features for segmentation of touching digits. The method introduces new algorithms for traversing the top/bottom foreground skeletons of the touched digits, for finding feature points on these skeletons, and for matching them to build all the segmentation paths. For the first time, a genetic representation is used to express all the segmentation hypotheses. Our genetic algorithm searches and evolves the population of candidate segmentations and finds the one with the highest confidence for its segmentation and recognition. We have also used a new method for feature extraction which lowers the variations in the shapes of the digits, and then an MLP neural network is utilized to produce the labels and confidence values for those digits. The NIST SD19 and CENPARMI databases are used for evaluating the system. Our system achieves a correct segmentation-recognition rate of 96.07% with a rejection rate of 2.61%, which compares favorably with those reported in the literature.
Easy-interactive and quick psoriasis lesion segmentation
NASA Astrophysics Data System (ADS)
Ma, Guoli; He, Bei; Yang, Wenming; Shu, Chang
2013-12-01
This paper proposes an interactive psoriasis lesion segmentation algorithm based on the Gaussian Mixture Model (GMM). Psoriasis is an incurable skin disease that affects a large population worldwide. PASI (Psoriasis Area and Severity Index) is the gold standard utilized by dermatologists to monitor the severity of psoriasis. Computer-aided methods of calculating PASI are more objective and accurate than human visual assessment, and psoriasis lesion segmentation is the basis of the whole calculation. This segmentation task differs from common foreground/background segmentation problems. Our algorithm is inspired by GrabCut and consists of three main stages. First, the skin area is extracted from the background scene by transforming the RGB values into the YCbCr color space. Second, a rough segmentation of normal skin and psoriasis lesion is given. This is an initial segmentation obtained by thresholding a single Gaussian model, and the thresholds are adjustable, which enables user interaction. Third, two GMMs, one for the initial normal skin and one for the psoriasis lesion, are built to refine the segmentation. Experimental results demonstrate the effectiveness of the proposed algorithm.
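The second stage, thresholding a single Gaussian model of skin intensity, can be sketched as follows (the threshold direction and the default k are illustrative assumptions; in the paper the thresholds are user-adjustable):

```python
import numpy as np

def rough_lesion_mask(skin_values, k=1.5):
    """Stage-two sketch: fit a single Gaussian (mean, std) to the intensities
    of pixels already classified as skin, then flag pixels more than k
    standard deviations below the mean as lesion candidates. k plays the
    role of the user-adjustable threshold."""
    mu, sigma = skin_values.mean(), skin_values.std()
    return skin_values < mu - k * sigma
```

The resulting rough labels would then seed the two GMMs (normal skin vs. lesion) used in the refinement stage.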
Sensor-oriented feature usability evaluation in fingerprint segmentation
NASA Astrophysics Data System (ADS)
Li, Ying; Yin, Yilong; Yang, Gongping
2013-06-01
Existing fingerprint segmentation methods usually process fingerprint images captured by different sensors with the same feature or feature set. We propose to improve fingerprint segmentation results in view of an important fact: images from different sensors have different characteristics for segmentation. Feature usability evaluation means evaluating the usability of candidate features in order to find a personalized feature or feature set for each sensor and thereby improve segmentation performance. The need for feature usability evaluation in fingerprint segmentation is raised and analyzed as a new issue. To address this issue, we present a decision-tree-based feature-usability evaluation method, which utilizes the C4.5 decision tree algorithm to evaluate and pick the most suitable feature or feature set for fingerprint segmentation from a typical candidate feature set. We apply the novel method to the FVC2002 database of fingerprint images, which were acquired by four different sensors and technologies. Experimental results show that the accuracy of segmentation is improved, and the time consumed by feature extraction is dramatically reduced with the selected feature(s).
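C4.5 ranks candidate split features by gain ratio; a minimal sketch of that criterion, which a feature-usability evaluation could use to rank segmentation features per sensor (a simplification of the paper's decision-tree method, assuming discretized feature values):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label array, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def gain_ratio(feature, labels):
    """C4.5-style gain ratio: information gain of splitting on the
    (discretized) feature, normalized by the split's own entropy.
    Higher values mean the feature better separates the classes
    (e.g. foreground vs background blocks for one sensor)."""
    base = entropy(labels)
    vals, counts = np.unique(feature, return_counts=True)
    cond = sum((c / len(labels)) * entropy(labels[feature == v])
               for v, c in zip(vals, counts))
    split_info = entropy(feature)
    return (base - cond) / split_info if split_info > 0 else 0.0
```

A perfectly predictive binary feature scores 1.0; an uninformative one scores near 0.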
Tan, Li Kuo; Liew, Yih Miin; Lim, Einly; McLaughlin, Robert A
2017-07-01
Automated left ventricular (LV) segmentation is crucial for efficient quantification of cardiac function and morphology to aid subsequent management of cardiac pathologies. In this paper, we parameterize the complete (all short axis slices and phases) LV segmentation task in terms of the radial distances between the LV centerpoint and the endo- and epicardial contours in polar space. We then utilize convolutional neural network regression to infer these parameters. Utilizing parameter regression, as opposed to conventional pixel classification, allows the network to inherently reflect domain-specific physical constraints. We have benchmarked our approach primarily against the publicly available left ventricle segmentation challenge (LVSC) dataset, which consists of 100 training and 100 validation cardiac MRI cases representing a heterogeneous mix of cardiac pathologies and imaging parameters across multiple centers. Our approach attained a 0.77 Jaccard index, which is the highest published overall result in comparison to other automated algorithms. To test general applicability, we also evaluated against the Kaggle Second Annual Data Science Bowl, where the evaluation metric was the indirect clinical measure of LV volume rather than direct myocardial contours. Our approach attained a Continuous Ranked Probability Score (CRPS) of 0.0124, which would have ranked tenth in the original challenge. With this we demonstrate the effectiveness of convolutional neural network regression paired with domain-specific features in clinical segmentation. Copyright © 2017 Elsevier B.V. All rights reserved.
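The polar parameterization can be sketched as follows: each contour is reduced to radial distances from the LV centerpoint at a fixed set of angular bins, which become the regression targets (the binning strategy and function name are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def contour_to_radial(contour, center, n_angles=16):
    """Reduce a closed contour (N x 2 points) to n_angles radial distances
    from the centerpoint, one per evenly spaced angular bin (mean radius of
    the contour points falling in each bin; NaN if a bin is empty)."""
    pts = np.asarray(contour, float) - np.asarray(center, float)
    ang = np.mod(np.arctan2(pts[:, 1], pts[:, 0]), 2 * np.pi)
    rad = np.hypot(pts[:, 0], pts[:, 1])
    bins = (ang / (2 * np.pi) * n_angles).astype(int) % n_angles
    out = np.zeros(n_angles)
    for b in range(n_angles):
        sel = rad[bins == b]
        out[b] = sel.mean() if sel.size else np.nan
    return out
```

A network regressing these distances cannot produce self-intersecting or fragmented contours, which is one way such a parameterization encodes physical constraints.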
NASA Astrophysics Data System (ADS)
Larsen, J. D.; Schaap, M. G.
2013-12-01
Recent advances in computing technology and experimental techniques have made it possible to observe and characterize fluid dynamics at the micro-scale. Many computational methods exist that can adequately simulate fluid flow in porous media. Lattice Boltzmann methods provide the distinct advantage of tracking particles at the microscopic level and returning macroscopic observations. While experimental methods can accurately measure macroscopic fluid dynamics, computational efforts can be used to predict and gain insight into fluid dynamics by utilizing thin sections or computed micro-tomography (CMT) images of core sections. Although substantial efforts have been made to advance non-invasive imaging methods such as CMT, fluid dynamics simulations, and microscale analysis, a true three-dimensional image segmentation technique was not developed until recently. Many competing segmentation techniques are utilized in industry and research settings with varying results. In this study the lattice Boltzmann method is used to simulate Stokes flow in a macroporous soil column. Two-dimensional CMT images were used to reconstruct a three-dimensional representation of the original sample. Six competing segmentation standards were used to binarize the CMT volumes, providing the distinction between solid phase and pore space. The permeability of the reconstructed samples was calculated, with Darcy's law, from lattice Boltzmann simulations of fluid flow in the samples. We compare the simulated permeability from the differing segmentation algorithms to experimental findings.
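Recovering permeability from a lattice Boltzmann velocity field via Darcy's law can be sketched as follows (function and variable names are illustrative; all quantities must be in consistent units, e.g. lattice units):

```python
import numpy as np

def permeability_from_lbm(velocity_field, solid_mask, viscosity, pressure_gradient):
    """Darcy's law rearranged for intrinsic permeability:
        k = mu * <u> / (dP/dx),
    where <u> is the superficial (domain-averaged) velocity along the flow
    direction. The average runs over the WHOLE domain, pores and solids, so
    solid nodes contribute zero flow."""
    u = np.where(solid_mask, 0.0, velocity_field)
    return viscosity * u.mean() / pressure_gradient
```

Because the solid mask enters the average directly, a different binarization of the same CMT volume changes the computed permeability, which is exactly the sensitivity the study compares.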
An improved parallel fuzzy connected image segmentation method based on CUDA.
Wang, Liansheng; Li, Dong; Huang, Shaohui
2016-05-12
The fuzzy connectedness method (FC) is an effective method for extracting fuzzy objects from medical images. However, when FC is applied to large medical image datasets, its running time becomes very expensive. Therefore, a parallel CUDA version of FC (CUDA-kFOE) was proposed by Ying et al. to accelerate the original FC. Unfortunately, CUDA-kFOE does not consider the edges between GPU blocks, which causes miscalculation of edge points. In this paper, an improved algorithm is proposed by adding a correction step on the edge points, which greatly enhances the calculation accuracy. The improved method applies an iterative manner: in the first iteration, the affinity computation strategy is changed and a look-up table is employed for memory reduction; in the second iteration, the voxels miscalculated because of asynchronism are updated again. Three different CT sequences of hepatic vasculature with different sizes were used in the experiments, with three different seeds. An NVIDIA Tesla C2075 was used to evaluate our improved method over these three data sets. Experimental results show that the improved algorithm achieves faster segmentation than the CPU version and higher accuracy than CUDA-kFOE. The calculation results were consistent with the CPU version, which demonstrates that the method corrects the edge-point calculation error of the original CUDA-kFOE. The proposed method has a comparable time cost and fewer errors compared to the original CUDA-kFOE, as demonstrated in the experimental results. In the future, we will focus on automatic acquisition and automatic processing methods.
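For context, a common form of the fuzzy affinity between adjacent voxels in fuzzy-connectedness segmentation is sketched below (the exact affinity and parameters used in CUDA-kFOE may differ; mean and sigma here are placeholder object statistics):

```python
import numpy as np

def fuzzy_affinity(f_c, f_d, mean=0.0, sigma=1.0):
    """Affinity between adjacent voxels with intensities f_c, f_d:
    the minimum of a homogeneity term (Gaussian of the intensity
    difference) and an objectness term (Gaussian of the distance of the
    local average from the expected object intensity). Both lie in (0, 1]."""
    homogeneity = np.exp(-((f_c - f_d) ** 2) / (2 * sigma ** 2))
    objectness = np.exp(-(((f_c + f_d) / 2 - mean) ** 2) / (2 * sigma ** 2))
    return min(homogeneity, objectness)
```

The segmentation then propagates the strength of the weakest affinity along each path from the seed, which is the computation the GPU blocks parallelize.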
The elastic ratio: introducing curvature into ratio-based image segmentation.
Schoenemann, Thomas; Masnou, Simon; Cremers, Daniel
2011-09-01
We present the first ratio-based image segmentation method that allows imposing curvature regularity of the region boundary. Our approach is a generalization of the ratio framework pioneered by Jermyn and Ishikawa so as to allow penalty functions that take into account the local curvature of the curve. The key idea is to cast the segmentation problem as one of finding cyclic paths of minimal ratio in a graph where each graph node represents a line segment. Among ratios whose discrete counterparts can be globally minimized with our approach, we focus in particular on the elastic ratio [Formula: see text] that depends, given an image I, on the oriented boundary C of the segmented region candidate. Minimizing this ratio amounts to finding a curve, neither small nor too curvy, through which the brightness flux is maximal. We prove the existence of minimizers for this criterion among continuous curves with mild regularity assumptions. We also prove that the discrete minimizers provided by our graph-based algorithm converge, as the resolution increases, to continuous minimizers. In contrast to most existing segmentation methods with computable and meaningful, i.e., nondegenerate, global optima, the proposed approach is fully unsupervised in the sense that it does not require any kind of user input such as seed nodes. Numerical experiments demonstrate that curvature regularity allows substantial improvement of the quality of segmentations. Furthermore, our results allow drawing conclusions about global optima of a parameterization-independent version of the snakes functional: the proposed algorithm allows determining parameter values where the functional has a meaningful solution and simultaneously provides the corresponding global solution.
Module 4: Text Versions | State, Local, and Tribal Governments | NREL
own or finance a system. We'll help you understand the different financing types available to local often specific to a particular segment of the market with different amounts of incentives, different system size caps, and different total funds or aggregate capacity. The customer can identify if solar PV
Meng, Qier; Kitasaka, Takayuki; Nimura, Yukitaka; Oda, Masahiro; Ueno, Junji; Mori, Kensaku
2017-02-01
Airway segmentation plays an important role in analyzing chest computed tomography (CT) volumes for computerized lung cancer detection, emphysema diagnosis, and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3D airway tree structure from a CT volume is quite a challenging task. Several researchers have proposed automated airway segmentation algorithms based primarily on region growing and machine learning techniques. However, these methods fail to detect the peripheral bronchial branches, which results in a large amount of leakage. This paper presents a novel approach for more accurate extraction of the complex airway tree. The proposed segmentation method is composed of three steps. First, Hessian analysis is utilized to enhance tube-like structures in CT volumes; then, an adaptive multiscale cavity enhancement filter is employed to detect cavity-like structures with different radii. In the second step, support vector machine learning is utilized to remove the false positive (FP) regions from the result obtained in the previous step. Finally, the graph-cut algorithm is used to refine the candidate voxels to form an integrated airway tree. A test dataset including 50 standard-dose chest CT volumes was used to evaluate our proposed method. The average extraction rate was about 79.1% with a significantly decreased FP rate. A new method of airway segmentation based on local intensity structure and machine learning techniques was developed. The method was shown to be feasible for airway segmentation in a computer-aided diagnosis system for a lung and bronchoscope guidance system.
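The first step's Hessian analysis computes per-voxel second derivatives and examines their eigenvalues; a minimal finite-difference sketch follows (scale selection and the actual enhancement response are omitted; for a bright tube |λ1| is small while λ2, λ3 are large and negative, with signs flipped for dark tubes such as airways):

```python
import numpy as np

def hessian_eigenvalues(volume):
    """Per-voxel Hessian eigenvalues, sorted by increasing magnitude, via
    finite differences. Returns an array of shape volume.shape + (3,).
    A tube-likeness filter would threshold on the eigenvalue pattern."""
    gz, gy, gx = np.gradient(volume.astype(float))
    H = np.empty(volume.shape + (3, 3))
    for i, g in enumerate((gz, gy, gx)):
        d = np.gradient(g)  # second derivatives along z, y, x
        for j in range(3):
            H[..., i, j] = d[j]
    eig = np.linalg.eigvalsh(H)               # ascending by value
    order = np.argsort(np.abs(eig), axis=-1)  # re-sort by magnitude
    return np.take_along_axis(eig, order, axis=-1)
```

For the quadratic test volume below (intensity x²), the interior Hessian is exactly diag(0, 0, 2), so the sorted eigenvalues are (0, 0, 2).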
Investigating service features to sustain engagement in early intervention mental health services.
Becker, Mackenzie; Cunningham, Charles E; Christensen, Bruce K; Furimsky, Ivana; Rimas, Heather; Wilson, Fiona; Jeffs, Lisa; Madsen, Victoria; Bieling, Peter; Chen, Yvonne; Mielko, Stephanie; Zipursky, Robert B
2017-08-23
To understand what service features would sustain patient engagement in early intervention mental health treatment. Mental health patients, family members of individuals with mental illness and mental health professionals completed a survey consisting of 18 choice tasks that involved 14 different service attributes. Preferences were ascertained using importance and utility scores. Latent class analysis revealed segments characterized by distinct preferences. Simulations were carried out to estimate utilization of hypothetical clinical services. Overall, 333 patients and family members and 183 professionals (N = 516) participated. Respondents were distributed between a Professional segment (53%) and a Patient segment (47%) that differed in a number of preferences, including appointment times, individual vs group sessions and the mode of after-hours support. Members of both segments shared preferences for many of the service attributes, including having crisis support available 24 h per day, having a choice of different treatment modalities, being offered help for substance use problems and having a focus on improving symptoms rather than functioning. Simulations predicted that 60% of the Patient segment thought patients would remain engaged with a Hospital service, while 69% of the Professional segment thought patients would be most likely to remain engaged with an E-Health service. Patients, family members and professionals shared a number of preferences about what service characteristics will optimize patient engagement in early intervention services but diverged on others. Providing effective crisis support as well as a range of treatment options should be prioritized in the future design of early intervention services. © 2017 John Wiley & Sons Australia, Ltd.
Globally optimal tumor segmentation in PET-CT images: a graph-based co-segmentation method.
Han, Dongfeng; Bayouth, John; Song, Qi; Taurani, Aakant; Sonka, Milan; Buatti, John; Wu, Xiaodong
2011-01-01
Tumor segmentation in PET and CT images is notoriously challenging due to the low spatial resolution in PET and low contrast in CT images. In this paper, we propose a general framework that uses both PET and CT images simultaneously for tumor segmentation. Our method utilizes the strength of each imaging modality: the superior contrast of PET and the superior spatial resolution of CT. We formulate this problem as a Markov Random Field (MRF) based segmentation of the image pair with a regularization term that penalizes the segmentation difference between PET and CT. Our method simulates the clinical practice of delineating tumor simultaneously using both PET and CT, and is able to concurrently segment tumor from both modalities, achieving globally optimal solutions in low-order polynomial time by a single maximum flow computation. The method was evaluated on clinically relevant tumor segmentation problems. The results showed that our method can effectively make use of both PET and CT image information, yielding a segmentation accuracy of 0.85 in Dice similarity coefficient and an average median Hausdorff distance (HD) of 6.4 mm, a 10% (resp., 16%) improvement over the graph cuts method using the PET (resp., CT) images alone.
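The joint MRF energy can be illustrated on a toy pair of aligned 1-D "images": unary terms come from per-pixel tumor probabilities, smoothness terms link neighbors within each modality, and a coupling term penalizes PET/CT disagreement at the same pixel; a single max-flow computation then yields the exact joint optimum. This is a minimal sketch with invented numbers and a plain Edmonds-Karp solver, not the authors' formulation or code.

```python
from collections import deque

def min_cut_source_side(edges, s, t):
    """Edmonds-Karp max flow on {(u, v): capacity}; returns the set of
    nodes on the source side of the minimum cut."""
    res = {}
    for (u, v), c in edges.items():
        res.setdefault(u, {})[v] = res.get(u, {}).get(v, 0.0) + c
        res.setdefault(v, {}).setdefault(u, 0.0)   # residual back-edge
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:               # BFS augmenting path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        bottleneck, v = float("inf"), t
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, res[u][v])
            v = u
        v = t
        while parent[v] is not None:               # augment along the path
            u = parent[v]
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
            v = u
    side, q = {s}, deque([s])
    while q:                                       # residual reachability
        u = q.popleft()
        for v, c in res[u].items():
            if c > 1e-12 and v not in side:
                side.add(v)
                q.append(v)
    return side

def coseg(pet_prob, ct_prob, smooth=0.3, couple=0.5):
    """Binary PET/CT co-segmentation (label 1 = tumor); `couple` edges
    penalize PET and CT disagreeing at the same pixel."""
    n, E = len(pet_prob), {}
    def add(u, v, c):
        E[(u, v)] = E.get((u, v), 0.0) + c
    for name, prob in (("p", pet_prob), ("c", ct_prob)):
        for i, pr in enumerate(prob):
            add("s", (name, i), pr)            # cost of labeling pixel 0
            add((name, i), "t", 1.0 - pr)      # cost of labeling pixel 1
            if i:                              # within-modality smoothness
                add((name, i - 1), (name, i), smooth)
                add((name, i), (name, i - 1), smooth)
    for i in range(n):                         # inter-modality coupling
        add(("p", i), ("c", i), couple)
        add(("c", i), ("p", i), couple)
    side = min_cut_source_side(E, "s", "t")
    return ([int(("p", i) in side) for i in range(n)],
            [int(("c", i) in side) for i in range(n)])

pet_lab, ct_lab = coseg([0.9, 0.9, 0.8, 0.1, 0.1],
                        [0.8, 0.5, 0.6, 0.2, 0.2])
```

Note how the ambiguous CT pixel (probability 0.5) is pulled to the tumor label by its confident PET counterpart through the coupling edges.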
NASA Astrophysics Data System (ADS)
Afifi, Ahmed; Nakaguchi, Toshiya; Tsumura, Norimichi
2010-03-01
In many medical applications, the automatic segmentation of deformable organs from medical images is indispensable, and its accuracy is of special interest. However, automatic segmentation of these organs is a challenging task owing to their complex shapes. Moreover, medical images usually contain noise, clutter, or occlusion, and considering the image information alone often leads to poor segmentation. In this paper, we propose a fully automated technique for the segmentation of deformable organs from medical images. In this technique, segmentation is performed by fitting a nonlinear shape model to pre-segmented images. Kernel principal component analysis (KPCA) is utilized to capture the complex organ deformations and to construct the nonlinear shape model. The pre-segmentation is carried out by labeling each pixel according to high-level texture features extracted using the overcomplete wavelet packet decomposition. Furthermore, to guarantee an accurate fit between the nonlinear model and the pre-segmented images, the particle swarm optimization (PSO) algorithm is employed to adapt the model parameters to novel images. We demonstrate the competence of the proposed technique by applying it to liver segmentation from computed tomography (CT) scans of different patients.
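The KPCA step can be sketched with numpy alone: build an RBF kernel over training shape vectors, double-center it, and project onto the leading eigenvectors. The toy 2-D "shape vectors" and the kernel width below are assumptions for illustration; the paper's wavelet features and PSO fitting are not reproduced.

```python
import numpy as np

def kpca_fit(X, gamma=1.0, n_comp=2):
    """Kernel PCA with an RBF kernel; returns projections of the
    training samples onto the leading kernel principal components."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                   # RBF kernel matrix
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                            # double-centering
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_comp]        # largest eigenvalues first
    alpha = V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))
    return Kc @ alpha                         # training-sample projections

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (5, 2)),   # two tight clusters standing
               rng.normal(5.0, 0.1, (5, 2))])  # in for two "organ shapes"
proj = kpca_fit(X, gamma=0.1)   # first component separates the clusters
```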
Local and global evaluation for remote sensing image segmentation
NASA Astrophysics Data System (ADS)
Su, Tengfei; Zhang, Shengwei
2017-08-01
In object-based image analysis, producing an accurate segmentation is usually an important issue that must be solved before image classification or target recognition, and the study of segmentation evaluation methods is key to solving it. Almost all existing evaluation strategies focus only on global performance assessment. However, these methods are ineffective when two segmentation results with very similar overall performance have very different local error distributions. To overcome this problem, this paper presents an approach that can both locally and globally quantify segmentation incorrectness. In doing so, region-overlapping metrics are utilized to quantify each reference geo-object's over- and under-segmentation error. These quantified error values are used to produce segmentation error maps that effectively illustrate local segmentation error patterns. The error values for all of the reference geo-objects are aggregated through area-weighted summation, so that global indicators can be derived. An experiment using two scenes of very different high-resolution images showed that the global evaluation part of the proposed approach was almost as effective as the other two global evaluation methods, and the local part was a useful complement for comparing different segmentation results.
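The local-then-global scheme can be sketched directly: per-object over- and under-segmentation errors from region overlap, then area-weighted aggregation into a global indicator. The specific formulas below are one common choice, assumed for illustration, not necessarily the exact ones used in the paper.

```python
def region_errors(ref, seg):
    """Over-/under-segmentation of one reference geo-object against its
    best-overlapping segment; regions are sets of pixel indices."""
    best = max(seg, key=lambda s: len(ref & s))
    inter = len(ref & best)
    over = 1.0 - inter / len(ref)    # reference pixels the segment misses
    under = 1.0 - inter / len(best)  # segment pixels outside the reference
    return over, under

def global_error(refs, seg):
    """Area-weighted aggregation of the per-object errors."""
    total = sum(len(r) for r in refs)
    return sum(len(r) * sum(region_errors(r, seg)) / 2 for r in refs) / total

ref1, ref2 = set(range(10)), set(range(10, 20))   # two reference objects
seg = [set(range(8)), set(range(8, 20))]          # a segmentation to score
```

The per-object pairs are what an error map would display; the weighted sum is the scalar global indicator.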
Wang, Yuliang; Zhang, Zaicheng; Wang, Huimin; Bi, Shusheng
2015-01-01
Cell image segmentation plays a central role in numerous biology studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting more and more attention. In this study, an automated cell image segmentation algorithm is developed to improve cell boundary detection and segmentation of clustered cells for all cells in the field of view in negative phase contrast images. A new method that combines thresholding and an edge-based active contour method was proposed to optimize cell boundary detection. To segment clustered cells, the geographic peaks of cell light intensity were utilized to detect the number and locations of the clustered cells. In this paper, the working principles of the algorithms are described. The influence of the parameters in cell boundary detection and of the selected threshold value on the final segmentation results is investigated. Finally, the proposed algorithm is applied to negative phase contrast images from different experiments and its performance is evaluated. Results show that the proposed method achieves optimized cell boundary detection and highly accurate segmentation of clustered cells. PMID:26066315
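The clustered-cell step, counting intensity peaks to decide how many cells share one connected blob, reduces in one dimension to simple local-maximum detection. The profile and threshold below are hypothetical stand-ins for the paper's 2-D algorithm.

```python
import numpy as np

def count_cells(profile, min_height=0.5):
    """Count clustered cells as local maxima of an intensity profile
    taken across a clump of touching cells."""
    p = np.asarray(profile, dtype=float)
    return sum(1 for i in range(1, len(p) - 1)
               if p[i] > p[i - 1] and p[i] >= p[i + 1] and p[i] >= min_height)
```

For example, a profile with two bright humps separated by a dimmer junction is read as two cells.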
Sparse intervertebral fence composition for 3D cervical vertebra segmentation
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yang, Jian; Song, Shuang; Cong, Weijian; Jiao, Peifeng; Song, Hong; Ai, Danni; Jiang, Yurong; Wang, Yongtian
2018-06-01
Statistical shape models are capable of extracting shape prior information, and are usually utilized to assist the segmentation of medical images. However, such models require large training datasets in the case of multi-object structures, and it is also difficult to achieve satisfactory results for complex shapes. This study proposed a novel statistical model for cervical vertebra segmentation, called sparse intervertebral fence composition (SiFC), which can reconstruct the boundary between adjacent vertebrae by modeling intervertebral fences. The complex shape of the cervical spine is replaced by a simple intervertebral fence, which considerably reduces the difficulty of cervical segmentation. The final segmentation results are obtained by using a 3D active contour deformation model without shape constraint, which substantially enhances the recognition capability of the proposed method for objects with complex shapes. The proposed segmentation framework is tested on a dataset with CT images from 20 patients. A quantitative comparison against corresponding reference vertebral segmentations yields an overall mean absolute surface distance of 0.70 mm and a Dice similarity index of 95.47% for cervical vertebral segmentation. The experimental results show that the SiFC method achieves competitive cervical vertebral segmentation performance, and completely eliminates inter-process overlap.
Modification to area navigation equipment for instrument two-segment approaches
NASA Technical Reports Server (NTRS)
1975-01-01
A two-segment aircraft landing approach concept utilizing an area navigation (RNAV) system to execute the two-segment approach and eliminate the requirement for co-located distance measuring equipment (DME) was investigated. This concept permits non-precision approaches, down to appropriate minima, to runways not equipped with ILS systems. A hardware and software retrofit kit for the concept was designed, built, and tested on a DC-8-61 aircraft for flight evaluation. A two-segment approach profile and piloting procedure were also developed for that aircraft to provide an adequate safety margin under adverse weather, in the presence of system failures, and in the event of an abused approach. The two-segment approach procedure and equipment were demonstrated to line pilots under conditions representative of those encountered in air carrier service.
Themistocleous, Charalambos
2016-12-01
Although tonal alignment constitutes a quintessential property of pitch accents, its exact characteristics remain unclear. This study, by exploring the timing of the Cypriot Greek L*+H prenuclear pitch accent, examines the predictions of three hypotheses about tonal alignment: the invariance hypothesis, the segmental anchoring hypothesis, and the segmental anchorage hypothesis. The study reports on two experiments: the first of which manipulates the syllable patterns of the stressed syllable, and the second of which modifies the distance of the L*+H from the following pitch accent. The findings on the alignment of the low tone (L) are illustrative of the segmental anchoring hypothesis predictions: the L persistently aligns inside the onset consonant, a few milliseconds before the stressed vowel. However, the findings on the alignment of the high tone (H) are both intriguing and unexpected: the alignment of the H depends on the number of unstressed syllables that follow the prenuclear pitch accent. The 'wandering' of the H over multiple syllables is extremely rare among languages, and casts doubt on the invariance hypothesis and the segmental anchoring hypothesis, as well as indicating the need for a modified version of the segmental anchorage hypothesis. To address the alignment of the H, we suggest that it aligns within a segmental anchorage-the area that follows the prenuclear pitch accent-in such a way as to protect the paradigmatic contrast between the L*+H prenuclear pitch accent and the L+H* nuclear pitch accent.
Manufacture of a 1.7m prototype of the GMT primary mirror segments
NASA Astrophysics Data System (ADS)
Martin, H. M.; Burge, J. H.; Miller, S. M.; Smith, B. K.; Zehnder, R.; Zhao, C.
2006-06-01
We have nearly completed the manufacture of a 1.7 m off-axis mirror as part of the technology development for the Giant Magellan Telescope. The mirror is an off-axis section of a 5.3 m f/0.73 parent paraboloid, making it roughly a 1:5 model of the outer 8.4 m GMT segment. The 1.7 m mirror will be the primary mirror of the New Solar Telescope at Big Bear Solar Observatory. It has a 2.7 mm peak-to-valley departure from the best-fit sphere, presenting a serious challenge in terms of both polishing and measurement. The mirror was polished with a stressed lap, which bends actively to match the local curvature at each point on the mirror surface, and works for asymmetric mirrors as well as symmetric aspheres. It was measured using a hybrid reflective-diffractive null corrector to compensate for the mirror's asphericity. Both techniques will be applied in scaled-up versions to the GMT segments.
Adsorption of hairy particles with mobile ligands: Molecular dynamics and density functional study
NASA Astrophysics Data System (ADS)
Borówko, M.; Sokołowski, S.; Staszewski, T.; Pizio, O.
2018-01-01
We study models of hairy nanoparticles in contact with a hard wall. Each particle is built of a spherical core with a number of ligands attached to it, and each ligand is composed of several spherical, tangentially jointed segments. The number of segments is the same for all ligands. Particular models differ in the numbers of ligands and of segments per ligand, but the total number of segments is constant. Moreover, our model assumes that the ligands are tethered to the core in such a manner that they can "slide" over the core surface. Using molecular dynamics simulations we investigate the differences in the structure of the system close to the wall. In order to characterize the distribution of the ligands around the core, we have calculated the end-to-end distances of the ligands and the lengths and orientations of the mass dipoles. Additionally, we employed a density functional approach to obtain the density profiles. We have found that if the number of ligands is not too high, the proposed version of the theory is capable of predicting the structure of the system with reasonable accuracy.
Improvement in Recursive Hierarchical Segmentation of Data
NASA Technical Reports Server (NTRS)
Tilton, James C.
2006-01-01
A further modification has been made in the algorithm and implementing software reported in Modified Recursive Hierarchical Segmentation of Data (GSC-14681-1), NASA Tech Briefs, Vol. 30, No. 6 (June 2006), page 51. That software performs recursive hierarchical segmentation of data having spatial characteristics (e.g., spectral-image data). The output of a prior version of the software contained artifacts, including spurious segmentation-image regions bounded by processing-window edges. The modification for suppressing the artifacts, mentioned in the cited article, was the addition of a subroutine that analyzes data in the vicinities of seams to find pairs of regions that tend to lie adjacent to each other on opposite sides of the seams. Within each such pair, pixels in one region that are more similar to pixels in the other region are reassigned to the other region. The present modification provides a parameter ranging from 0 to 1 for controlling the relative priority of merges between spatially adjacent and spatially non-adjacent regions. At 1, spatially-adjacent-region and spatially-non-adjacent-region merges have equal priority. At 0, only spatially-adjacent-region merges (no spectral clustering) are allowed. Between 0 and 1, spatially-adjacent-region merges have priority over spatially-non-adjacent ones.
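One plausible reading of the 0-to-1 priority parameter is as a discount on non-adjacent (spectral-clustering) merges: their dissimilarity is divided by the parameter, so at 0 they are disabled and at 1 they compete on equal terms. This is an illustrative sketch of the control logic, an assumption of this note, not the actual RHSEG code.

```python
def best_merge(pairs, w):
    """Pick the next region merge. Each pair is (dissimilarity, adjacent).
    w = 1: adjacent and non-adjacent merges have equal priority;
    w = 0: only spatially adjacent merges are allowed."""
    candidates = [(cost, adjacent) for cost, adjacent in pairs if adjacent]
    if w > 0:  # non-adjacent merges enter with an inflated effective cost
        candidates += [(cost / w, adjacent)
                       for cost, adjacent in pairs if not adjacent]
    return min(candidates, key=lambda c: c[0]) if candidates else None
```

With one adjacent pair of cost 2.0 and one non-adjacent pair of cost 1.0, the non-adjacent merge wins at w = 1 but loses once w drops below 0.5.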
Automatic generation of pictorial transcripts of video programs
NASA Astrophysics Data System (ADS)
Shahraray, Behzad; Gibbon, David C.
1995-03-01
An automatic authoring system for the generation of pictorial transcripts of video programs which are accompanied by closed caption information is presented. A number of key frames, each of which represents the visual information in a segment of the video (i.e., a scene), are selected automatically by performing a content-based sampling of the video program. The textual information is recovered from the closed caption signal and is initially segmented based on its implied temporal relationship with the video segments. The text segmentation boundaries are then adjusted, based on lexical analysis and/or caption control information, to account for synchronization errors due to possible delays in the detection of scene boundaries or the transmission of the caption information. The closed caption text is further refined through linguistic processing for conversion to lowercase with correct capitalization. The key frames and the related text form a compact multimedia presentation of the contents of the video program which lends itself to efficient storage and transmission. This compact representation can be viewed on a computer screen, or used as input to a commercial text processing package to generate a printed version of the program.
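Content-based sampling of key frames can be sketched as keeping a frame whenever it differs sufficiently from the last kept frame. The mean-absolute-difference measure and threshold below are hypothetical stand-ins for the system's actual scene-change detector.

```python
import numpy as np

def key_frames(frames, thresh=0.2):
    """Content-based sampling: keep a frame when it differs enough
    from the most recently kept frame."""
    kept = [0]
    for i in range(1, len(frames)):
        if np.abs(frames[i] - frames[kept[-1]]).mean() > thresh:
            kept.append(i)
    return kept

clip = [np.zeros((4, 4)), np.zeros((4, 4)) + 0.05,  # near-duplicate frame
        np.ones((4, 4)), np.ones((4, 4))]           # scene change, then hold
```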
Hedonic analysis of the price of UHT-treated milk in Italy.
Bimbo, Francesco; Bonanno, Alessandro; Liu, Xuan; Viscecchia, Rosaria
2016-02-01
The Italian market for UHT milk has been growing thanks to both consumers' interest in products with an extended shelf life and to the lower prices of these products compared with refrigerated, pasteurized milk. However, because the lower prices of UHT milk can hinder producers' margins, manufacturers have introduced new versions of UHT milk products such as lactose-free options, vitamin-enriched products, and milk for infants, with the goal of differentiating their products, escaping the price competition, and gaining higher margins. In this paper, we estimated the contribution of different attributes to UHT milk prices in Italy by using a database of Italian UHT milk sales and a hedonic price model. In our analysis, we considered 2 UHT milk market segments: products for infants and those for the general population. We found premiums varied with the milk's attributes as well as between the segments analyzed: n-3 fatty acids, organic, and added calcium were the most valuable product features in the general population segment, whereas in the infant segment fiber, glass packaging, and the targeting of newborns delivered the highest premiums. Finally, we present recommendations for UHT milk manufacturers. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
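A hedonic price model is, at its core, a regression of price on product attributes, with each coefficient read as the implicit price of an attribute. The numbers below are invented to show the mechanics, not estimates from the paper's data.

```python
import numpy as np

# columns: intercept, organic, added n-3; rows: four hypothetical products
X = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 0, 1],
              [1, 1, 1]], dtype=float)
price = X @ np.array([1.00, 0.30, 0.20])   # noise-free hypothetical prices
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
# beta[1] and beta[2] are the implicit premiums for organic and n-3
```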
Freyer, Marcus; Ale, Angelique; Schulz, Ralf B; Zientkowska, Marta; Ntziachristos, Vasilis; Englmeier, Karl-Hans
2010-01-01
The recent development of hybrid imaging scanners that integrate fluorescence molecular tomography (FMT) and x-ray computed tomography (XCT) allows the utilization of x-ray information as image priors for improving optical tomography reconstruction. To fully capitalize on this capacity, we consider a framework for the automatic and fast detection of different anatomic structures in murine XCT images. To accurately differentiate between different structures such as bone, lung, and heart, a combination of image processing steps including thresholding, seed growing, and signal detection are found to offer optimal segmentation performance. The algorithm and its utilization in an inverse FMT scheme that uses priors is demonstrated on mouse images.
A semi-automatic method for left ventricle volume estimate: an in vivo validation study
NASA Technical Reports Server (NTRS)
Corsi, C.; Lamberti, C.; Sarti, A.; Saracino, G.; Shiota, T.; Thomas, J. D.
2001-01-01
This study aims to validate left ventricular (LV) volume estimates obtained by processing volumetric data with a segmentation model based on the level set technique. The validation was performed by comparing real-time volumetric echocardiographic data (RT3DE) with magnetic resonance imaging (MRI) data under a defined validation protocol. The protocol was applied to twenty-four estimates (range 61-467 ml) obtained from normal and pathologic subjects who underwent both RT3DE and MRI. A statistical analysis was performed on each estimate and on clinical parameters such as stroke volume (SV) and ejection fraction (EF). Taking the MRI estimates (x) as a reference, an excellent correlation was found with the volumes measured by the segmentation procedure (y) (y=0.89x + 13.78, r=0.98). The mean error on SV was 8 ml and the mean error on EF was 2%. This study demonstrated that the segmentation technique is reliably applicable to human hearts in clinical practice.
Eliciting Affect via Immersive Virtual Reality: A Tool for Adolescent Risk Reduction
Houck, Christopher D.; Barker, David H.; Garcia, Abbe Marrs; Spitalnick, Josh S.; Curtis, Virginia; Roye, Scott; Brown, Larry K.
2014-01-01
Objective A virtual reality environment (VRE) was designed to expose participants to substance use and sexual risk-taking cues to examine the utility of VR in eliciting adolescent physiological arousal. Methods 42 adolescents (55% male) with a mean age of 14.54 years (SD = 1.13) participated. Physiological arousal was examined through heart rate (HR), respiratory sinus arrhythmia (RSA), and self-reported somatic arousal. A within-subject design (neutral VRE, VR party, and neutral VRE) was utilized to examine changes in arousal. Results The VR party demonstrated an increase in physiological arousal relative to a neutral VRE. Examination of individual segments of the party (e.g., orientation, substance use, and sexual risk) demonstrated that HR was significantly elevated across all segments, whereas only the orientation and sexual risk segments demonstrated significant impact on RSA. Conclusions This study provides preliminary evidence that VREs can be used to generate physiological arousal in response to substance use and sexual risk cues. PMID:24365699
Hansen, J.D.; Landis, E.D.; Phillips, R.B.
2005-01-01
During the analysis of Ig superfamily members within the available rainbow trout (Oncorhynchus mykiss) EST gene index, we identified a unique Ig heavy-chain (IgH) isotype. cDNAs encoding this isotype are composed of a typical IgH leader sequence and a VDJ rearranged segment followed by four Ig superfamily C-1 domains represented as either membrane-bound or secretory versions. Because teleost fish were previously thought to encode and express only two IgH isotypes (IgM and IgD) for their humoral immune repertoire, we isolated all three cDNA isotypes from a single homozygous trout (OSU-142) to confirm that all three are indeed independent isotypes. Bioinformatic and phylogenetic analysis indicates that this previously undescribed divergent isotype is restricted to bony fish, thus we have named this isotype "IgT" (τ) for teleost fish. Genomic sequence analysis of an OSU-142 bacterial artificial chromosome (BAC) clone positive for all three IgH isotypes revealed that IgT utilizes the standard rainbow trout VH families, but surprisingly, the IgT isotype possesses its own exclusive set of DH and JH elements for the generation of diversity. The IgT D and J segments and τ constant (C) region genes are located upstream of the D and J elements for IgM, representing a genomic IgH architecture that has not been observed in any other vertebrate class. All three isotypes are primarily expressed in the spleen and pronephros (bone marrow equivalent), and ontogenically, expression of IgT is present 4 d before hatching in developing embryos. © 2005 by The National Academy of Sciences of the USA.
Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata
Chen, Yangzhou; Guo, Yuqi; Wang, Ying
2017-01-01
In this paper, in order to describe complex network systems, we first propose a general modeling framework that combines a dynamic graph with hybrid automata, which we name Dynamic Graph Hybrid Automata (DGHA). We then apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. In the modeling procedure, we adopt a dual digraph of the road network structure to describe the road topology, use linear hybrid automata to describe the multiple modes of the dynamic densities in road segments, and transform the nonlinear expressions for the traffic flow transmitted between two road segments into piecewise linear functions via multi-mode switching. This modeling procedure is modularized and rule-based, and thus easily extensible with the help of a combination algorithm for the dynamics of traffic flow; it can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology and size. Next we analyze the mode types and the number of modes in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where the set of observer gain matrices is computed using the Lyapunov function approach. As an example, we apply the PWALS model and the corresponding switched state observer to traffic flow over Beijing's third ring road. In order to clearly illustrate the principle of the proposed method and avoid computational complexity, we adopt a simplified version of the third ring road. Practical application to a large-scale road network will be implemented through decentralized modeling and distributed observer design in future research. PMID:28353664
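The CTM embedded in the DGHA can be illustrated by its core density update; the min(demand, supply, capacity) flow rule is exactly the kind of nonlinearity the paper linearizes into per-mode affine dynamics. The parameters and the periodic (ring) topology below are invented for illustration.

```python
import numpy as np

def ctm_step(rho, v=1.0, w=0.5, rho_jam=1.0, q_max=0.25, dt_dx=0.5):
    """One Cell Transmission Model update on a ring of road cells:
    the flow between cells is min(sending, receiving, capacity)."""
    demand = np.minimum(v * rho, q_max)               # what a cell can send
    supply = np.minimum(w * (rho_jam - rho), q_max)   # what a cell can accept
    q = np.minimum(demand, np.roll(supply, -1))       # flow from cell i to i+1
    return rho + dt_dx * (np.roll(q, 1) - q)          # conservation update

rho = np.array([0.2, 0.8, 0.1, 0.4])   # densities on a 4-cell ring road
rho_next = ctm_step(rho)               # vehicles are conserved on the ring
```

Each branch of the two `np.minimum` calls corresponds to one linear mode; enumerating the active branches per cell is what yields the piecewise affine (PWALS) form.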
ERMes: Open Source Simplicity for Your E-Resource Management
ERIC Educational Resources Information Center
Doering, William; Chilton, Galadriel
2009-01-01
ERMes, the latest version of an electronic resource management (ERM) system, is a relational database: content in different tables connects to, and works with, content in other tables. ERMes requires Access 2007 (Windows) or Access 2008 (Mac) to operate, as the database utilizes functionality not available in previous versions of Microsoft Access. The…
1990-09-01
1988). Current versions of the ADATS have CATE systems installed, but the software is still under development by the radar manufacturer, Contraves Italiana, a subcontractor to Martin Marietta (USA). Contraves Italiana will deliver the final version of the software to Martin Marietta in 1991. Until then
RELEASE NOTES FOR MODELS-3 VERSION 4.1 PATCH: SMOKE TOOL AND FILE CONVERTER
This software patch to the Models-3 system corrects minor errors in the Models-3 framework, provides substantial improvements in the ASCII to I/O API format conversion of the File Converter utility, and new functionalities for the SMOKE Tool. Version 4.1 of the Models-3 system...
Evaluating the Learning Process of Mechanical CAD Students
ERIC Educational Resources Information Center
Hamade, R. F.; Artail, H. A.; Jaber, M. Y.
2007-01-01
There is little theoretical or experimental research on how beginner-level trainees learn CAD skills in formal training sessions. This work presents findings on how trainees develop their skills in utilizing a solid mechanical CAD tool (Pro/Engineer version 2000i² and later version Wildfire). Exercises at the beginner and intermediate…
Modeled and Observed Altitude Distributions of the Micrometeoroid Influx in Radar Detection
NASA Astrophysics Data System (ADS)
Swarnalingam, N.; Janches, D.; Plane, J. M. C.; Carrillo-Sánchez, J. D.; Sternovsky, Z.; Pokorny, P.; Nesvorny, D.
2017-12-01
The altitude distributions of micrometeoroids are a representation of the radar response function of the incoming flux and thus can be utilized to calibrate radar measurements. These, in turn, can be used to determine the rate of ablation and ionization of the meteoroids and ultimately the input flux. During the ablation process, electrons are created, and these electrons subsequently produce backscatter when they encounter the signals transmitted from the radar. In this work, we investigate the altitude distribution by exploring different sizes as well as the aspect sensitivity of meteor head echoes. We apply an updated version of the Chemical Ablation Model (CABMOD), which includes results from laboratory simulation of meteor ablation for different metallic constituents; in particular, the updated version simulates the ablation of Na. In the updated version, electrons are produced over a wider altitude range, with peak production occurring at lower altitudes than in the previous version. The results are compared to head echo meteor observations from the Arecibo 430 MHz radar.
Recent Theoretical Advances in Analysis of AIRS/AMSU Sounding Data
NASA Technical Reports Server (NTRS)
Susskind, Joel
2007-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. This paper describes the AIRS Science Team Version 5.0 retrieval algorithm. Starting in early 2007, the Goddard DAAC will use this algorithm to analyze near real time AIRS/AMSU observations. These products are then made available to the scientific community for research purposes. The products include twice daily measurements of the Earth's three dimensional global temperature, water vapor, and ozone distribution as well as cloud cover. In addition, accurate twice daily measurements of the Earth's land and ocean temperatures are derived and reported. Scientists use this important set of observations for two major applications. They provide important information for climate studies of global and regional variability and trends of different aspects of the Earth's atmosphere. They also provide information for researchers to improve the skill of weather forecasting. A very important new product of the AIRS Version 5 algorithm is accurate case-by-case error estimates of the retrieved products. This heightens their utility for use in both weather and climate applications. These error estimates are also used directly for quality control of the retrieved products. Version 5 also allows for accurate quality controlled AIRS-only retrievals, called "Version 5 AO retrievals," which can be used as a backup methodology if AMSU fails. Examples of the accuracy of error estimates and quality controlled retrieval products of the AIRS/AMSU Version 5 and Version 5 AO algorithms are given, and shown to be significantly better than the previously used Version 4 algorithm. Assimilation of Version 5 retrievals is also shown to significantly improve forecast skill, especially when the case-by-case error estimates are utilized in the data assimilation process.
Mell, Matthew; Tefera, Girma; Thornton, Frank; Siepman, David; Turnipseed, William
2007-03-01
The diagnostic accuracy of magnetic resonance angiography (MRA) in the infrapopliteal arterial segment is not well defined. This study evaluated the clinical utility and diagnostic accuracy of time-resolved imaging of contrast kinetics (TRICKS) MRA compared with digital subtraction contrast angiography (DSA) in planning for percutaneous interventions of popliteal and infrapopliteal arterial occlusive disease. Patients who underwent percutaneous lower extremity interventions for popliteal or tibial occlusive disease were identified for this study. Preprocedural TRICKS MRA was performed with 1.5 Tesla (GE Healthcare, Waukesha, Wis) magnetic resonance imaging scanners with a flexible peripheral vascular coil, using the TRICKS technique with gadodiamide injection. DSA was performed using standard techniques in an angiography suite with a 15-inch image intensifier. DSA was considered the gold standard. The MRA and DSA were then evaluated in a blinded fashion by a radiologist and a vascular surgeon. The popliteal artery and tibioperoneal trunk were evaluated separately, and the tibial arteries were divided into proximal, mid, and distal segments. Each segment was interpreted as normal (0% to 49% stenosis), stenotic (50% to 99% stenosis), or occluded (100%). Lesion morphology was classified according to the TransAtlantic Inter-Society Consensus (TASC). We calculated concordance between the imaging studies and the sensitivity and specificity of MRA. The clinical utility of MRA was also assessed in terms of identifying the arterial access site as well as predicting technical success of the percutaneous treatment. Comparisons were done on 150 arterial segments in 30 limbs of 27 patients. When evaluated by TASC classification, TRICKS MRA correlated with DSA in 83% of the popliteal and in 88% of the infrapopliteal segments.
MRA correctly identified significant disease of the popliteal artery with a sensitivity of 94% and a specificity of 92%, and of the tibial arteries with a sensitivity of 100% and a specificity of 84%. When used to evaluate for stenosis vs occlusion, MRA interpretation agreed with DSA 90% of the time. Disagreement occurred in 15 arterial segments, most commonly in the distal tibioperoneal arteries. MRA mistook occlusion for stenosis in 11 of 15 segments, and stenosis for occlusion in 4 of 15 segments. Arterial access was accurately planned based on preprocedural MRA findings in 29 of 30 patients. MRA predicted technical success 83% of the time. Five technical failures were due to inability to cross arterial occlusions, all accurately identified by MRA. TRICKS MRA is an accurate method of evaluating patients for popliteal and infrapopliteal arterial occlusive disease and can be used for planning percutaneous interventions.
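The sensitivity and specificity figures reported above follow directly from a 2x2 confusion table with DSA as the reference standard. A minimal sketch (the per-segment counts below are hypothetical, chosen only to reproduce the popliteal percentages; they are not given in the abstract):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity (true-positive rate) and specificity (true-negative
    rate) from 2x2 confusion-matrix counts, with DSA serving as truth."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts chosen to reproduce the popliteal figures above:
sens, spec = sensitivity_specificity(tp=15, fn=1, tn=23, fp=2)
print(round(sens, 2), round(spec, 2))  # 0.94 0.92
```

The same two-line computation, applied per arterial segment class, yields each sensitivity/specificity pair quoted in the abstract.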
Object Segmentation Methods for Online Model Acquisition to Guide Robotic Grasping
NASA Astrophysics Data System (ADS)
Ignakov, Dmitri
A vision system is an integral component of many autonomous robots. It enables the robot to perform essential tasks such as mapping, localization, or path planning. A vision system also assists with guiding the robot's grasping and manipulation tasks. As an increased demand is placed on service robots to operate in uncontrolled environments, advanced vision systems must be created that can function effectively in visually complex and cluttered settings. This thesis presents the development of segmentation algorithms to assist in online model acquisition for guiding robotic manipulation tasks. Specifically, the focus is placed on localizing door handles to assist in robotic door opening, and on acquiring partial object models to guide robotic grasping. First, a method for localizing a door handle of unknown geometry based on a proposed 3D segmentation method is presented. Following segmentation, localization is performed by fitting a simple box model to the segmented handle. The proposed method functions without requiring assumptions about the appearance of the handle or the door, and without a geometric model of the handle. Next, an object segmentation algorithm is developed, which combines multiple appearance (intensity and texture) and geometric (depth and curvature) cues. The algorithm is able to segment objects without utilizing any a priori appearance or geometric information in visually complex and cluttered environments. The segmentation method is based on the Conditional Random Fields (CRF) framework, and the graph cuts energy minimization technique. A simple and efficient method for initializing the proposed algorithm which overcomes graph cuts' reliance on user interaction is also developed. Finally, an improved segmentation algorithm is developed which incorporates a distance metric learning (DML) step as a means of weighing various appearance and geometric segmentation cues, allowing the method to better adapt to the available data. 
The improved method also models the distribution of 3D points in space as a distribution of algebraic distances from an ellipsoid fitted to the object, improving the method's ability to predict which points are likely to belong to the object or the background. Experimental validation of all methods is performed. Each method is evaluated in a realistic setting, utilizing scenarios of various complexities. Experimental results have demonstrated the effectiveness of the handle localization method, and the object segmentation methods.
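The CRF-plus-graph-cuts approach described above works by minimizing an energy that trades off per-pixel label costs against a smoothness penalty between neighbouring pixels that disagree. A minimal sketch of evaluating such an energy (a simple Potts model; the costs, pixels, and weight are illustrative, not taken from the thesis):

```python
def crf_energy(labels, unary, pairs, w):
    """Energy of a labelling under a Potts-model CRF: the data term
    (per-pixel cost of the chosen label) plus a smoothness penalty w
    for each neighbouring pair whose labels disagree."""
    data = sum(unary[i][lab] for i, lab in enumerate(labels))
    smooth = sum(w for i, j in pairs if labels[i] != labels[j])
    return data + smooth

# Three pixels, labels 0 = background / 1 = object; costs are made up:
unary = [[0.2, 1.0], [0.4, 0.9], [1.0, 0.1]]
pairs = [(0, 1), (1, 2)]  # 4-neighbour adjacencies
print(round(crf_energy([0, 0, 1], unary, pairs, w=0.5), 2))  # 1.2
```

Graph cuts finds the labelling minimizing this energy exactly for two labels; the thesis's contribution lies in how the unary and pairwise terms are built from appearance and geometric cues.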
Advanced X-ray Imaging Crystal Spectrometer for Magnetic Fusion Tokamak Devices
NASA Astrophysics Data System (ADS)
Lee, S. G.; Bak, J. G.; Bog, M. G.; Nam, U. W.; Moon, M. K.; Cheon, J. K.
2008-03-01
An advanced X-ray imaging crystal spectrometer (XICS) is currently under development for burning plasma diagnostics, using a segmented position-sensitive detector and time-to-digital converter (TDC) based delay-line readout electronics. The proposed advanced XICS utilizes an eight-segment position-sensitive multi-wire proportional counter and supporting electronics to improve spectrometer performance, including photon count-rate capability and spatial resolution.
Solar Advisor Model User Guide for Version 2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilman, P.; Blair, N.; Mehos, M.
2008-08-01
The Solar Advisor Model (SAM) provides a consistent framework for analyzing and comparing power system costs and performance across the range of solar technologies and markets, from photovoltaic systems for residential and commercial markets to concentrating solar power and large photovoltaic systems for utility markets. This manual describes Version 2.0 of the software, which can model photovoltaic and concentrating solar power technologies for electric applications for several markets. The current version of the Solar Advisor Model does not model solar heating and lighting technologies.
Long-term reliability of ImPACT in professional ice hockey.
Echemendia, Ruben J; Bruce, Jared M; Meeuwisse, Willem; Comper, Paul; Aubry, Mark; Hutchison, Michael
2016-02-01
This study sought to assess the test-retest reliability of Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) across 2-4 year time intervals and evaluate the utility of a newly proposed two-factor (Speed/Memory) model of ImPACT across multiple language versions. Test-retest data were collected from non-concussed National Hockey League (NHL) players across 2-, 3-, and 4-year time intervals. The two-factor model was examined using different language versions (English, French, Czech, Swedish) of the test using a one-year interval, and across 2-4 year intervals using the English version of the test. The two-factor Speed index improved reliability across multiple language versions of ImPACT. The Memory factor also improved but reliability remained below the traditional cutoff of .70 for use in clinical decision-making. ImPACT reliabilities remained low (below .70) regardless of whether the four-composite or the two-factor model was used across 2-, 3-, and 4-year time intervals. The two-factor approach increased ImPACT's one-year reliability over the traditional four-composite model among NHL players. The increased stability in test scores improves the test's ability to detect cognitive changes following injury, which increases the diagnostic utility of the test and allows for better return to play decision-making by reducing the risk of exposing an athlete to additional trauma while the brain may be at a heightened vulnerability to such trauma. Although the Speed Index increases the clinical utility of the test, the stability of the Memory index remains low. Irrespective of whether the two-factor or traditional four-composite approach is used, these data suggest that new baselines should occur on a yearly basis in order to maximize clinical utility.
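Test-retest reliability of the kind discussed above is typically quantified as the correlation between baseline and retest scores, compared against the conventional .70 cutoff. A minimal sketch (the scores below are made up for illustration; this is not ImPACT data):

```python
def pearson_r(x, y):
    """Pearson correlation between paired score lists; the quantity
    compared against the .70 reliability cutoff mentioned above."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Made-up baseline vs. 2-year retest composite scores:
baseline = [85, 90, 78, 92, 88, 76]
retest = [80, 93, 75, 90, 91, 70]
r = pearson_r(baseline, retest)
print(r > 0.70)  # True
```

A coefficient below .70 (as the abstract reports for the Memory index over multi-year intervals) means too much score drift for confident clinical interpretation, which is why yearly re-baselining is recommended.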
Ghezeljeh, Tahereh Najafi; Ardebili, Fatimah Mohades; Rafii, Forough; Hagani, Hamid
2013-09-01
Burn as a traumatic life incident manifests severe pain and psychological problems. Specific instruments are needed to evaluate burn patients' psychological issues related to the injury. The aim of this study was to translate and evaluate the reliability and validity of the Persian versions of the Burn Specific Pain Anxiety Scale (BSPAS) and the Impact of Event Scale (IES). In this cross-sectional study, a convenience sampling method was utilized to select 55 Iranian hospitalized burn patients. Combined translation was utilized for translating the scales. Cronbach's alpha, item-total correlations, and convergent and discriminative validity were evaluated. Cronbach's α for both the BSPAS- and IES-Persian versions was 0.96. Item-total correlation coefficients ranged from 0.70 to 0.90. Convergent construct validity was confirmed by high correlations between the scales designed to measure the same concepts. The mean score of the BSPAS- and IES-Persian versions was lower for individuals with a lower TBSA burn percentage, which supported the discriminative construct validity of the scales. The BSPAS- and IES-Persian versions showed high internal consistency and good validity for the assessment of psychological outcomes in hospitalized burn patients. Future studies are needed to determine the repeatability, factor structure, sensitivity and specificity of the scales. Copyright © 2013 Elsevier Ltd and ISBI. All rights reserved.
An Efficient Pipeline for Abdomen Segmentation in CT Images.
Koyuncu, Hasan; Ceylan, Rahime; Sivri, Mesut; Erdogan, Hasan
2018-04-01
Computed tomography (CT) scans usually include some disadvantages due to the nature of the imaging procedure, and these handicaps prevent accurate abdomen segmentation. Discontinuous abdomen edges, the bed section of the CT, patient information, closeness between the edges of the abdomen and the CT, poor contrast, and a narrow histogram can be regarded as the most important handicaps that occur in abdominal CT scans. Currently, one or more handicaps can arise and prevent technicians from obtaining abdomen images through simple segmentation techniques. In other words, CT scans can include the bed section of the CT, a patient's diagnostic information, low-quality abdomen edges, low-level contrast, and a narrow histogram, all in one scan. These phenomena constitute a challenge, and an efficient pipeline that is unaffected by handicaps is required. In addition, analyses such as segmentation, feature selection, and classification are meaningful for a real-time diagnosis system in cases where the abdomen section is directly used with a specific size. A statistical pipeline is designed in this study that is unaffected by the handicaps mentioned above. Intensity-based approaches, morphological processes, and histogram-based procedures are utilized to design an efficient structure. Performance evaluation is realized in experiments on 58 CT images (16 training, 16 test, and 26 validation) that include the abdomen and one or more disadvantage(s). The first part of the data (16 training images) is used to detect the pipeline's optimum parameters, while the second and third parts are utilized to evaluate and to confirm the segmentation performance. The segmentation results are presented as the means of six performance metrics.
Thus, the proposed method achieves remarkable average rates for training/test/validation of 98.95/99.36/99.57% (Jaccard), 99.47/99.67/99.79% (Dice), 100/99.91/99.91% (sensitivity), 98.47/99.23/99.85% (specificity), 99.38/99.63/99.87% (classification accuracy), and 98.98/99.45/99.66% (precision). In summary, a statistical pipeline performing the task of abdomen segmentation is achieved that is not affected by the disadvantages, and the most detailed abdomen segmentation study is performed for use before organ and tumor segmentation, feature extraction, and classification.
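The six performance metrics reported above can all be derived from the per-image confusion counts between a predicted mask and a ground-truth mask. A minimal sketch (the toy masks are illustrative only):

```python
def segmentation_scores(pred, truth, n_pixels):
    """Confusion counts and the six metrics above, for binary masks
    given as sets of (row, col) foreground pixel coordinates."""
    tp = len(pred & truth)
    fp = len(pred - truth)
    fn = len(truth - pred)
    tn = n_pixels - tp - fp - fn
    return {
        "jaccard": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / n_pixels,
        "precision": tp / (tp + fp),
    }

# Toy 4x4 image: truth is the left 2-column block; prediction misses one pixel.
truth = {(r, c) for r in range(4) for c in range(2)}
pred = truth - {(3, 1)}
scores = segmentation_scores(pred, truth, n_pixels=16)
print(round(scores["dice"], 3), round(scores["jaccard"], 3))  # 0.933 0.875
```

Averaging these per-image dictionaries over the training, test, and validation splits produces tables of the form quoted in the abstract.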
Upgrading Custom Simulink Library Components for Use in Newer Versions of Matlab
NASA Technical Reports Server (NTRS)
Stewart, Camiren L.
2014-01-01
The Spaceport Command and Control System (SCCS) at Kennedy Space Center (KSC) is a control system for monitoring and launching manned launch vehicles. Simulations of ground support equipment (GSE) and the launch vehicle systems are required throughout the life cycle of SCCS to test software, hardware, and procedures to train the launch team. The simulations of the GSE at the launch site, in conjunction with off-line processing locations, are developed using Simulink, a piece of Commercial Off-The-Shelf (COTS) software. The simulations that are built are then converted into code and run in a simulation engine called Trick, a Government Off-The-Shelf (GOTS) piece of software developed by NASA. In the world of hardware and software, it is not uncommon to see the products that are utilized be upgraded and patched or eventually fade away into an obsolete status. In the case of SCCS simulation software, MathWorks has released a number of stable versions of Simulink since the deployment of the software on the Development Work Stations in the Linux environment (DWLs). The upgraded versions of Simulink have introduced a number of new tools and resources that, if utilized fully and correctly, will save time and resources during the overall development of the GSE simulation and its correlating documentation. Unfortunately, simply importing the already built simulations into the new Matlab environment will not suffice, as it can produce results that differ from those of the version currently being utilized. Thus, an upgrade execution plan was developed and executed to fully upgrade the simulation environment to one of the latest versions of Matlab.
Della, Lindsay J; DeJoy, David M; Lance, Charles E
2008-01-01
Fruit and vegetable consumption affects the etiology of cardiovascular disease as well as many different types of cancers. Still, Americans' consumption of fruit and vegetables is low. This article builds on initial research that assessed the validity of using a consumer-based psychographic audience segmentation in tandem with the theory of planned behavior to explain differences among individuals' consumption of fruit and vegetables. In this article, we integrate the findings from our initial analyses with media and purchase data from each audience segment. We then propose distinct, tailored program suggestions for reinventing social marketing programs focused on increasing fruit and vegetable consumption in each segment. Finally, we discuss the implications of utilizing a consumer-based psychographic audience segmentation versus a more traditional readiness-to-change social marketing segmentation. Differences between these two segmentation strategies, such as the ability to access media usage and purchase data, are highlighted and discussed.
[RSF model optimization and its application to brain tumor segmentation in MRI].
Cheng, Zhaoning; Song, Zhijian
2013-04-01
Magnetic resonance imaging (MRI) is usually obscure and non-uniform in gray level, and the tumors inside are poorly circumscribed; hence, automatic tumor segmentation in MRI is very difficult. The region-scalable fitting (RSF) energy model is a new segmentation approach for some images with uneven grayscale. However, the level set formulation (LSF) of the RSF model is not suitable for environments with different grey level distributions inside and outside the initial contour, and the complex intensity environment of MRI always makes it hard to get ideal segmentation results. Therefore, we improved the model with a new LSF and combined it with the mean shift method, which is helpful for tumor segmentation and has better convergence and target direction. The proposed method has been utilized in a series of studies on real MRI images, and the results showed that it could realize fast, accurate and robust segmentation of brain tumors in MRI, which has great clinical significance.
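The mean shift step combined with the RSF level set above seeks the mode of a local density. In one dimension with a flat kernel the idea reduces to a few lines; a minimal sketch (the data, bandwidth, and starting point are illustrative, not from the paper):

```python
def mean_shift_1d(points, x0, bandwidth=2.0, iters=50):
    """Flat-kernel mean shift: repeatedly move x to the mean of all
    points within `bandwidth` of it, converging to a local density mode."""
    x = x0
    for _ in range(iters):
        window = [p for p in points if abs(p - x) <= bandwidth]
        if not window:
            break
        new_x = sum(window) / len(window)
        if abs(new_x - x) < 1e-9:
            break
        x = new_x
    return x

# Two intensity clusters; started near the left one, x settles at its mean.
data = [1.0, 1.2, 0.8, 1.1, 9.0, 9.3, 8.9]
print(mean_shift_1d(data, x0=1.5))  # ≈ 1.025
```

Applied to image intensities, this mode-seeking behaviour pulls ambiguous pixels toward the dominant local grey level, which is what makes it a useful companion to the level set evolution.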
Reshmi, S K; Sudha, M L; Shashirekha, M N
2017-12-01
The present investigation was undertaken to develop paranthas suiting the diabetic population with added health benefits. Paranthas were prepared using fresh and dry segments of pomelo. The increase in the concentration of segments decreased the texture value from 1080 to 1022 g force (fresh segments) and from 1005 to 870 g force (dry segments). Naringin, along with other bioactive compounds, was retained to a greater extent in paranthas containing dry pomelo fruit segments. Paranthas prepared using 20% (fresh) and 5% (dry) segments were sensorily acceptable. The pomelo-incorporated paranthas had higher levels of resistant starch fractions (12.94%) with a low predicted glycemic index (49.89%) compared with control paranthas (5.54 and 58.64%, respectively). The fortified paranthas, with a considerable content of bioactive compounds and a low glycemic index, indicate the possibility of using them as a dietary supplement. Thus, pomelo fortification helps in improving the nutritional and functional properties of paranthas, suiting the diabetic as well as the general population.
Compound image segmentation of published biomedical figures.
Li, Pengyuan; Jiang, Xiangying; Kambhamettu, Chandra; Shatkay, Hagit
2018-04-01
Images convey essential information in biomedical publications. As such, there is a growing interest within the bio-curation and bio-database communities to store images within publications as evidence for biomedical processes and for experimental results. However, many of the images in biomedical publications are compound images consisting of multiple panels, where each individual panel potentially conveys a different type of information. Segmenting such images into constituent panels is an essential first step toward utilizing images. In this article, we develop a new compound image segmentation system, FigSplit, which is based on Connected Component Analysis. To overcome shortcomings typically manifested by existing methods, we develop a quality assessment step for evaluating and modifying segmentations. Two methods are proposed to re-segment the images if the initial segmentation is inaccurate. Experimental results show the effectiveness of our method compared with other methods. The system is publicly available for use at https://www.eecis.udel.edu/~compbio/FigSplit. The code is available upon request (shatkay@udel.edu). Supplementary data are available at Bioinformatics online.
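Connected Component Analysis, the stated basis of FigSplit, groups foreground pixels into maximal connected regions, each candidate panel being one component. A minimal sketch over a binary mask (the mask and the 4-connectivity choice are illustrative; FigSplit's actual pipeline is more involved):

```python
from collections import deque

def connected_components(grid):
    """4-connected component labelling of a binary grid (1 = foreground).
    Returns a list of components, each a set of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    seen, comps = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                comp, queue = set(), deque([(r, c)])
                seen.add((r, c))
                while queue:
                    cr, cc = queue.popleft()
                    comp.add((cr, cc))
                    for nr, nc in ((cr - 1, cc), (cr + 1, cc),
                                   (cr, cc - 1), (cr, cc + 1)):
                        if 0 <= nr < rows and 0 <= nc < cols \
                                and grid[nr][nc] == 1 and (nr, nc) not in seen:
                            seen.add((nr, nc))
                            queue.append((nr, nc))
                comps.append(comp)
    return comps

# Two separate "panels" in a toy figure mask:
mask = [[1, 1, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 1]]
print(len(connected_components(mask)))  # 2
```

Each returned component's bounding box is a candidate panel; the quality-assessment and re-segmentation steps described above then decide whether components must be merged or split further.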
Multi-class segmentation of neuronal electron microscopy images using deep learning
NASA Astrophysics Data System (ADS)
Khobragade, Nivedita; Agarwal, Chirag
2018-03-01
Study of the connectivity of neural circuits is an essential step towards a better understanding of the functioning of the nervous system. With the recent improvement in imaging techniques, high-resolution and high-volume images are being generated, requiring automated segmentation techniques. We present a pixel-wise classification method based on the Bayesian SegNet architecture. We carried out multi-class segmentation on serial section Transmission Electron Microscopy (ssTEM) images of Drosophila third instar larva ventral nerve cord, labeling the four classes of neuron membranes, neuron intracellular space, mitochondria, and glia/extracellular space. Bayesian SegNet was trained using 256 ssTEM images of 256 x 256 pixels and tested on 64 different ssTEM images of the same size, from the same serial stack. Due to high class imbalance, we used a class-balanced version of Bayesian SegNet by re-weighting each class based on its relative frequency. We achieved an overall accuracy of 93% and a mean class accuracy of 88% for pixel-wise segmentation using this encoder-decoder approach. On evaluating the segmentation results using similarity metrics such as SSIM and the Dice coefficient, we obtained scores of 0.994 and 0.886, respectively. Additionally, we used the network trained on the 256 ssTEM images of Drosophila third instar larva for multi-class labeling of the ISBI 2012 challenge ssTEM dataset.
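The abstract says each class was re-weighted by its relative frequency; one common scheme for this in SegNet-style training is median-frequency balancing, where weight_c = median_freq / freq_c. A minimal sketch (the pixel counts are invented, and the exact weighting used in the paper may differ):

```python
from statistics import median

def class_weights(pixel_counts):
    """Median-frequency balancing: weight_c = median_freq / freq_c,
    so rare classes weigh more than frequent ones in the loss."""
    total = sum(pixel_counts.values())
    freqs = {c: n / total for c, n in pixel_counts.items()}
    med = median(freqs.values())
    return {c: med / f for c, f in freqs.items()}

# Invented pixel counts for the four classes named in the abstract:
counts = {"membrane": 1000, "intracellular": 6000,
          "mitochondria": 500, "glia_extracellular": 2500}
w = class_weights(counts)
print(round(w["mitochondria"], 2), round(w["intracellular"], 2))  # 3.5 0.29
```

Scaling each class's cross-entropy term by its weight prevents the dominant intracellular-space class from swamping the rare mitochondria class during training.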
B lymphocyte selection and age-related changes in VH gene usage in mutant Alicia rabbits.
Zhu, X; Boonthum, A; Zhai, S K; Knight, K L
1999-09-15
Young Alicia rabbits use VHa-negative genes, VHx and VHy, in most VDJ genes, and their serum Ig is VHa negative. However, as Alicia rabbits age, VHa2 allotype Ig is produced at high levels. We investigated which VH gene segments are used in the VDJ genes of a2 Ig-secreting hybridomas and of a2 Ig+ B cells from adult Alicia rabbits. We found that 21 of the 25 VDJ genes used the a2-encoding genes, VH4 or VH7; the other four VDJ genes used four unknown VH gene segments. Because VH4 and VH7 are rarely found in VDJ genes of normal or young Alicia rabbits, we investigated the timing of rearrangement of these genes in Alicia rabbits. During fetal development, VH4 was used in 60-80% of nonproductively rearranged VDJ genes, and VHx and VHy together were used in 10-26%. These data indicate that during B lymphopoiesis VH4 is preferentially rearranged. However, the percentage of productive VHx- and VHy-utilizing VDJ genes increased from 38% at day 21 of gestation to 89% at birth (gestation day 31), whereas the percentage of VH4-utilizing VDJ genes remained at 15%. These data suggest that during fetal development, either VH4-utilizing B-lineage cells are selectively eliminated, or B cells with VHx- and VHy-utilizing VDJ genes are selectively expanded, or both. The accumulation of peripheral VH4-utilizing a2 B cells with age indicates that these B cells might be selectively expanded in the periphery. We discuss the possible selection mechanisms that regulate VH gene segment usage in rabbit B cells during lymphopoiesis and in the periphery.
Multiresolution saliency map based object segmentation
NASA Astrophysics Data System (ADS)
Yang, Jian; Wang, Xin; Dai, ZhenYou
2015-11-01
Salient object detection and segmentation have gained increasing research interest in recent years. A saliency map can be obtained from different models presented in previous studies. Based on this saliency map, the most salient region (MSR) in an image can be extracted. This MSR, generally a rectangle, can be used to initialize parameters for object segmentation algorithms. However, to our knowledge, all of those saliency maps are represented at a single resolution, although some models have even introduced multiscale principles in the calculation process. Furthermore, some segmentation methods, such as the well-known GrabCut algorithm, need more iteration time or additional interactions to get more precise results without predefined pixel types. A concept of a multiresolution saliency map is introduced. This saliency map is provided in a multiresolution format, which naturally follows the principle of the human visual mechanism. Moreover, the points in this map can be utilized to initialize parameters for GrabCut segmentation by labeling the feature pixels automatically. Both the computing speed and segmentation precision are evaluated. The results imply that this multiresolution saliency map based object segmentation method is simple and efficient.
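A multiresolution saliency map as described above can be pictured as a coarse-to-fine pyramid of the same map. A minimal sketch using 2x2 block averaging for downsampling (the map values and pyramid depth are illustrative; the paper's construction may differ):

```python
def downsample(img):
    """Halve resolution by averaging 2x2 blocks (assumes even dimensions)."""
    return [[(img[2 * r][2 * c] + img[2 * r][2 * c + 1] +
              img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]) / 4
             for c in range(len(img[0]) // 2)]
            for r in range(len(img) // 2)]

def saliency_pyramid(sal_map, levels=3):
    """Represent a saliency map at several resolutions, fine to coarse."""
    pyramid = [sal_map]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    return pyramid

sal = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 8, 9, 0],
       [0, 0, 0, 0]]
pyr = saliency_pyramid(sal, levels=2)
print(pyr[1])  # [[2.25, 2.0], [2.0, 2.25]]
```

Coarse levels cheaply locate the salient region, while the fine level supplies the automatic foreground/background pixel labels used to seed GrabCut.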
SIPSMetGen: It's Not Just For Aircraft Data and ECS Anymore.
NASA Astrophysics Data System (ADS)
Schwab, M.
2015-12-01
The SIPSMetGen utility, developed for the NASA EOSDIS project under the EED contract, simplified the creation of file level metadata for the ECS system. The utility has been enhanced for ease of use, efficiency, speed, and increased flexibility. The SIPSMetGen utility was originally created as a means of generating file level spatial metadata for Operation IceBridge. The first version created only ODL metadata, specific for ingest into ECS. The core strength of the utility was, and continues to be, its ability to take complex shapes and patterns of data collection point clouds from aircraft flights and simplify them to a relatively simple concave hull geo-polygon. It has been found to be a useful and easy to use tool for creating file level metadata for many other missions, both aircraft and satellite. While the original version was useful, it had its limitations. In 2014 Raytheon was tasked to make enhancements to SIPSMetGen; this resulted in a new version of SIPSMetGen that can create ISO-compliant XML metadata, optimizes and streamlines the algorithm for creating the spatial metadata, runs more quickly with more consistent results, and can be configured to run multi-threaded on systems with multiple processors. The utility comes with a Java-based graphical user interface to aid in configuring and running the utility. The enhanced SIPSMetGen allows more diverse data sets to be archived with file level metadata. The advantage of archiving data with file level metadata is that it makes it easier for data users and scientists to find relevant data. File level metadata unlocks the power of existing archives and metadata repositories such as ECS and CMR, and search and discovery utilities like Reverb and Earth Data Search. Current missions now using SIPSMetGen include Aquarius, MEaSUREs, ARISE, and Nimbus.
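SIPSMetGen's core step reduces a flight-track point cloud to a hull geo-polygon. A concave hull needs extra machinery, but a convex hull conveys the essential idea of collapsing thousands of points to a few boundary vertices; a minimal sketch using Andrew's monotone chain (the track points are synthetic, and real geo-coordinates would need projection handling):

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull of 2D points, returned in
    counter-clockwise order without interior or collinear points."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:  # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):  # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# A dense synthetic "flight track" collapses to its 4 corner vertices:
track = [(x, y) for x in range(10) for y in range(3)]
hull = convex_hull(track)
print(sorted(hull))  # [(0, 0), (0, 2), (9, 0), (9, 2)]
```

A concave hull, as SIPSMetGen computes, additionally caves the boundary inward where the track leaves large empty regions, yielding a tighter spatial-search polygon for the archive.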
VoxResNet: Deep voxelwise residual networks for brain segmentation from 3D MR images.
Chen, Hao; Dou, Qi; Yu, Lequan; Qin, Jing; Heng, Pheng-Ann
2018-04-15
Segmentation of key brain tissues from 3D medical images is of great significance for brain disease diagnosis, progression assessment and monitoring of neurologic conditions. While manual segmentation is time-consuming, laborious, and subjective, automated segmentation is quite challenging due to the complicated anatomical environment of brain and the large variations of brain tissues. We propose a novel voxelwise residual network (VoxResNet) with a set of effective training schemes to cope with this challenging problem. The main merit of residual learning is that it can alleviate the degradation problem when training a deep network so that the performance gains achieved by increasing the network depth can be fully leveraged. With this technique, our VoxResNet is built with 25 layers, and hence can generate more representative features to deal with the large variations of brain tissues than its rivals using hand-crafted features or shallower networks. In order to effectively train such a deep network with limited training data for brain segmentation, we seamlessly integrate multi-modality and multi-level contextual information into our network, so that the complementary information of different modalities can be harnessed and features of different scales can be exploited. Furthermore, an auto-context version of the VoxResNet is proposed by combining the low-level image appearance features, implicit shape information, and high-level context together for further improving the segmentation performance. Extensive experiments on the well-known benchmark (i.e., MRBrainS) of brain segmentation from 3D magnetic resonance (MR) images corroborated the efficacy of the proposed VoxResNet. Our method achieved the first place in the challenge out of 37 competitors including several state-of-the-art brain segmentation methods. 
Our method is inherently general and can be readily applied as a powerful tool to many brain-related studies, where accurate segmentation of brain structures is critical. Copyright © 2017 Elsevier Inc. All rights reserved.
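The residual learning that VoxResNet builds on has a simple core identity: each block outputs its input plus a learned transform of it, so a block whose transform is near zero degrades gracefully to an identity mapping. A minimal sketch (plain vectors stand in for feature maps; no actual network is implemented):

```python
def residual_block(x, transform):
    """Core identity of residual learning: output = input + F(input).
    When the learned transform F is ~zero the block reduces to identity,
    which is what eases optimisation of very deep (e.g. 25-layer) nets."""
    fx = transform(x)
    return [xi + fi for xi, fi in zip(x, fx)]

# With a zero transform the block is an exact identity mapping:
x = [0.5, -1.0, 2.0]
print(residual_block(x, lambda v: [0.0] * len(v)))  # [0.5, -1.0, 2.0]

# Stacking blocks composes small refinements rather than full remappings:
y = residual_block(x, lambda v: [0.1 * vi for vi in v])
```

This is why depth can be increased (here to 25 layers) without the degradation that plagues plain deep networks: gradients flow through the additive identity path.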
NASA Astrophysics Data System (ADS)
Hatze, Herbert; Baca, Arnold
1993-01-01
The development of noninvasive techniques for the determination of biomechanical body segment parameters (volumes, masses, the three principal moments of inertia, the three local coordinates of the segmental mass centers, etc.) receives increasing attention from the medical sciences (e.g., orthopaedic gait analysis), bioengineering, sport biomechanics, and the various space programs. In the present paper, a novel method is presented for determining body segment parameters rapidly and accurately. It is based on the video-image processing of four different body configurations and a finite mass-element human body model. The four video images of the subject in question are recorded against a black background, thus permitting the application of shape recognition procedures incorporating edge detection and calibration algorithms. In this way, a total of 181 object space dimensions of the subject's body segments can be reconstructed and used as anthropometric input data for the mathematical finite mass-element body model. The latter comprises 17 segments (abdomino-thoracic, head-neck, shoulders, upper arms, forearms, hands, abdomino-pelvic, thighs, lower legs, feet) and enables the user to compute all the required segment parameters for each of the 17 segments by means of the associated computer program. The hardware requirements are an IBM-compatible PC (1 MB memory) operating under MS-DOS or PC-DOS (Version 3.1 onwards) and incorporating a VGA-board with a feature connector for connecting it to a super video windows framegrabber board, for which there must be available a 16-bit large slot. In addition, a VGA-monitor (50 - 70 Hz, horizontal scan rate at least 31.5 kHz), a common video camera and recorder, and a simple rectangular calibration frame are required. The advantage of the new method lies in its ease of application, its comparatively high accuracy, and in the rapid availability of the body segment parameters, which is particularly useful in clinical practice.
An example of its practical application illustrates the technique.
NASA Astrophysics Data System (ADS)
2013-01-01
Due to a production error, the article 'Corrigendum: Task-based evaluation of segmentation algorithms for diffusion-weighted MRI without using a gold standard' by Abhinav K Jha, Matthew A Kupinski, Jeffrey J Rodriguez, Renu M Stephen and Alison T Stopeck was duplicated, and the article 'Corrigendum: Complete electrode model in EEG: relationship and differences to the point electrode model' by S Pursiainen, F Lucka and C H Wolters was omitted, in the print version of Physics in Medicine & Biology, volume 58, issue 1. The online versions of both articles are not affected. The article 'Corrigendum: Complete electrode model in EEG: relationship and differences to the point electrode model' by S Pursiainen, F Lucka and C H Wolters will be included in the print version of this issue (Physics in Medicine & Biology, volume 58, issue 2). We apologise unreservedly for this error. Jon Ruffle, Publisher
Adaptive partially hidden Markov models with application to bilevel image coding.
Forchhammer, S; Rasmussen, T S
1999-01-01
Partially hidden Markov models (PHMMs) have previously been introduced. The transition and emission/output probabilities from hidden states, as known from HMMs, are conditioned on the past. In this way, the HMM may be applied to images, introducing the dependencies of the second dimension by conditioning. In this paper, the PHMM is extended to multiple sequences with a multiple-token version, and adaptive versions of PHMM coding are presented. The different versions of the PHMM are applied to lossless bilevel image coding. To reduce and optimize the model cost and size, the contexts are organized in trees and effective quantization of the parameters is introduced. The new coding methods achieve results that are better than the JBIG standard on selected test images, although at the cost of increased complexity. By the minimum description length principle, the methods presented for optimizing the code length may serve as guidance for training (P)HMMs for, e.g., segmentation or recognition purposes. Thereby, the PHMM models provide a new approach to image modeling.
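The conditioning idea can be illustrated with a much simpler stand-in: an adaptive binary model whose probabilities are conditioned on a two-pixel causal context (left and above neighbours), with the ideal code length measured as -log2 of the adaptive Krichevsky-Trofimov estimate. This is only a hedged sketch of context-conditioned bilevel coding, not the paper's multiple-token PHMM or its context trees.

```python
import math

def adaptive_code_length(image):
    """Ideal code length (in bits) of a bilevel image under an adaptive
    model conditioned on the left and above neighbours (0 off the edge)."""
    counts = {}  # context -> [count of 0s, count of 1s]
    bits = 0.0
    for r, row in enumerate(image):
        for c, pixel in enumerate(row):
            left = row[c - 1] if c > 0 else 0
            above = image[r - 1][c] if r > 0 else 0
            n = counts.setdefault((left, above), [0, 0])
            p = (n[pixel] + 0.5) / (n[0] + n[1] + 1.0)  # KT estimate
            bits += -math.log2(p)
            n[pixel] += 1  # adapt the model after coding the pixel
    return bits

flat = [[0] * 16 for _ in range(16)]
bits = adaptive_code_length(flat)
# a constant 16x16 image costs far fewer than its raw 256 bits
```

Highly predictable images compress toward a few bits; images with no context structure approach one bit per pixel.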
Socio-Culturally Oriented Plan Discovery Environment (SCOPE)
2005-05-01
In the EAGLE setting, we are using a modified version of the fuzzy segmentation algorithm developed by Udupa and his associates, together with a rule-based (Fu et al., 2003) and a cognitive-model-based (Eilbert et al., 2002) algorithm, and a method for combining the results.
Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations.
Zala, Sarah M; Reitschmidt, Doris; Noll, Anton; Balazs, Peter; Penn, Dustin J
2017-01-01
House mice (Mus musculus) emit complex ultrasonic vocalizations (USVs) during social and sexual interactions, which have features similar to bird song (i.e., they are composed of several different types of syllables, uttered in succession over time to form a pattern of sequences). Manually processing complex vocalization data is time-consuming and potentially subjective, and therefore, we developed an algorithm that automatically detects mouse ultrasonic vocalizations (Automatic Mouse Ultrasound Detector or A-MUD). A-MUD is a script that runs on STx acoustic software (S_TOOLS-STx version 4.2.2), which is free for scientific use. This algorithm improved the efficiency of processing USV files, as it was 4-12 times faster than manual segmentation, depending upon the size of the file. We evaluated A-MUD error rates using manually segmented sound files as a 'gold standard' reference, and compared them to a commercially available program. A-MUD had lower error rates than the commercial software, as it detected significantly more correct positives, and fewer false positives and false negatives. The errors generated by A-MUD were mainly false negatives, rather than false positives. This study is the first to systematically compare error rates for automatic ultrasonic vocalization detection methods, and A-MUD and subsequent versions will be made available for the scientific community.
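A common way to score an automatic detector such as A-MUD against manually segmented gold-standard files is interval matching: a detection counts as a correct positive if it overlaps a reference vocalization, an unmatched detection is a false positive, and an unmatched reference interval is a false negative. The sketch below uses simple overlap matching with invented times; the exact matching rule used in the study may differ.

```python
def score_detections(detected, reference):
    """Count correct positives, false positives, and false negatives for
    detected vs. reference (start, end) intervals, matched by overlap."""
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]
    tp = sum(any(overlaps(d, r) for r in reference) for d in detected)
    fp = len(detected) - tp
    fn = sum(not any(overlaps(r, d) for d in detected) for r in reference)
    return tp, fp, fn

tp, fp, fn = score_detections(
    detected=[(0.1, 0.3), (0.5, 0.6), (2.0, 2.1)],
    reference=[(0.15, 0.25), (0.5, 0.65), (1.0, 1.2)],
)
# two matched detections, one spurious detection, one missed call
```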
Hall, L O; Bensaid, A M; Clarke, L P; Velthuizen, R P; Silbiger, M S; Bezdek, J C
1992-01-01
Magnetic resonance (MR) brain section images are segmented and then synthetically colored to give visual representations of the original data with three approaches: the literal and approximate fuzzy c-means unsupervised clustering algorithms, and a supervised computational neural network. Initial clinical results are presented on normal volunteers and selected patients with brain tumors surrounded by edema. Supervised and unsupervised segmentation techniques provide broadly similar results. Unsupervised fuzzy algorithms were visually observed to show better segmentation when compared with raw image data for volunteer studies. For a more complex segmentation problem with tumor/edema or cerebrospinal fluid boundary, where the tissues have similar MR relaxation behavior, inconsistency in rating among experts was observed, with fuzz-c-means approaches being slightly preferred over feedforward cascade correlation results. Various facets of both approaches, such as supervised versus unsupervised learning, time complexity, and utility for the diagnostic process, are compared.
NASA Astrophysics Data System (ADS)
Farahi, Maria; Rabbani, Hossein; Talebi, Ardeshir; Sarrafzadeh, Omid; Ensafi, Shahab
2015-12-01
Visceral Leishmaniasis is a parasitic disease that affects the liver, spleen and bone marrow. According to the World Health Organization report, definitive diagnosis is possible only by direct observation of the Leishman body in microscopic images taken from bone marrow samples. We utilize morphological operations and the Chan-Vese (CV) level set method to segment Leishman bodies in digital color microscopic images captured from bone marrow samples. A linear contrast stretching method is used for image enhancement, and a morphological method is applied to determine the parasite regions and remove unwanted objects. Modified global and local CV level set methods are proposed for segmentation, and a shape-based stopping factor is used to speed up the algorithm. Manual segmentation is considered as ground truth to evaluate the proposed method. The method was tested on 28 samples and achieved a mean segmentation error of 10.90% for the global model and 9.76% for the local model.
Vortex nozzle for segmenting and transporting metal chips from turning operations
Bieg, L.F.
1993-04-20
Apparatus for collecting, segmenting and conveying metal chips from machining operations utilizes a compressed gas driven vortex nozzle for receiving the chip and twisting it to cause the chip to segment through the application of torsional forces to the chip. The vortex nozzle is open ended and generally tubular in shape with a converging inlet end, a constant diameter throat section and a diverging exhaust end. Compressed gas is discharged through angled vortex ports in the nozzle throat section to create vortex flow in the nozzle and through an annular inlet at the entrance to the converging inlet end to create suction at the nozzle inlet and cause ambient air to enter the nozzle. The vortex flow in the nozzle causes the metal chip to segment and the segments thus formed to pass out of the discharge end of the nozzle where they are collected, cleaned and compacted as needed.
Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert
2018-05-08
In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems arising from drifting image acquisition conditions, background noise and high variation in colony features demand a user-friendly, adaptive and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB), which implements a supervised, automatic and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm that takes segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for supervised image segmentation. It allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end-users.
Multisegment nanowire sensors for the detection of DNA molecules.
Wang, Xu; Ozkan, Cengiz S
2008-02-01
We describe a novel application for detecting specific single strand DNA sequences using multisegment nanowires via a straightforward surface functionalization method. Nanowires comprising CdTe-Au-CdTe segments are fabricated using electrochemical deposition, and electrical characterization indicates a p-type behavior for the multisegment nanostructures, in a back-to-back Schottky diode configuration. Such nanostructures modified with thiol-terminated probe DNA fragments could function as high fidelity sensors for biomolecules at very low concentration. The gold segment is utilized for functionalization and binding of single strand DNA (ssDNA) fragments while the CdTe segments at both ends serve to modulate the equilibrium Fermi level of the heterojunction device upon hybridization of the complementary DNA fragments (cDNA) to the ssDNA over the Au segment. Employing such multisegment nanowires could lead to the fabrication of more sophisticated, highly multispecific biosensors via selective functionalization of individual segments for biowarfare sensing and medical diagnostics applications.
Inferring the most probable maps of underground utilities using Bayesian mapping model
NASA Astrophysics Data System (ADS)
Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony
2018-03-01
Mapping the Underworld (MTU), a major initiative in the UK, is focused on addressing the social, environmental and economic consequences of the inability to locate buried underground utilities (such as pipes and cables) by developing a multi-sensor mobile device. The aim of the MTU device is to locate different types of buried assets in real time with the use of automated data processing techniques and statutory records. The statutory records, even though typically inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, the integration of information from multiple sensors (raw data) with these qualitative maps and their visualization is challenging and requires robust machine learning/data fusion approaches. In this paper, an approach for the automated creation of revised maps was developed as a Bayesian mapping model, integrating the knowledge extracted from the sensors' raw data with the available statutory records. Statutory records were combined with the hypotheses from the sensors for an initial estimate of what might be found underground and roughly where. The maps were (re)constructed using automated image segmentation techniques for hypothesis extraction and Bayesian classification techniques for segment-manhole connections. The model, consisting of an image segmentation algorithm and various Bayesian classification techniques (segment recognition and the expectation maximization (EM) algorithm), provided robust performance on various simulated as well as real sites in terms of predicting linear/non-linear segments and constructing refined 2D/3D maps.
Kim, Ki-Tack; Lee, Sang-Hun; Suk, Kyung-Soo; Lee, Jung-Hee; Jeong, Bi-O
2010-06-01
The purpose of this study was to analyze the biomechanical effects of three different constrained types of an artificial disc on the implanted and adjacent segments in the lumbar spine using a finite element model (FEM). The created intact model was validated by comparing the flexion-extension response without pre-load with the corresponding results obtained from the published experimental studies. The validated intact lumbar model was tested after implantation of three artificial discs at L4-5. Each implanted model was subjected to a combination of 400 N follower load and 5 Nm of flexion/extension moments. ABAQUS version 6.5 (ABAQUS Inc., Providence, RI, USA) and FEMAP version 8.20 (Electronic Data Systems Corp., Plano, TX, USA) were used for meshing and analysis of geometry of the intact and implanted models. Under the flexion load, the intersegmental rotation angles of all the implanted models were similar to that of the intact model, but under the extension load, the values were greater than that of the intact model. The facet contact loads of three implanted models were greater than the loads observed with the intact model. Under the flexion load, three types of the implanted model at the L4-5 level showed the intersegmental rotation angle similar to the one measured with the intact model. Under the extension load, all of the artificial disc implanted models demonstrated an increased extension rotational angle at the operated level (L4-5), resulting in an increase under the facet contact load when compared with the adjacent segments. The increased facet load may lead to facet degeneration.
Buhimschi, Catalin S; Buhimschi, Irina A; Wehrum, Mark J; Molaskey-Jones, Sherry; Sfakianaki, Anna K; Pettker, Christian M; Thung, Stephen; Campbell, Katherine H; Dulay, Antonette T; Funai, Edmund F; Bahtiyar, Mert O
2011-10-01
The aim was to test the hypothesis that myometrial thickness predicts the success of external cephalic version. Abdominal ultrasonographic scans were performed in 114 consecutive pregnant women with breech singletons before an external cephalic version maneuver. Myometrial thickness was measured by a standardized protocol at three sites: the lower segment, midanterior wall, and the fundal uterine wall. Independent variables analyzed in conjunction with myometrial thickness were: maternal age, parity, body mass index, abdominal wall thickness, estimated fetal weight, amniotic fluid index, placental thickness and location, fetal spine position, breech type, and delivery outcomes such as final mode of delivery and birth weight. Successful version was associated with a thicker ultrasonographic fundal myometrium (unsuccessful: 6.7 [5.5-8.4] compared with successful: 7.4 [6.6-9.7] mm, P=.037). Multivariate regression analysis showed that increased fundal myometrial thickness, high amniotic fluid index, and nonfrank breech presentation were the strongest independent predictors of external cephalic version success (P<.001). A fundal myometrial thickness greater than 6.75 mm and an amniotic fluid index greater than 12 cm were each associated with successful external cephalic versions (fundal myometrial thickness: odds ratio [OR] 2.4, 95% confidence interval [CI] 1.1-5.2, P=.029; amniotic fluid index: OR 2.8, 95% CI 1.3-6.0, P=.008). Combining the two variables resulted in an absolute risk reduction for a failed version of 27.6% (95% CI 7.1-48.1) and a number needed to treat of four (95% CI 2.1-14.2). Fundal myometrial thickness and amniotic fluid index contribute to the success of external cephalic version, and their evaluation can be easily incorporated in algorithms before the procedure. Level of evidence: III.
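The reported number needed to treat follows directly from the absolute risk reduction: NNT = 1/ARR, conventionally rounded up to a whole number of patients. A quick check against the figures in the abstract:

```python
import math

def number_needed_to_treat(absolute_risk_reduction):
    """NNT = 1 / ARR, rounded up to a whole number of patients."""
    return math.ceil(1.0 / absolute_risk_reduction)

nnt = number_needed_to_treat(0.276)  # 27.6% absolute risk reduction
# 1 / 0.276 = 3.62..., which rounds up to the reported NNT of four
```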
Nett Technologies’ BlueMAX 100 version A Urea-Based SCR System utilizes a zeolite catalyst coating on a cordierite honeycomb substrate for heavy-duty diesel nonroad engines for use with commercial ultra-low–sulfur diesel fuel. This environmental technology verification (ETV) repo...
Automatic lumen segmentation in IVOCT images using binary morphological reconstruction
2013-01-01
Background Atherosclerosis causes millions of deaths and billions in expenses annually around the world. Intravascular Optical Coherence Tomography (IVOCT) is a medical imaging modality that displays high resolution images of coronary cross-sections. Nonetheless, quantitative information can only be obtained with segmentation; consequently, more adequate diagnostics, therapies and interventions can be provided. Since it is a relatively new modality, many different segmentation methods, available in the literature for other modalities, could be successfully applied to IVOCT images, improving accuracy and utility. Method An automatic lumen segmentation approach, based on the Wavelet Transform and Mathematical Morphology, is presented. The methodology is divided into three main parts. First, the preprocessing stage attenuates undesirable information and enhances important information. Second, in the feature extraction block, the wavelet transform is associated with an adapted version of the Otsu threshold; hence, tissue information is discriminated and binarized. Finally, binary morphological reconstruction improves the binary information and constructs the binary lumen object. Results The evaluation was carried out by segmenting 290 challenging images from human and pig coronaries and rabbit iliac arteries; the outcomes were compared with the gold standards made by experts. The following accuracies were obtained: True Positive (%) = 99.29 ± 2.96, False Positive (%) = 3.69 ± 2.88, False Negative (%) = 0.71 ± 2.96, Max False Positive Distance (mm) = 0.1 ± 0.07, Max False Negative Distance (mm) = 0.06 ± 0.1. Conclusions By segmenting a number of IVOCT images with various features, the proposed technique proved to be robust and more accurate than previously published approaches; in addition, the method is completely automatic, providing a new tool for IVOCT segmentation. PMID:23937790
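The feature-extraction stage above pairs the wavelet transform with an adapted version of the Otsu threshold. For reference, plain Otsu picks the gray level that maximizes the between-class variance of the histogram; the sketch below shows the standard form on an invented bimodal pixel sample (the paper's adaptation differs in its details).

```python
def otsu_threshold(pixels, levels=256):
    """Standard Otsu: the threshold maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]            # pixels at or below t (class 0)
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue             # one class empty: skip
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / (total - w0)
        var_between = w0 * (total - w0) * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# bimodal toy data: dark background around 10-12, bright tissue around 200-205
pixels = [10] * 50 + [12] * 50 + [200] * 30 + [205] * 30
t = otsu_threshold(pixels)
# the threshold lands between the two modes
```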
Evolving geometrical heterogeneities of fault trace data
NASA Astrophysics Data System (ADS)
Wechsler, Neta; Ben-Zion, Yehuda; Christofferson, Shari
2010-08-01
We perform a systematic comparative analysis of geometrical fault zone heterogeneities using derived measures from digitized fault maps that are not very sensitive to mapping resolution. We employ the digital GIS map of California faults (version 2.0) and analyse the surface traces of active strike-slip fault zones with evidence of Quaternary and historic movements. Each fault zone is broken into segments that are defined as a continuous length of fault bounded by changes of angle larger than 1°. Measurements of the orientations and lengths of fault zone segments are used to calculate the mean direction and misalignment of each fault zone from the local plate motion direction, and to define several quantities that represent the fault zone disorder. These include circular standard deviation and circular standard error of segments, orientation of long and short segments with respect to the mean direction, and normal separation distances of fault segments. We examine the correlations between various calculated parameters of fault zone disorder and the following three potential controlling variables: cumulative slip, slip rate and fault zone misalignment from the plate motion direction. The analysis indicates that the circular standard deviation and circular standard error of segments decrease overall with increasing cumulative slip and increasing slip rate of the fault zones. The results imply that the circular standard deviation and error, quantifying the range or dispersion in the data, provide effective measures of the fault zone disorder, and that the cumulative slip and slip rate (or more generally slip rate normalized by healing rate) represent the fault zone maturity. The fault zone misalignment from plate motion direction does not seem to play a major role in controlling the fault trace heterogeneities. The frequency-size statistics of fault segment lengths can be fitted well by an exponential function over the entire range of observations.
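The circular statistics used above as disorder measures can be written down compactly. The sketch below uses the standard directional formulas (mean resultant length, mean direction, and circular standard deviation); for axial data such as fault strikes, the angles are typically doubled before applying them, a step omitted here for brevity, and the segment orientations are invented.

```python
import math

def circular_stats(angles):
    """Mean direction and circular standard deviation of angles (radians)."""
    c = sum(math.cos(a) for a in angles) / len(angles)
    s = sum(math.sin(a) for a in angles) / len(angles)
    r = math.hypot(c, s)                       # mean resultant length, 0..1
    mean_direction = math.atan2(s, c)
    circ_std = math.sqrt(-2.0 * math.log(r))   # circular standard deviation
    return mean_direction, circ_std

aligned = [0.0, 0.05, -0.05, 0.02]   # well-aligned segments ("mature" trace)
scattered = [0.0, 0.8, -0.7, 0.4]    # disordered segments
mean_a, std_a = circular_stats(aligned)
mean_s, std_s = circular_stats(scattered)
# the aligned trace yields a much smaller circular standard deviation
```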
Kushibar, Kaisar; Valverde, Sergi; González-Villà, Sandra; Bernal, Jose; Cabezas, Mariano; Oliver, Arnau; Lladó, Xavier
2018-06-15
Sub-cortical brain structure segmentation in Magnetic Resonance Images (MRI) has attracted the interest of the research community for a long time, as morphological changes in these structures are related to different neurodegenerative disorders. However, manual segmentation of these structures can be tedious and prone to variability, highlighting the need for robust automated segmentation methods. In this paper, we present a novel convolutional neural network based approach for accurate segmentation of the sub-cortical brain structures that combines both convolutional and prior spatial features for improving the segmentation accuracy. In order to increase the accuracy of the automated segmentation, we propose to train the network using a restricted sample selection to force the network to learn the most difficult parts of the structures. We evaluate the accuracy of the proposed method on the public MICCAI 2012 challenge and IBSR 18 datasets, comparing it with different traditional and deep learning state-of-the-art methods. On the MICCAI 2012 dataset, our method shows an excellent performance comparable to the best participant strategy on the challenge, while performing significantly better than state-of-the-art techniques such as FreeSurfer and FIRST. On the IBSR 18 dataset, our method also exhibits a significant increase in performance with respect to not only FreeSurfer and FIRST, but also results comparable to or better than other recent deep learning approaches. Moreover, our experiments show that both the addition of the spatial priors and the restricted sampling strategy have a significant effect on the accuracy of the proposed method. In order to encourage the reproducibility and the use of the proposed method, a public version of our approach is available to download for the neuroimaging community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
High speed multiwire photon camera
NASA Technical Reports Server (NTRS)
Lacy, Jeffrey L. (Inventor)
1991-01-01
An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.
High speed multiwire photon camera
NASA Technical Reports Server (NTRS)
Lacy, Jeffrey L. (Inventor)
1989-01-01
An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.
Translation and validation of the Malay version of the Stroke Knowledge Test.
Sowtali, Siti Noorkhairina; Yusoff, Dariah Mohd; Harith, Sakinah; Mohamed, Monniaty
2016-04-01
To date, there is a lack of published studies on assessment tools to evaluate the effectiveness of stroke education programs. This study developed and validated the Malay-language version of the Stroke Knowledge Test research instrument. The study involved translation, validity, and reliability phases. The instrument underwent backward and forward translation of the English version into the Malay language. Nine experts reviewed the content for consistency, clarity, difficulty, and suitability for inclusion. Perceived usefulness and utilization were obtained from the experts' opinions. Later, face validity assessment was conducted with 10 stroke patients to determine the appropriateness of the sentences and grammar used. A pilot study was conducted with 41 stroke patients to determine the item analysis and reliability of the translated instrument using Kuder-Richardson 20 (equivalent to Cronbach's alpha for dichotomous items). The final Malay-version Stroke Knowledge Test included 20 items with good content coverage, acceptable item properties, and positive expert review ratings. Psychometric investigation suggests that the Malay-version Stroke Knowledge Test had moderate reliability, with a Kuder-Richardson 20 coefficient of 0.58. Improvement is required for items with unacceptable difficulty indices. Overall, the average ratings of perceived usefulness and perceived utility of the instrument were both 72.7%, suggesting that reviewers were likely to use the instrument in their facilities. The Malay-version Stroke Knowledge Test was a valid and reliable tool to assess educational needs and to evaluate stroke knowledge among participants of group-based stroke education programs in Malaysia.
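The reliability coefficient reported above is Kuder-Richardson 20, computed for dichotomously scored items from the item difficulties and the total-score variance. A minimal sketch with made-up responses (rows are respondents, columns are items; population variance is used here, while some texts use the n-1 form):

```python
def kr20(responses):
    """Kuder-Richardson 20 for a matrix of 0/1 item responses."""
    k = len(responses[0])   # number of items
    n = len(responses)      # number of respondents
    p = [sum(row[i] for row in responses) / n for i in range(k)]
    pq = sum(pi * (1.0 - pi) for pi in p)      # sum of item variances
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n  # total-score variance
    return (k / (k - 1)) * (1.0 - pq / var)

responses = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
reliability = kr20(responses)
```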
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenquin, U.P.; Stewart, K.B.; Heeb, C.M.
1975-07-01
The principal aim of this neutron cross-section research is to provide the utility industry with a 'standard nuclear data base' that will perform satisfactorily when used for analysis of thermal power reactor systems. EPRI is coordinating its activities with those of the Cross Section Evaluation Working Group (CSEWG), responsible for the development of the Evaluated Nuclear Data File-B (ENDF/B) library, in order to improve the performance of the ENDF/B library in thermal reactors and other applications of interest to the utility industry. Battelle-Northwest (BNW) was commissioned to process the ENDF/B Version-4 data files into a group-constant form for use in the LASER and LEOPARD neutronics codes. Performance information on the library should provide the necessary feedback for improving the next version of the library, and a consistent data base is expected to be useful in intercomparing the versions of the LASER and LEOPARD codes presently being used by different utility groups. This report describes the BNW multi-group libraries and the procedures followed in their preparation and testing. (GRA)
Electronic Gaming Machine (EGM) Environments: Market Segments and Risk.
Rockloff, Matthew; Moskovsky, Neda; Thorne, Hannah; Browne, Matthew; Bryden, Gabrielle
2017-12-01
This study used a marketing-research paradigm to explore gamblers' attraction to EGMs based on different elements of the environment. A select set of environmental features was sourced from a prior study (Thorne et al. in J Gambl Issues 2016b), and a discrete choice experiment was conducted through an online survey. Using the same dataset first described by Rockloff et al. (EGM Environments that contribute to excess consumption and harm, 2015), a sample of 245 EGM gamblers were sourced from clubs in Victoria, Australia, and 7516 gamblers from an Australian national online survey-panel. Participants' choices amongst sets of hypothetical gambling environments allowed for an estimation of the implied individual-level utilities for each feature (e.g., general sounds, location, etc.). K-means clustering on these utilities identified four unique market segments for EGM gambling, representing four different types of consumers. The segments were named according to their dominant features: Social, Value, High Roller and Internet. We found that the environments orientated towards the Social and Value segments were most conducive to attracting players with relatively few gambling problems, while the High Roller and Internet-focused environments had greater appeal for players with problems and vulnerabilities. This study has generated new insights into the kinds of gambling environments that are most consistent with safe play.
Airway Tree Segmentation in Serial Block-Face Cryomicrotome Images of Rat Lungs
Bauer, Christian; Krueger, Melissa A.; Lamm, Wayne J.; Smith, Brian J.; Glenny, Robb W.; Beichel, Reinhard R.
2014-01-01
A highly automated method for the segmentation of airways in serial block-face cryomicrotome images of rat lungs is presented. First, a point inside the trachea is manually specified. Then, a set of candidate airway centerline points is automatically identified. By utilizing a novel path extraction method, a centerline path between the root of the airway tree and each point in the set of candidate centerline points is obtained; this approach robustly handles local disturbances and avoids the shortcut problem of standard minimum cost path algorithms. The union of all centerline paths is utilized to generate an initial airway tree structure, and a pruning algorithm is applied to automatically remove erroneous subtrees or branches. Finally, a surface segmentation method is used to obtain the airway lumen. The method was validated on five image volumes of Sprague-Dawley rats. Based on an expert-generated independent standard, an assessment of airway identification and lumen segmentation performance was conducted. The average airway detection sensitivity was 87.4% with a 95% confidence interval (CI) of (84.9, 88.6)%. A plot of sensitivity as a function of airway radius is provided. The combined estimate of airway detection specificity was 100% with a 95% CI of (99.4, 100)%. The average number and diameter of terminal airway branches were 1179 and 159 μm, respectively. Segmentation results include airways up to 31 generations. The regression intercept and slope of airway radius measurements derived from final segmentations were estimated to be 7.22 μm and 1.005, respectively. The developed approach enables quantitative studies of physiology and lung diseases in rats, requiring detailed geometric airway models. PMID:23955692
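For contrast with the paper's path extraction approach, the standard minimum cost path it improves upon is a Dijkstra-style search over a cost image. A minimal grid sketch with invented costs (not the cryomicrotome data), in which the cheapest route detours around a high-cost column:

```python
import heapq

def min_cost_path(cost, start, goal):
    """Minimum total cell cost of a 4-connected path from start to goal."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
total = min_cost_path(grid, (0, 0), (0, 2))
# the cheapest route goes around the high-cost middle column (7 cells at 1)
```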
NASA Astrophysics Data System (ADS)
Gardella, Joseph A.; Mahoney, Christine M.
2004-06-01
While many XPS and SIMS studies of polymers have detected and quantified segregation of low surface energy blocks or components in copolymers and polymer blends [D. Briggs, in: D.R. Clarke, S. Suresh, I.M. Ward (Eds.), Surface Analysis of Polymers by XPS and Static SIMS, Cambridge University Press, Cambridge, 1998 (Chapter 5).], this paper reports ToF-SIMS studies of direct measurement of the segment length distribution at the surface of siloxane copolymers. These data allow insight into the segregation of particular portions of the oligomeric distribution; specifically, in this study, longer PDMS oligomers segregated at the expense of shorter PDMS chains. We have reported XPS analysis of competitive segregation effects for short PDMS chains [Macromolecules 35 (13) (2002) 5256]. In this study, a series of poly(ureaurethane)-poly(dimethylsiloxane) (PUU-PDMS) copolymers have been synthesized containing varying ratios of G-3 and G-9 (G-X denotes the average segment length of the PDMS added), while maintaining a constant overall siloxane weight percentage (10, 30, and 60%). These copolymers were utilized as model systems to study the preferential segregation of certain siloxane segment lengths to the surface over others. ToF-SIMS analysis of PUU-PDMS copolymers has yielded high-mass range copolymer fragmentation patterns containing intact PDMS segments. For the first time, this information is utilized to determine PDMS segment length distributions at the copolymer surface as compared to the bulk. The results show that longer siloxane segment lengths preferentially segregate to the surface over shorter chain lengths. These results also show the importance of ToF-SIMS and mass spectrometry in the development of new materials containing low molecular weight amino-propyl-terminated siloxanes.
Gross, Colin A; Reddy, Chandan K; Dazzo, Frank B
2010-02-01
Quantitative microscopy and digital image analysis are underutilized in microbial ecology largely because of the laborious task of segmenting foreground object pixels from background, especially in complex color micrographs of environmental samples. In this paper, we describe an improved computing technology developed to alleviate this limitation. The system's uniqueness is its ability to edit digital images accurately when presented with the difficult yet commonplace challenge of removing background pixels whose three-dimensional color space overlaps the range that defines foreground objects. Image segmentation is accomplished by utilizing algorithms that address color and spatial relationships of user-selected foreground object pixels. Performance of the color segmentation algorithm, evaluated on 26 complex micrographs at single pixel resolution, had an overall pixel classification accuracy of 99+%. Several applications illustrate how this improved computing technology can successfully resolve numerous challenges of complex color segmentation in order to produce images from which quantitative information can be accurately extracted, thereby gaining new perspectives on the in situ ecology of microorganisms. Examples include improvements in the quantitative analysis of (1) microbial abundance and phylotype diversity of single cells classified by their discriminating color within heterogeneous communities, (2) cell viability, (3) spatial relationships and intensity of bacterial gene expression involved in cellular communication between individual cells within rhizoplane biofilms, and (4) biofilm ecophysiology based on ribotype-differentiated radioactive substrate utilization. The stand-alone executable file plus user manual and tutorial images for this color segmentation computing application are freely available at http://cme.msu.edu/cmeias/.
This improved computing technology opens new opportunities of imaging applications where discriminating colors really matter most, thereby strengthening quantitative microscopy-based approaches to advance microbial ecology in situ at individual single-cell resolution.
Katsibardi, Katerina; Braoudaki, Maria; Papathanasiou, Chrissa; Karamolegou, Kalliopi; Tzortzatou-Stathopoulou, Fotini
2011-09-01
We analyzed the CDR3 region of 80 children with B-cell acute lymphoblastic leukemia (B-ALL) using the ImMunoGeneTics Information System and JOINSOLVER. In total, 108 IGH@ rearrangements were analyzed. Most of them (75.3%) were non-productive. IGHV@ segments proximal to IGHD-IGHJ@ were preferentially rearranged (45.3%). Increased utilization of IGHV3 segments IGHV3-13 (11.3%) and IGHV3-15 (9.3%), IGHD3 (30.5%), and IGHJ4 (34%) was noted. In pro-B ALL, the IGHV3-11 (33.3%), IGHV6-1 (33.3%), IGHD2-21 (50%), IGHJ4 (50%), and IGHJ6 (50%) segments were more frequent. Shorter CDR3 length was observed in IGHV@6, IGHD7, and IGHJ1 segments, whereas increased CDR3 length was related to IGHV3, IGHD2, and IGHJ4 segments. Increased risk of relapse was found in patients with productive sequences. Specifically, the relapse-free survival rate at 5 years in patients with productive sequences at diagnosis was 75% (standard error [SE] ±9%), whereas in patients with non-productive sequences it was 97% (SE ±1.92%) (p-value = 0.0264). Monoclonality and oligoclonality were identified in 81.2% and 18.75% of cases at diagnosis, respectively. Sequence analysis revealed IGHV@ to IGHDJ joining in only 6.6% of cases with oligoclonality. The majority (75%) of relapsed patients had monoclonal IGH@ rearrangements. The preferential utilization of IGHV@ segments proximal to IGHDJ depended on their location on the IGHV@ locus. Molecular mechanisms occurring during IGH@ rearrangement might play an essential role in childhood ALL prognosis. In our study, the productivity of the rearranged sequences at diagnosis proved to be a significant prognostic factor.
NASA Astrophysics Data System (ADS)
Rundle, J.; Rundle, P.; Donnellan, A.; Li, P.
2003-12-01
We consider the problem of the complex dynamics of earthquake fault systems, and whether numerical simulations can be used to define an ensemble forecasting technology similar to that used in weather and climate research. To effectively carry out such a program, we need 1) a topologically realistic model to simulate the fault system; 2) data sets to constrain the model parameters through a systematic program of data assimilation; 3) a computational technology making use of modern paradigms of high performance and parallel computing systems; and 4) software to visualize and analyze the results. In particular, we focus attention on a new version of our code Virtual California (version 2001) in which we model all of the major strike slip faults extending throughout California, from the Mexico-California border to the Mendocino Triple Junction. We use the historic data set of earthquakes larger than magnitude M > 6 to define the frictional properties of all 654 fault segments (degrees of freedom) in the model. Previous versions of Virtual California had used only 215 fault segments to model the strike slip faults in southern California. To compute the dynamics and the associated surface deformation, we use message passing as implemented in the MPICH standard distribution on a small Beowulf cluster consisting of 10 CPUs. We are also planning to run the code on significantly larger machines so that we can begin to examine much finer spatial scales of resolution, and to assess scaling properties of the code. We present results of simulations both as static images and as MPEG movies, so that the dynamical aspects of the computation can be assessed by the viewer. We also compute a variety of statistics from the simulations, including magnitude-frequency relations, and compare these with data from real fault systems.
Spectral analysis of electrocardiographic RR intervals to indicate atrial fibrillation
NASA Astrophysics Data System (ADS)
Nuryani, Nuryani; Satrio Nugroho, Anto
2017-11-01
Atrial fibrillation is a serious heart disease associated with an increased risk of death, and thus its early detection is necessary. We have investigated the spectral pattern of the electrocardiogram in relation to atrial fibrillation. The feature utilized from the electrocardiogram is the RR interval, i.e., the time interval between two consecutive R peaks. A series of RR intervals in a time segment is transformed into the frequency domain. The frequency components are examined to find those that significantly associate with atrial fibrillation. A segment is labeled as atrial fibrillation or normal by considering a defined number of atrial fibrillation RR intervals within the segment. Using clinical data of 23 patients with atrial fibrillation, we find that the frequency components could be used to indicate atrial fibrillation.
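The pipeline the abstract describes, converting a windowed RR-interval series to the frequency domain, can be sketched as follows. The transform length, mean removal, and normalization here are illustrative assumptions; the abstract does not specify them.

```python
import numpy as np

def rr_spectrum(rr_intervals, n_freq=64):
    """Power spectrum of a segment of RR intervals (seconds).

    The zero-padded transform length and the mean removal are
    illustrative assumptions, not taken from the paper."""
    rr = np.asarray(rr_intervals, dtype=float)
    rr = rr - rr.mean()                       # remove the DC component
    spectrum = np.abs(np.fft.rfft(rr, n=n_freq)) ** 2
    return spectrum / len(rr)                 # power per frequency component

# Example: a short, fairly regular RR series (in seconds)
s = rr_spectrum([0.80, 0.82, 0.79, 0.81, 0.80, 0.83, 0.78, 0.80])
```

Frequency components of `s` would then be screened for association with atrial fibrillation, e.g. by comparing their distributions over AF-labeled and normal segments.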
Best Merge Region Growing Segmentation with Integrated Non-Adjacent Region Object Aggregation
NASA Technical Reports Server (NTRS)
Tilton, James C.; Tarabalka, Yuliya; Montesano, Paul M.; Gofman, Emanuel
2012-01-01
Best merge region growing normally produces segmentations with closed connected region objects. Recognizing that spectrally similar objects often appear in spatially separate locations, we present an approach for tightly integrating best merge region growing with non-adjacent region object aggregation, which we call Hierarchical Segmentation or HSeg. However, the original implementation of non-adjacent region object aggregation in HSeg required excessive computing time even for moderately sized images because of the required intercomparison of each region with all other regions. This problem was previously addressed by a recursive approximation of HSeg, called RHSeg. In this paper we introduce a refined implementation of non-adjacent region object aggregation in HSeg that reduces the computational requirements of HSeg without resorting to the recursive approximation. In this refinement, HSeg's intercomparisons among non-adjacent regions are limited to regions of a dynamically determined minimum size. We show that this refined version of HSeg can process moderately sized images in about the same amount of time as RHSeg incorporating the original HSeg. Nonetheless, RHSeg is still required for processing very large images due to its lower computer memory requirements and amenability to parallel processing. We then note a limitation of RHSeg with the original HSeg for high spatial resolution images, and show how incorporating the refined HSeg into RHSeg overcomes this limitation. The quality of the image segmentations produced by the refined HSeg is then compared with other available best merge segmentation approaches. Finally, we comment on the unique nature of the hierarchical segmentations produced by HSeg.
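As a minimal illustration of the best merge principle underlying HSeg, the sketch below greedily merges the most similar adjacent regions of a 1D signal until a target region count is reached. HSeg itself operates on images, builds a full segmentation hierarchy, and additionally aggregates spectrally similar non-adjacent regions; none of that is attempted here.

```python
import numpy as np

def best_merge_1d(values, n_regions):
    """Greedy best-merge region growing on a 1D signal (toy sketch).

    Dissimilarity of an adjacent region pair is taken as the absolute
    difference of region means; this choice is an assumption made for
    illustration only."""
    regions = [[v] for v in values]            # start: one region per sample
    while len(regions) > n_regions:
        means = [np.mean(r) for r in regions]
        # dissimilarity of each adjacent pair of regions
        diffs = [abs(means[i] - means[i + 1]) for i in range(len(means) - 1)]
        i = int(np.argmin(diffs))              # best (most similar) merge
        regions[i] = regions[i] + regions.pop(i + 1)
    return regions

merged = best_merge_1d([1, 1, 1, 9, 9, 9], 2)
```

The non-adjacent aggregation step of HSeg would additionally compare (and possibly merge) regions that are not neighbors, which is the expensive step the paper's refinement restricts to regions above a dynamically determined minimum size.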
NASA Astrophysics Data System (ADS)
Rueda, Sylvia; Udupa, Jayaram K.
2011-03-01
Landmark based statistical object modeling techniques, such as Active Shape Model (ASM), have proven useful in medical image analysis. Identification of the same homologous set of points in a training set of object shapes is the most crucial step in ASM, which has encountered challenges such as (C1) defining and characterizing landmarks; (C2) ensuring homology; (C3) generalizing to n > 2 dimensions; (C4) achieving practical computations. In this paper, we propose a novel global-to-local strategy that attempts to address C3 and C4 directly and works in R^n. The 2D version starts from two initial corresponding points determined in all training shapes via a method α, and subsequently by subdividing the shapes into connected boundary segments by a line determined by these points. A shape analysis method β is applied on each segment to determine a landmark on the segment. This point introduces more pairs of points, the lines defined by which are used to further subdivide the boundary segments. This recursive boundary subdivision (RBS) process continues simultaneously on all training shapes, maintaining synchrony of the level of recursion, and thereby keeping correspondence among generated points automatically by the correspondence of the homologous shape segments in all training shapes. The process terminates when no subdividing lines remain for which method β indicates that a point can be selected on the associated segment. Examples of α and β are presented based on (a) distance; (b) Principal Component Analysis (PCA); and (c) the novel concept of virtual landmarks.
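A toy version of the recursive boundary subdivision idea for a single open 2D boundary segment might look like the following. Here the shape analysis method β is assumed to pick the point farthest from the chord joining the segment endpoints, one plausible distance-based choice; the function name and the fixed-depth termination rule are simplifications of the paper's method.

```python
import numpy as np

def subdivide(points, depth):
    """Recursive boundary subdivision sketch on one open 2D segment.

    'Method beta' here = farthest point from the endpoint chord
    (an illustrative assumption, not the authors' exact criterion)."""
    if depth == 0 or len(points) < 3:
        return [points[0], points[-1]]
    a, b = np.asarray(points[0], float), np.asarray(points[-1], float)
    chord = b - a
    n = np.array([-chord[1], chord[0]])        # normal to the chord
    n = n / (np.linalg.norm(n) + 1e-12)
    d = [abs(np.dot(np.asarray(p, float) - a, n)) for p in points]
    k = int(np.argmax(d))                      # landmark on this segment
    left = subdivide(points[:k + 1], depth - 1)
    right = subdivide(points[k:], depth - 1)
    return left[:-1] + right                   # do not duplicate the landmark

pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
landmarks = subdivide(pts, 1)
```

In the actual RBS scheme the recursion runs synchronously across all training shapes, so corresponding landmarks are generated in the same order on homologous segments.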
Structural Implications of Fluorescence Quenching in the Shaker K+ Channel
Cha, Albert; Bezanilla, Francisco
1998-01-01
When attached to specific sites near the S4 segment of the nonconducting (W434F) Shaker potassium channel, the fluorescent probe tetramethylrhodamine maleimide undergoes voltage-dependent changes in intensity that correlate with the movement of the voltage sensor (Mannuzzu, L.M., M.M. Moronne, and E.Y. Isacoff. 1996. Science. 271:213–216; Cha, A., and F. Bezanilla. 1997. Neuron. 19:1127–1140). The characteristics of this voltage-dependent fluorescence quenching are different in a conducting version of the channel with a different pore substitution (T449Y). Blocking the pore of the T449Y construct with either tetraethylammonium or agitoxin removes a fluorescence component that correlates with the voltage dependence but not the kinetics of ionic activation. This pore-mediated modulation of the fluorescence quenching near the S4 segment suggests that the fluorophore is affected by the state of the external pore. In addition, this modulation may reflect conformational changes associated with channel opening that are prevented by tetraethylammonium or agitoxin. Studies of pH titration, collisional quenchers, and anisotropy indicate that fluorophores attached to residues near the S4 segment are constrained by a nearby region of protein. The mechanism of fluorescence quenching near the S4 segment does not involve either reorientation of the fluorophore or a voltage-dependent excitation shift and is different from the quenching mechanism observed at a site near the S2 segment. Taken together, these results suggest that the extracellular portion of the S4 segment resides in an aqueous protein vestibule and is influenced by the state of the external pore. PMID:9758859
ADVANCED UTILITY SIMULATION MODEL, DESCRIPTION OF THE NATIONAL LOOP (VERSION 3.0)
The report is one of 11 in a series describing the initial development of the Advanced Utility Simulation Model (AUSM) by the Universities Research Group on Energy (URGE) and its continued development by the Science Applications International Corporation (SAIC) research team. The...
NASA Astrophysics Data System (ADS)
Peleshko, V. A.
2016-06-01
The deviator constitutive relation of the proposed theory of plasticity has a three-term form (the stress, stress rate, and strain rate vectors formed from the deviators are collinear) and, in the specialized (applied) version, in addition to the simple loading function, contains four dimensionless constants of the material determined from experiments along a two-link strain trajectory with an orthogonal break. The proposed simple mechanism is used to calculate the constants of the model for four metallic materials that significantly differ in the composition and in the mechanical properties; the obtained constants do not deviate much from their average values (over the four materials). The latter are taken as universal constants in the engineering version of the model, which thus requires only one basic experiment, i.e., a simple loading test. If the material exhibits the strengthening property in cyclic circular deformation, then the model contains an additional constant determined from the experiment along a strain trajectory of this type. (In the engineering version of the model, the cyclic strengthening effect is not taken into account, which imposes a certain upper bound on the difference between the length of the strain trajectory arc and the module of the strain vector.) We present the results of model verification using the experimental data available in the literature about the combined loading along two- and multi-link strain trajectories with various lengths of links and angles of breaks, with plane curvilinear segments of various constant and variable curvature, and with three-dimensional helical segments of various curvature and twist. (All in all, we use more than 80 strain programs; the materials are low- and medium-carbon steels, brass, and stainless steel.)
These results prove that the model can be used to describe the process of arbitrary active (in the sense of nonnegative capacity of the shear) combined loading and final unloading of originally quasi-isotropic elastoplastic materials. In practical calculations, in the absence of experimental data about the properties of a material under combined loading, the use of the engineering version of the model is quite acceptable. The simple identification, wide verifiability, and the availability of a software implementation of the method for solving initial-boundary value problems permit treating the proposed theory as an applied theory.
1995-06-08
Scientists at Marshall's Adaptive Optics Lab demonstrate the Wave Front Sensor alignment using the Phased Array Mirror Extendible Large Aperture (PAMELA) optics adjustment. The primary objective of the PAMELA project is to develop methods for aligning and controlling adaptive optics segmented mirror systems. These systems can be used to acquire or project light energy. The Next Generation Space Telescope is an example of an energy acquisition system that will employ segmented mirrors. Light projection systems can also be used for power beaming and orbital debris removal. All segmented optical systems must be adjusted to provide maximum performance. PAMELA is an ongoing project that NASA is utilizing to investigate various methods for maximizing system performance.
Real-time myocardium segmentation for the assessment of cardiac function variation
NASA Astrophysics Data System (ADS)
Zoehrer, Fabian; Huellebrand, Markus; Chitiboi, Teodora; Oechtering, Thekla; Sieren, Malte; Frahm, Jens; Hahn, Horst K.; Hennemuth, Anja
2017-03-01
Recent developments in MRI enable the acquisition of image sequences with high spatio-temporal resolution. Cardiac motion can be captured without gating and triggering. Image size and contrast relations differ from conventional cardiac MRI cine sequences requiring new adapted analysis methods. We suggest a novel segmentation approach utilizing contrast invariant polar scanning techniques. It has been tested with 20 datasets of arrhythmia patients. The results do not differ significantly more between automatic and manual segmentations than between observers. This indicates that the presented solution could enable clinical applications of real-time MRI for the examination of arrhythmic cardiac motion in the future.
Quantifying and visualizing variations in sets of images using continuous linear optimal transport
NASA Astrophysics Data System (ADS)
Kolouri, Soheil; Rohde, Gustavo K.
2014-03-01
Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined in the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques such as principal component analysis and linear discriminant analysis in the linearly embedded space and visualize the most prominent modes, as well as the most discriminant modes of variations, in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, which otherwise cannot be done with existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to the normal cells.
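Once each image has been mapped to its linear OT embedding vector, the linear geometric data analysis step reduces to ordinary PCA in the embedded space. A minimal sketch, assuming the per-image transport embeddings have already been computed and flattened into row vectors (the OT embedding itself is not reproduced here):

```python
import numpy as np

def principal_modes(embeddings, n_modes=2):
    """PCA in the linear OT embedding space (generic sketch).

    Assumes `embeddings` is an (n_images, n_features) array of
    already-computed, flattened transport embeddings."""
    X = np.asarray(embeddings, float)
    Xc = X - X.mean(axis=0)                    # center the data set
    _, s, vt = np.linalg.svd(Xc, full_matrices=False)
    variances = (s ** 2) / max(len(X) - 1, 1)  # variance along each mode
    return vt[:n_modes], variances

modes, var = principal_modes([[0, 0], [1, 1], [2, 2]], n_modes=1)
```

Because the embedding is linear, moving along a principal mode and mapping back through the (inverse) embedding yields a visually meaningful sequence of images, which is what makes the modes directly interpretable.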
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Chuan, E-mail: chuan@umich.edu; Chan, Heang-
Purpose: The authors are developing an automated method to identify the best-quality coronary arterial segment from multiple-phase coronary CT angiography (cCTA) acquisitions, which may be used by either interpreting physicians or computer-aided detection systems to optimally and efficiently utilize the diagnostic information available in multiple-phase cCTA for the detection of coronary artery disease. Methods: After initialization with a manually identified seed point, each coronary artery tree is automatically extracted from multiple cCTA phases using our multiscale coronary artery response enhancement and 3D rolling balloon region growing vessel segmentation and tracking method. The coronary artery trees from multiple phases are then aligned by a global registration using an affine transformation with quadratic terms and nonlinear simplex optimization, followed by a local registration using a cubic B-spline method with fast localized optimization. The corresponding coronary arteries among the available phases are identified using a recursive coronary segment matching method. Each of the identified vessel segments is transformed by the curved planar reformation (CPR) method. Four features are extracted from each corresponding segment as quality indicators in the original computed tomography volume and the straightened CPR volume, and each quality indicator is used as a voting classifier for the arterial segment. A weighted voting ensemble (WVE) classifier is designed to combine the votes of the four voting classifiers for each corresponding segment. The segment with the highest WVE vote is then selected as the best-quality segment. In this study, the training and test sets consisted of 6 and 20 cCTA cases, respectively, each with 6 phases, containing a total of 156 cCTA volumes and 312 coronary artery trees.
An observer preference study was also conducted with one expert cardiothoracic radiologist and four nonradiologist readers to visually rank vessel segment quality. The performance of our automated method was evaluated by comparing the automatically identified best-quality segments identified by the computer to those selected by the observers. Results: For the 20 test cases, 254 groups of corresponding vessel segments were identified after multiple phase registration and recursive matching. The automatically identified best-quality (AI-BQ) segments agreed with the radiologist’s top 2 ranked segments in 78.3% of the 254 groups (Cohen’s kappa 0.60), and with the 4 nonradiologist observers in 76.8%, 84.3%, 83.9%, and 85.8% of the 254 groups. In addition, 89.4% of the AI-BQ segments agreed with at least two observers’ top 2 rankings, and 96.5% agreed with at least one observer’s top 2 rankings. In comparison, agreement between the four observers’ top ranked segment and the radiologist’s top 2 ranked segments were 79.9%, 80.7%, 82.3%, and 76.8%, respectively, with kappa values ranging from 0.56 to 0.68. Conclusions: The performance of our automated method for selecting the best-quality coronary segments from a multiple-phase cCTA acquisition was comparable to the selection made by human observers. This study demonstrates the potential usefulness of the automated method in clinical practice, enabling interpreting physicians to fully utilize the best available information in cCTA for diagnosis of coronary disease, without requiring manual search through the multiple phases and minimizing the variability in image phase selection for evaluation of coronary artery segments across the diversity of human readers with variations in expertise.
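The weighted voting ensemble step can be sketched as follows. The one-vote-per-indicator rule and the weight values are illustrative assumptions; the abstract does not give the exact combination scheme.

```python
def wve_select(quality, weights):
    """Weighted voting ensemble sketch for best-quality segment selection.

    quality[i][j] = score of candidate segment j under quality indicator i.
    Each indicator casts one weighted vote for the segment it scores
    highest (an assumed voting rule, for illustration only)."""
    n_seg = len(quality[0])
    votes = [0.0] * n_seg
    for indicator_scores, w in zip(quality, weights):
        best = max(range(n_seg), key=lambda j: indicator_scores[j])
        votes[best] += w                       # indicator votes for its best
    return max(range(n_seg), key=lambda j: votes[j])

# Four indicators scoring two candidate phases of the same segment
choice = wve_select([[1, 2], [3, 1], [0, 5], [2, 9]], [1, 1, 1, 1])
```

With equal weights this reduces to majority voting; training the weights on cases with known observer rankings is one natural way to fit such an ensemble.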
De Winter, Joeri; Wagemans, Johan
2008-01-01
Attneave (1954 Psychological Review 61 183-193) demonstrated that a line drawing of a sleeping cat can still be identified when the smoothly curved contours are replaced by straight-line segments connecting the positive maxima and negative minima of contour curvature. Using the set of line drawings by Snodgrass and Vanderwart (1980 Journal of Experimental Psychology: Human Learning and Memory 6 174-215) we made outline versions (with known curvature values along the contour) that can still be identified and that can be used to test Attneave's demonstration more systematically and more thoroughly. In five experiments (with 444 subjects in total), we tested identifiability of straight-line versions of 184 stimuli with different selections of points to be connected (using 24 to 28 subjects per stimulus per condition). Straight-line versions connecting curvature extrema were easier to identify than those based on inflections (where curvature changes sign), and those connecting salient points (determined by 161 independent subjects) were easier than those connecting midpoints. However, identification varied considerably between objects: some were almost always identifiable and others almost never, regardless of the selection criterion, whereas identifiability depended on the specific shape attributes preserved in the straight-line version of the outline in other objects. Results are discussed in relation to Attneave's original hypotheses as well as in the light of more recent theories on shape perception and object identification.
A two-stage method for microcalcification cluster segmentation in mammography by deformable models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.
Purpose: Segmentation of microcalcification (MC) clusters in x-ray mammography is a difficult task for radiologists. Accurate segmentation is a prerequisite for quantitative image analysis of MC clusters and subsequent feature extraction and classification in computer-aided diagnosis schemes. Methods: In this study, a two-stage semiautomated segmentation method of MC clusters is investigated. The first stage is targeted to accurate and time efficient segmentation of the majority of the particles of a MC cluster, by means of a level set method. The second stage is targeted to shape refinement of selected individual MCs, by means of an active contour model. Both methods are applied in the framework of a rich scale-space representation, provided by the wavelet transform at integer scales. Segmentation reliability of the proposed method in terms of inter- and intraobserver agreements was evaluated in a case sample of 80 MC clusters originating from the digital database for screening mammography, corresponding to 4 morphology types (punctate: 22, fine linear branching: 16, pleomorphic: 18, and amorphous: 24) of MC clusters, assessing radiologists’ segmentations quantitatively by two distance metrics (Hausdorff distance—HDIST{sub cluster}, average of minimum distance—AMINDIST{sub cluster}) and the area overlap measure (AOM{sub cluster}). The effect of the proposed segmentation method on MC cluster characterization accuracy was evaluated in a case sample of 162 pleomorphic MC clusters (72 malignant and 90 benign). Ten MC cluster features, targeted to capture morphologic properties of individual MCs in a cluster (area, major length, perimeter, compactness, and spread), were extracted and a correlation-based feature selection method yielded a feature subset to feed in a support vector machine classifier.
Classification performance of the MC cluster features was estimated by means of the area under receiver operating characteristic curve (Az ± Standard Error) utilizing tenfold cross-validation methodology. A previously developed B-spline active rays segmentation method was also considered for comparison purposes. Results: Interobserver and intraobserver segmentation agreements (median and [25%, 75%] quartile range) were substantial with respect to the distance metrics HDIST{sub cluster} (2.3 [1.8, 2.9] and 2.5 [2.1, 3.2] pixels) and AMINDIST{sub cluster} (0.8 [0.6, 1.0] and 1.0 [0.8, 1.2] pixels), while moderate with respect to AOM{sub cluster} (0.64 [0.55, 0.71] and 0.59 [0.52, 0.66]). The proposed segmentation method outperformed (0.80 ± 0.04) statistically significantly (Mann-Whitney U-test, p < 0.05) the B-spline active rays segmentation method (0.69 ± 0.04), suggesting the significance of the proposed semiautomated method. Conclusions: Results indicate a reliable semiautomated segmentation method for MC clusters offered by deformable models, which could be utilized in MC cluster quantitative image analysis.
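The agreement metrics used in this evaluation, the Hausdorff distance and the area overlap measure, can be computed along these lines. This is a generic sketch rather than the authors' implementation; the AOM is taken here as intersection over union of the two segmentation masks, a common convention.

```python
import numpy as np

def aom(mask_a, mask_b):
    """Area overlap measure between two binary segmentation masks,
    taken here as intersection over union (assumed convention)."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    return (a & b).sum() / float((a | b).sum())

def hausdorff(points_a, points_b):
    """Hausdorff distance between two contours given as 2D point sets
    (straightforward O(n*m) sketch)."""
    A = np.asarray(points_a, float)[:, None, :]
    B = np.asarray(points_b, float)[None, :, :]
    d = np.sqrt(((A - B) ** 2).sum(-1))        # all pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

overlap = aom([[1, 1], [0, 0]], [[1, 0], [0, 0]])
hd = hausdorff([(0, 0), (1, 0)], [(0, 0)])
```

The average-of-minimum-distance metric (AMINDIST) replaces the outer `max` over per-point minima with a mean, making it less sensitive to single outlier points than the Hausdorff distance.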
1993-01-01
elements in the transaction set. A convention is usually developed before any computer EDI systems development work and serves as a design document when...
Passive Fingerprinting Of Computer Network Reconnaissance Tools
2009-09-01
v6 for version 6; MITM: Man-In-The-Middle Attack; MSS: Maximum Segment Size; NOP: No Operation Performed; NPS: Naval Postgraduate School; OS...specific, or man-in-the-middle (MITM) attacks. Depending on the attacker’s position to access the targeted network, the attacker may be able to...identification numbers. Both are ordinarily supposed to be initialized as a random number to make it difficult for an attacker to perform an injection MITM
NASA Ames potential flow analysis (POTFAN) geometry program (POTGEM), version 1
NASA Technical Reports Server (NTRS)
Medan, R. T.; Bullock, R. B.
1976-01-01
This report describes a computer program known as POTGEM, developed as an independent segment of a three-dimensional, linearized, potential flow analysis system. POTGEM generates a panel point description of arbitrary three-dimensional bodies from convenient engineering descriptions consisting of equations and/or tables. Due to its independent, modular nature, the program may also be used to generate corner points for other computer programs.
2015-07-02
defense acquisitions may depend less on the extent to which provisions of the bill make substantive changes to acquisitions...1 Because the House Armed Services Committee’s focus on small business predates the current reform effort, and because small... business provisions also affect only a specific segment of the industrial base, not the overall acquisition system, such sections were excluded from
Psychometric Qualities of the UCLA Loneliness Scale-Version 3 as Applied in a Turkish Culture
ERIC Educational Resources Information Center
Durak, Mithat; Senol-Durak, Emre
2010-01-01
The University of California, Los Angeles, Loneliness Scale-Version 3 (UCLA LS3) is the most frequently used loneliness assessment tool. This study aimed to examine the psychometric properties of the UCLA LS3 by utilizing two separate and independent samples: Turkish university students (n = 481) and elderly (n = 284). The results demonstrate that…
ERIC Educational Resources Information Center
Slappendel, Geerte; Mandy, William; van der Ende, Jan; Verhulst, Frank C.; van der Sijde, Ad; Duvekot, Jorieke; Skuse, David; Greaves-Lord, Kirstin
2016-01-01
The Developmental Diagnostic Dimensional Interview-short version (3Di-sv) provides a brief standardized parental interview for diagnosing autism spectrum disorder (ASD). This study explored its validity, and compatibility with DSM-5 ASD. 3Di-sv classifications showed good sensitivity but low specificity when compared to ADOS-2-confirmed clinical…
ERIC Educational Resources Information Center
Nese, Joseph F. T.; Lai, Cheng-Fei; Anderson, Daniel; Jamgochian, Elisa M.; Kamata, Akihito; Saez, Leilani; Park, Bitnara J.; Alonzo, Julie; Tindal, Gerald
2010-01-01
In this technical report, data are presented on the practical utility, reliability, and validity of the easyCBM[R] mathematics (2009-2010 version) measures for students in grades 3-8 within four districts in two states. Analyses include: minimum acceptable within-year growth; minimum acceptable year-end benchmark performance; internal and…
ERIC Educational Resources Information Center
Gomiero, Tiziano; Croce, Luigi; Grossi, Enzo; Luc, De Vreese; Buscema, Massimo; Mantesso, Ulrico; De Bastiani, Elisa
2011-01-01
The aim of this paper is to present a shortened version of the SIS (support intensity scale) obtained by the application of mathematical models and instruments, adopting special algorithms based on the most recent developments in artificial adaptive systems. All the variables of SIS applied to 1,052 subjects with ID (intellectual disabilities)…
Validity Evidence for the Chinese Version Classroom Appraisal of Resources and Demands (CARD)
ERIC Educational Resources Information Center
Zhang, Juan; Wang, Chuang; Lambert, Richard; Wu, Chenggang; Wen, Hongbo
2017-01-01
The Classroom Appraisal of Resources and Demands (CARD) was designed to evaluate teacher stress based on subjective evaluations of classroom demands and resources. However, the CARD has been mostly utilized in western countries. The aim of the current study was to provide aspects of the validity of responses to a Chinese version of the CARD that…
2007-07-17
receiving system and NRL’s Automated Processing System (APS) (Martinolich 2005). APS Version 3.4 utilized atmospheric correction algorithms prescribed by... Automated Processing System User’s Guide Version 3.4, edited by N.R. Laboratory. Rabalais, N.N., R.E. Turner, and W.J. Wiseman, Jr. 2002. Hypoxia in the
ERIC Educational Resources Information Center
Collado, Anahi; Risco, Cristina M.; Banducci, Anne N.; Chen, Kevin W.; MacPherson, Laura; Lejuez, Carl W.
2017-01-01
Research indicates that White adolescents tend to engage in greater levels of risk behavior relative to Black adolescents. To better understand these differences, the current study examined real-time changes in risk-taking propensity (RTP). The study utilized the Balloon Analogue Risk Task-Youth Version (BART-Y), a well-validated real-time,…
An Approach for Reducing the Error Rate in Automated Lung Segmentation
Gill, Gurman; Beichel, Reinhard R.
2016-01-01
Robust lung segmentation is challenging, especially when tens of thousands of lung CT scans need to be processed, as required by large multi-center studies. The goal of this work was to develop and assess a method for the fusion of segmentation results from two different methods to generate lung segmentations that have a lower failure rate than individual input segmentations. As basis for the fusion approach, lung segmentations generated with a region growing and model-based approach were utilized. The fusion result was generated by comparing input segmentations and selectively combining them using a trained classification system. The method was evaluated on a diverse set of 204 CT scans of normal and diseased lungs. The fusion approach resulted in a Dice coefficient of 0.9855 ± 0.0106 and showed a statistically significant improvement compared to both input segmentation methods. In addition, the failure rate at different segmentation accuracy levels was assessed. For example, when requiring that lung segmentations must have a Dice coefficient of better than 0.97, the fusion approach had a failure rate of 6.13%. In contrast, the failure rate for region growing and model-based methods was 18.14% and 15.69%, respectively. Therefore, the proposed method improves the quality of the lung segmentations, which is important for subsequent quantitative analysis of lungs. Also, to enable a comparison with other methods, results on the LOLA11 challenge test set are reported. PMID:27447897
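The evaluation quantities in this abstract, the Dice coefficient and the failure rate at a given accuracy level, are straightforward to compute. A generic sketch (not the authors' code):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two binary segmentation masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

def failure_rate(dice_scores, threshold=0.97):
    """Fraction of cases whose segmentation falls below the required
    Dice accuracy level (the failure criterion used in the evaluation)."""
    scores = np.asarray(dice_scores, float)
    return float((scores < threshold).mean())

d = dice([1, 1, 0], [1, 0, 0])
fr = failure_rate([0.99, 0.95, 0.98])
```

In the fusion approach, a case counts as a failure only if the selectively combined segmentation (not either input alone) falls below the threshold, which is why the fused failure rate can be much lower than that of either input method.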
Wang, Lizhu; Brenden, Travis; Cao, Yong; Seelbach, Paul
2012-11-01
Identifying appropriate spatial scales is critically important for assessing health, attributing data, and guiding management actions for rivers. We describe a process for identifying a three-level hierarchy of spatial scales for Michigan rivers. Additionally, we conduct a variance decomposition of fish occurrence, abundance, and assemblage metric data to evaluate how much observed variability can be explained by the three spatial scales as a gage of their utility for water resources and fisheries management. The process involved the development of geographic information system programs, statistical models, modification by experienced biologists, and simplification to meet the needs of policy makers. Altogether, 28,889 reaches, 6,198 multiple-reach segments, and 11 segment classes were identified from Michigan river networks. The segment scale explained the greatest amount of variation in fish abundance and occurrence, followed by segment class, and reach. Segment scale also explained the greatest amount of variation in 13 of the 19 analyzed fish assemblage metrics, with segment class explaining the greatest amount of variation in the other six fish metrics. Segments appear to be a useful spatial scale/unit for measuring and synthesizing information for managing rivers and streams. Additionally, segment classes provide a useful typology for summarizing the numerous segments into a few categories. Reaches are the foundation for the identification of segments and segment classes and thus are integral elements of the overall spatial scale hierarchy despite reaches not explaining significant variation in fish assemblage data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milostan, Catharina; Levin, Todd; Muehleisen, Ralph T.
Many electric utilities operate energy efficiency incentive programs that encourage increased dissemination and use of energy-efficient (EE) products in their service territories. The programs can be segmented into three broad categories—downstream incentive programs target product end users, midstream programs target product distributors, and upstream programs target product manufacturers. Traditional downstream programs have had difficulty engaging Small Business/Small Portfolio (SBSP) audiences, and an opportunity exists to expand Commercial Midstream Incentive Programs (CMIPs) to reach this market segment instead.
The E3 combustors: Status and challenges. [energy efficient turbofan engines
NASA Technical Reports Server (NTRS)
Sokolowski, D. E.; Rohde, J. E.
1981-01-01
The design, fabrication, and initial testing of energy efficient engine combustors, developed for the next generation of turbofan engines for commercial aircraft, are described. The combustor designs utilize an annular configuration with two-zone combustion for low emissions, advanced liners for improved durability, and short, curved-wall dump prediffusers for compactness. Advanced cooling techniques and segmented construction characterize the advanced liners. Liner segments are made from castable, turbine-type materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chng, Brenda; Mann, Robert; Department of Physics, University of Waterloo 200 University Avenue West, Waterloo, Ontario N2L 3G1
We construct new solutions of the vacuum Einstein field equations in four dimensions via a solution-generating method utilizing the SL(2,R) symmetry of the reduced Lagrangian. We apply the method to an accelerating version of the Zipoy-Voorhees solution and generate new solutions which we interpret to be the accelerating versions of the Zipoy-Voorhees generalization of the Taub-NUT solution (with Lorentzian signature) and the Zipoy-Voorhees generalization of the Eguchi-Hanson solitons (with Euclidean signature). As an intermediary in the solution-generating process we obtain charged versions of the accelerated Zipoy-Voorhees-like families of solutions. Finally we present the accelerating version of the Taub-NUT solution and discuss its properties.
NRC/AMRMC Resident Research Associateship Program
2015-03-01
antimicrobials (and antiseptics) as well as to evaluate the effectiveness of various biofilm dispersal agents utilizing a number of bacterial species as...combat related wounds. 3 Demonstrated the utility of combinations of biofilm dispersal agents and antimicrobials as an alternate therapy for...alone or in combination with antimicrobials ) to reduce infection in contaminated femoral segmental defects. 5 Characterized host responses of
ERIC Educational Resources Information Center
Walton, Katherine M.; Ingersoll, Brooke R.
2016-01-01
Literature on "Thin Slice" ratings indicates that a number of personality characteristics and behaviors can be accurately predicted by ratings of very short segments (<5 min) of behavior. This study examined the utility of Thin Slice ratings of young children with autism spectrum disorder for predicting developmental skills and…
Feng, Yuan; Dong, Fenglin; Xia, Xiaolong; Hu, Chun-Hong; Fan, Qianmin; Hu, Yanle; Gao, Mingyuan; Mutic, Sasa
2017-07-01
Ultrasound (US) imaging has been widely used in breast tumor diagnosis and treatment intervention. Automatic delineation of the tumor is a crucial first step, especially for computer-aided diagnosis (CAD) and US-guided breast procedures. However, the intrinsic properties of US images such as low contrast and blurry boundaries pose challenges to the automatic segmentation of the breast tumor. Therefore, the purpose of this study is to propose a segmentation algorithm that can contour the breast tumor in US images. To utilize the neighbor information of each pixel, a Hausdorff distance based fuzzy c-means (FCM) method was adopted. The size of the neighbor region was adaptively updated by comparing the mutual information between them. The objective function of the clustering process was updated by a combination of the Euclidean distance and the adaptively calculated Hausdorff distance. Segmentation results were evaluated by comparing with three experts' manual segmentations. The results were also compared with a kernel-induced distance based FCM with spatial constraints, the method without adaptive region selection, and conventional FCM. Results from segmenting 30 patient images showed the adaptive method had a sensitivity, specificity, Jaccard similarity, and Dice coefficient of 93.60 ± 5.33%, 97.83 ± 2.17%, 86.38 ± 5.80%, and 92.58 ± 3.68%, respectively. The region-based metrics of average symmetric surface distance (ASSD), root mean square symmetric distance (RMSD), and maximum symmetric surface distance (MSSD) were 0.03 ± 0.04 mm, 0.04 ± 0.03 mm, and 1.18 ± 1.01 mm, respectively. All the metrics except sensitivity were better than those of the non-adaptive algorithm and the conventional FCM. Only the three region-based metrics were better than those of the kernel-induced distance based FCM with spatial constraints. Inclusion of the pixel neighbor information adaptively in segmenting US images improved the segmentation performance.
The results demonstrate the potential application of the method in breast tumor CAD and other US-guided procedures. © 2017 American Association of Physicists in Medicine.
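The distance term at the heart of the abstract above is the symmetric Hausdorff distance between point sets. A minimal sketch of that distance alone, on 2D point tuples; the adaptive neighbor-region logic and the FCM objective itself are not reproduced here:

```python
def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2D point sets (lists of (x, y) tuples)."""
    def d(p, q):
        # Euclidean distance between two points
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def directed(X, Y):
        # Worst-case nearest-neighbor distance from X to Y
        return max(min(d(x, y) for y in Y) for x in X)

    return max(directed(A, B), directed(B, A))
```

For example, `hausdorff([(0, 0)], [(3, 4)])` is 5.0, the length of the 3-4-5 triangle's hypotenuse.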
Extracting oil palm crown from WorldView-2 satellite image
NASA Astrophysics Data System (ADS)
Korom, A.; Phua, M.-H.; Hirata, Y.; Matsuura, T.
2014-02-01
Oil palm (OP) is the most important commercial crop in Malaysia. Estimating crowns is important for biomass estimation from high resolution satellite (HRS) images. This study examined extraction of individual OP crowns from a WorldView-2 image using a twofold algorithm, i.e., masking of non-OP pixels and detection of individual OP crowns based on watershed segmentation of greyscale images. The study site was located in Beluran district, central Sabah, where mature OPs with ages ranging from 15 to 25 years have been planted. We examined two compound vegetation indices, (NDVI+1)*DVI and NDII, for masking non-OP crown areas. Using kappa statistics, an optimal threshold value was set with the highest accuracy, 90.6%, for differentiating OP crown areas from non-OP areas. After the watershed segmentation of OP crown areas with additional post-processing, about 77% of individual OP crowns were successfully detected in comparison to manual delineation. The shape and location of each crown segment were then assessed based on a modified version of the goodness measures of Möller et al., which yielded 0.3, indicating acceptable CSGM (combined segmentation goodness measures) agreement between the automated and manually delineated crowns (the perfect case is 1).
Tam, Roger C; Traboulsee, Anthony; Riddehough, Andrew; Li, David K B
2012-01-01
The change in T1-hypointense lesion ("black hole") volume is an important marker of pathological progression in multiple sclerosis (MS). Black hole boundaries often have low contrast and are difficult to determine accurately and most (semi-)automated segmentation methods first compute the T2-hyperintense lesions, which are a superset of the black holes and are typically more distinct, to form a search space for the T1w lesions. Two main potential sources of measurement noise in longitudinal black hole volume computation are partial volume and variability in the T2w lesion segmentation. A paired analysis approach is proposed herein that uses registration to equalize partial volume and lesion mask processing to combine T2w lesion segmentations across time. The scans of 247 MS patients are used to compare a selected black hole computation method with an enhanced version incorporating paired analysis, using rank correlation to a clinical variable (MS functional composite) as the primary outcome measure. The comparison is done at nine different levels of intensity as a previous study suggests that darker black holes may yield stronger correlations. The results demonstrate that paired analysis can strongly improve longitudinal correlation (from -0.148 to -0.303 in this sample) and may produce segmentations that are more sensitive to clinically relevant changes.
Accurate airway segmentation based on intensity structure analysis and graph-cut
NASA Astrophysics Data System (ADS)
Meng, Qier; Kitsaka, Takayuki; Nimura, Yukitaka; Oda, Masahiro; Mori, Kensaku
2016-03-01
This paper presents a novel airway segmentation method based on intensity structure analysis and graph-cut. Airway segmentation is an important step in analyzing chest CT volumes for computerized lung cancer detection, emphysema diagnosis, asthma diagnosis, and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3-D airway tree structure from a CT volume is quite challenging. Several researchers have proposed automated algorithms based on region growing and machine learning techniques; however, these methods failed to detect the peripheral bronchial branches and caused a large amount of leakage. This paper presents a novel approach that permits more accurate extraction of the complex bronchial airway region. Our method is composed of three steps. First, Hessian analysis is utilized to enhance line-like structures in CT volumes, and a multiscale cavity-enhancement filter is then employed to detect cavity-like structures in the enhanced result. In the second step, we utilize a support vector machine (SVM) to construct a classifier for removing the false-positive (FP) regions generated. Finally, the graph-cut algorithm is utilized to connect all of the candidate voxels to form an integrated airway tree. We applied this method to sixteen 3D chest CT volumes. The results showed that the branch detection rate of this method can reach about 77.7% without leaking into the lung parenchyma areas.
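The first step, Hessian-based enhancement of line-like structures, can be illustrated with a toy 2D stand-in: for a bright line, the Hessian has one eigenvalue near zero (along the line) and one strongly negative eigenvalue (across it). The response formula below is a simplified assumption for illustration, not the authors' actual 3D filter:

```python
import numpy as np

def line_enhance(img):
    """Toy 2D Hessian line filter: respond where the smaller eigenvalue is
    strongly negative (curvature across a bright ridge) and the other is small."""
    gy, gx = np.gradient(img.astype(float))       # first derivatives
    gyy, gyx = np.gradient(gy)                    # second derivatives
    gxy, gxx = np.gradient(gx)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            H = np.array([[gxx[i, j], gxy[i, j]],
                          [gyx[i, j], gyy[i, j]]])
            l1, l2 = np.linalg.eigvalsh(H)        # ascending: l1 <= l2
            if l1 < 0:                            # ridge-like curvature present
                out[i, j] = -l1 - abs(l2)         # strong across-line, weak along-line
    return np.clip(out, 0, None)
```

On an image containing a single bright horizontal line, the response is high on the line and zero on the flat background.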
The report is one of 11 in a series describing the initial development of the Advanced Utility Simulation Model (AUSM) by the Universities Research Group on Energy (URGE) and its continued development by the Science Applications International Corporation (SAIC) research team. The...
ADVANCED UTILITY SIMULATION MODEL DOCUMENTATION OF SYSTEM DESIGN STATE LEVEL MODEL (VERSION 1.0)
ChEMBL web services: streamlining access to drug discovery data and utilities
Davies, Mark; Nowotka, Michał; Papadatos, George; Dedman, Nathan; Gaulton, Anna; Atkinson, Francis; Bellis, Louisa; Overington, John P.
2015-01-01
ChEMBL is now a well-established resource in the fields of drug discovery and medicinal chemistry research. The ChEMBL database curates and stores standardized bioactivity, molecule, target and drug data extracted from multiple sources, including the primary medicinal chemistry literature. Programmatic access to ChEMBL data has been improved by a recent update to the ChEMBL web services (version 2.0.x, https://www.ebi.ac.uk/chembl/api/data/docs), which exposes significantly more data from the underlying database and introduces new functionality. To complement the data-focused services, a utility service (version 1.0.x, https://www.ebi.ac.uk/chembl/api/utils/docs), which provides RESTful access to commonly used cheminformatics methods, has also been concurrently developed. The ChEMBL web services can be used together or independently to build applications and data processing workflows relevant to drug discovery and chemical biology. PMID:25883136
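A sketch of how a client might address the two services described above. The base URLs come from the abstract itself; the `molecule` resource name and `.json` suffix follow the ChEMBL web-services conventions, and the actual HTTP fetch (e.g. via `urllib.request`) is omitted here:

```python
# Base endpoints as given in the abstract (data services v2.0.x, utils v1.0.x).
BASE_DATA = "https://www.ebi.ac.uk/chembl/api/data"
BASE_UTILS = "https://www.ebi.ac.uk/chembl/api/utils"

def molecule_url(chembl_id, fmt="json"):
    """Build the data-service URL for a single molecule record."""
    return f"{BASE_DATA}/molecule/{chembl_id}.{fmt}"

# Example: record for aspirin (CHEMBL25).
print(molecule_url("CHEMBL25"))
# https://www.ebi.ac.uk/chembl/api/data/molecule/CHEMBL25.json
```

The two services can be combined in one workflow: fetch molecule records from the data service, then post their structures to the utils service for cheminformatics processing.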
NASA Astrophysics Data System (ADS)
Yin, Y.; Sonka, M.
2010-03-01
A novel method is presented for definition of search lines in a variety of surface segmentation approaches. The method is inspired by properties of electric field direction lines and is applicable to general-purpose n-D shape-based image segmentation tasks. Its utility is demonstrated in graph construction and optimal segmentation of multiple mutually interacting objects. The properties of the electric field-based graph construction guarantee that inter-object graph connecting lines are non-intersecting and inherently covering the entire object-interaction space. When applied to inter-object cross-surface mapping, our approach generates one-to-one and all-to-all vertex correspondent pairs between the regions of mutual interaction. We demonstrate the benefits of the electric field approach in several examples ranging from relatively simple single-surface segmentation to complex multi-object multi-surface segmentation of femur-tibia cartilage. The performance of our approach is demonstrated in 60 MR images from the Osteoarthritis Initiative (OAI), in which our approach achieved a very good performance as judged by surface positioning errors (average of 0.29 and 0.59 mm for signed and unsigned cartilage positioning errors, respectively).
Shi, Feng; Yap, Pew-Thian; Fan, Yong; Cheng, Jie-Zhi; Wald, Lawrence L.; Gerig, Guido; Lin, Weili; Shen, Dinggang
2010-01-01
The acquisition of high quality MR images of neonatal brains is largely hampered by their characteristically small head size and low tissue contrast. As a result, subsequent image processing and analysis, especially for brain tissue segmentation, are often hindered. To overcome this problem, a dedicated phased array neonatal head coil is utilized to improve MR image quality by effectively combining images obtained from 8 coil elements without lengthening data acquisition time. In addition, a subject-specific atlas based tissue segmentation algorithm is specifically developed for the delineation of fine structures in the acquired neonatal brain MR images. The proposed tissue segmentation method first enhances the sheet-like cortical gray matter (GM) structures in neonatal images with a Hessian filter for generation of a cortical GM prior. Then, the prior is combined with our neonatal population atlas to form a cortical enhanced hybrid atlas, which we refer to as the subject-specific atlas. Various experiments are conducted to compare the proposed method with manual segmentation results, as well as with two additional population atlas based segmentation methods. Results show that the proposed method is capable of segmenting the neonatal brain with the highest accuracy, compared to the other two methods. PMID:20862268
A superpixel-based framework for automatic tumor segmentation on breast DCE-MRI
NASA Astrophysics Data System (ADS)
Yu, Ning; Wu, Jia; Weinstein, Susan P.; Gaonkar, Bilwaj; Keller, Brad M.; Ashraf, Ahmed B.; Jiang, YunQing; Davatzikos, Christos; Conant, Emily F.; Kontos, Despina
2015-03-01
Accurate and efficient automated tumor segmentation in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is highly desirable for computer-aided tumor diagnosis. We propose a novel automatic segmentation framework which incorporates mean-shift smoothing, superpixel-wise classification, pixel-wise graph-cuts partitioning, and morphological refinement. A set of 15 breast DCE-MR images, obtained from the American College of Radiology Imaging Network (ACRIN) 6657 I-SPY trial, were manually segmented to generate tumor masks (as ground truth) and breast masks (as regions of interest). Four state-of-the-art segmentation approaches based on diverse models were also utilized for comparison. Based on five standard evaluation metrics for segmentation, the proposed framework consistently outperformed all other approaches. The performance of the proposed framework was: 1) 0.83 for Dice similarity coefficient, 2) 0.96 for pixel-wise accuracy, 3) 0.72 for VOC score, 4) 0.79 mm for mean absolute difference, and 5) 11.71 mm for maximum Hausdorff distance, which surpassed the second best method (i.e., adaptive geodesic transformation), a semi-automatic algorithm depending on precise initialization. Our results suggest promising potential applications of our segmentation framework in assisting analysis of breast carcinomas.
Significant Advances in the AIRS Science Team Version-6 Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena; Molnar, Gyula
2012-01-01
AIRS/AMSU is the state-of-the-art infrared and microwave atmospheric sounding system flying aboard EOS Aqua. The Goddard DISC has analyzed AIRS/AMSU observations, covering the period September 2002 until the present, using the AIRS Science Team Version-5 retrieval algorithm. These products have been used by many researchers to make significant advances in both climate and weather applications. The AIRS Science Team Version-6 Retrieval, which will become operational in mid-2012, contains many significant theoretical and practical improvements compared to Version-5 which should further enhance the utility of AIRS products for both climate and weather applications. In particular, major changes have been made with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the retrieval procedure; 3) compute Outgoing Longwave Radiation; and 4) determine Quality Control. This paper will describe these advances found in the AIRS Version-6 retrieval algorithm and demonstrate the improvement of AIRS Version-6 products compared to those obtained using Version-5.
Nucleus detection using gradient orientation information and linear least squares regression
NASA Astrophysics Data System (ADS)
Kwak, Jin Tae; Hewitt, Stephen M.; Xu, Sheng; Pinto, Peter A.; Wood, Bradford J.
2015-03-01
Computerized histopathology image analysis enables an objective, efficient, and quantitative assessment of digitized histopathology images. Such analysis often requires an accurate and efficient detection and segmentation of histological structures such as glands, cells and nuclei. The segmentation is used to characterize tissue specimens and to determine the disease status or outcomes. The segmentation of nuclei, in particular, is challenging due to the overlapping or clumped nuclei. Here, we propose a nuclei seed detection method for the individual and overlapping nuclei that utilizes the gradient orientation or direction information. The initial nuclei segmentation is provided by a multiview boosting approach. The angle of the gradient orientation is computed and traced for the nuclear boundaries. Taking the first derivative of the angle of the gradient orientation, high concavity points (junctions) are discovered. False junctions are found and removed by adopting a greedy search scheme with the goodness-of-fit statistic in a linear least squares sense. Then, the junctions determine boundary segments. Partial boundary segments belonging to the same nucleus are identified and combined by examining the overlapping area between them. Using the final set of the boundary segments, we generate the list of seeds in tissue images. The method achieved an overall precision of 0.89 and a recall of 0.88 in comparison to the manual segmentation.
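The concavity-detection step above (tracing an orientation angle and taking its first derivative to find junctions) can be sketched in simplified form. This stand-in works on an explicit boundary polygon and uses the tangent direction instead of the image gradient field, so it is an illustration of the idea, not the authors' pipeline; the threshold value is an assumption:

```python
import math

def concave_junctions(boundary, thresh=0.5):
    """Flag high-concavity points on a closed counter-clockwise boundary
    (list of (x, y) vertices) via the first difference of the tangent angle."""
    n = len(boundary)
    # Tangent-direction angle of each boundary segment.
    angles = []
    for i in range(n):
        x0, y0 = boundary[i]
        x1, y1 = boundary[(i + 1) % n]
        angles.append(math.atan2(y1 - y0, x1 - x0))
    junctions = []
    for i in range(n):
        d = angles[i] - angles[i - 1]
        d = math.atan2(math.sin(d), math.cos(d))  # wrap to (-pi, pi]
        if d < -thresh:  # sharp right turn = concavity on a CCW boundary
            junctions.append(i)
    return junctions
```

A convex shape (e.g. a square traversed counter-clockwise) yields no junctions; an L-shaped boundary yields one at its reflex corner, which is where two overlapping nuclei would typically meet.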
Hatipoglu, Nuh; Bilgin, Gokhan
2017-10-01
In many computerized methods for cell detection, segmentation, and classification in digital histopathology that have recently emerged, the task of cell segmentation remains a chief problem for image processing in designing computer-aided diagnosis (CAD) systems. In research and diagnostic studies on cancer, pathologists can use CAD systems as second readers to analyze high-resolution histopathological images. Since cell detection and segmentation are critical for cancer grade assessments, cellular and extracellular structures should primarily be extracted from histopathological images. In response, we sought to identify a useful cell segmentation approach with histopathological images that uses not only prominent deep learning algorithms (i.e., convolutional neural networks, stacked autoencoders, and deep belief networks), but also spatial relationships, information of which is critical for achieving better cell segmentation results. To that end, we collected cellular and extracellular samples from histopathological images by windowing in small patches with various sizes. In experiments, the segmentation accuracies of the methods used improved as the window sizes increased due to the addition of local spatial and contextual information. Once we compared the effects of training sample size and influence of window size, results revealed that the deep learning algorithms, especially convolutional neural networks and partly stacked autoencoders, performed better than conventional methods in cell segmentation.
Image segmentation with a novel regularized composite shape prior based on surrogate study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu
Purpose: Incorporating training into image segmentation is a good approach to achieve additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement of image segmentation accuracy, when compared to multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves superior segmentation performance than typical benchmark schemes.
Liao, Xiaolei; Zhao, Juanjuan; Jiao, Cheng; Lei, Lei; Qiang, Yan; Cui, Qiang
2016-01-01
Background Lung parenchyma segmentation is often performed as an important pre-processing step in the computer-aided diagnosis of lung nodules based on CT image sequences. However, existing lung parenchyma image segmentation methods cannot fully segment all lung parenchyma images and have a slow processing speed, particularly for images in the top and bottom of the lung and the images that contain lung nodules. Method Our proposed method first uses the position of the lung parenchyma image features to obtain lung parenchyma ROI image sequences. A gradient and sequential linear iterative clustering algorithm (GSLIC) for sequence image segmentation is then proposed to segment the ROI image sequences and obtain superpixel samples. The SGNF, which is optimized by a genetic algorithm (GA), is then utilized for superpixel clustering. Finally, the grey and geometric features of the superpixel samples are used to identify and segment all of the lung parenchyma image sequences. Results Our proposed method achieves higher segmentation precision and greater accuracy in less time. It has an average processing time of 42.21 seconds for each dataset and an average volume pixel overlap ratio of 92.22 ± 4.02% for four types of lung parenchyma image sequences. PMID:27532214
Mammogram segmentation using maximal cell strength updation in cellular automata.
Anitha, J; Peter, J Dinesh
2015-08-01
Breast cancer is the most frequently diagnosed type of cancer among women. Mammography is one of the most effective tools for early detection of breast cancer. Various computer-aided systems have been introduced to detect breast cancer from mammogram images. In a computer-aided diagnosis system, detection and segmentation of breast masses from the background tissues is an important issue. In this paper, an automatic segmentation method is proposed to identify and segment the suspicious mass regions of mammograms using a modified transition rule named maximal cell strength updation in cellular automata (CA). In coarse-level segmentation, the proposed method performs adaptive global thresholding based on histogram peak analysis to obtain the rough region of interest. An automatic seed point selection is proposed using the gray-level co-occurrence matrix-based sum average feature in the coarse segmented image. Finally, the method utilizes CA with the identified initial seed point and the modified transition rule to segment the mass region. The proposed approach is evaluated over a dataset of 70 mammograms with masses from the mini-MIAS database. Experimental results show that the proposed approach yields promising results for segmenting the mass region in mammograms, with a sensitivity of 92.25% and accuracy of 93.48%.
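The cellular-automaton growth step can be sketched with a generic grow-cut-style transition rule: each cell holds a label and a strength, and a neighbor conquers a cell when its intensity-attenuated strength exceeds the cell's own. This is an illustrative stand-in, not the paper's specific "maximal cell strength updation" rule or its GLCM-based seed selection:

```python
def ca_segment(img, seeds, iters=50):
    """Grow-cut-style CA on a 2D intensity grid (nested lists).
    seeds maps (row, col) -> label; returns a full label grid."""
    h, w = len(img), len(img[0])
    max_i = max(max(row) for row in img) or 1
    label = [[0] * w for _ in range(h)]
    strength = [[0.0] * w for _ in range(h)]
    for (y, x), lab in seeds.items():
        label[y][x], strength[y][x] = lab, 1.0
    for _ in range(iters):
        changed = False
        for y in range(h):
            for x in range(w):
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # Attenuate by intensity difference: similar pixels attack harder.
                        g = 1.0 - abs(img[ny][nx] - img[y][x]) / max_i
                        attack = g * strength[ny][nx]
                        if attack > strength[y][x]:
                            label[y][x] = label[ny][nx]
                            strength[y][x] = attack
                            changed = True
        if not changed:
            break
    return label
```

With one seed inside a bright "mass" and one in the dark background, the two labels grow until they meet at the intensity boundary.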
A Modular Hierarchical Approach to 3D Electron Microscopy Image Segmentation
Liu, Ting; Jones, Cory; Seyedhosseini, Mojtaba; Tasdizen, Tolga
2014-01-01
The study of neural circuit reconstruction, i.e., connectomics, is a challenging problem in neuroscience. Automated and semi-automated electron microscopy (EM) image analysis can be tremendously helpful for connectomics research. In this paper, we propose a fully automatic approach for intra-section segmentation and inter-section reconstruction of neurons using EM images. A hierarchical merge tree structure is built to represent multiple region hypotheses and supervised classification techniques are used to evaluate their potentials, based on which we resolve the merge tree with consistency constraints to acquire final intra-section segmentation. Then, we use a supervised learning based linking procedure for the inter-section neuron reconstruction. Also, we develop a semi-automatic method that utilizes the intermediate outputs of our automatic algorithm and achieves intra-segmentation with minimal user intervention. The experimental results show that our automatic method can achieve close-to-human intra-segmentation accuracy and state-of-the-art inter-section reconstruction accuracy. We also show that our semi-automatic method can further improve the intra-segmentation accuracy. PMID:24491638
A novel multiphoton microscopy images segmentation method based on superpixel and watershed.
Wu, Weilin; Lin, Jinyong; Wang, Shu; Li, Yan; Liu, Mingyu; Liu, Gaoqiang; Cai, Jianyong; Chen, Guannan; Chen, Rong
2017-04-01
Multiphoton microscopy (MPM) imaging based on two-photon excited fluorescence (TPEF) and second harmonic generation (SHG) shows excellent performance for biological imaging. The automatic segmentation of cellular architectural properties for biomedical diagnosis based on MPM images is still a challenging issue. A novel multiphoton microscopy image segmentation method based on superpixels and watershed (MSW) is presented here to provide good segmentation results for MPM images. The proposed method uses SLIC superpixels instead of pixels to analyze MPM images for the first time. The superpixel segmentation, based on a new distance metric combining spatial, CIE Lab color space and phase congruency features, divides the images into patches that keep the details of the cell boundaries. Then the superpixels are used to reconstruct new images by taking the average value of each superpixel as the image pixel intensity level. Finally, marker-controlled watershed is utilized to segment the cell boundaries from the reconstructed images. Experimental results show that cellular boundaries can be extracted from MPM images by MSW with higher accuracy and robustness. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
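The reconstruction step described above (replacing every pixel with the mean intensity of its superpixel, before applying the watershed) is simple to sketch; the function name is illustrative:

```python
def superpixel_mean_image(img, labels):
    """Rebuild an image where each pixel takes the mean intensity of its
    superpixel. img and labels are same-shaped nested lists; labels holds
    the superpixel id of each pixel."""
    sums, counts = {}, {}
    h, w = len(img), len(img[0])
    # Accumulate per-superpixel intensity sums and pixel counts.
    for y in range(h):
        for x in range(w):
            sp = labels[y][x]
            sums[sp] = sums.get(sp, 0.0) + img[y][x]
            counts[sp] = counts.get(sp, 0) + 1
    # Replace each pixel by its superpixel's mean.
    return [[sums[labels[y][x]] / counts[labels[y][x]] for x in range(w)]
            for y in range(h)]
```

Flattening each superpixel to its mean removes intra-region texture, so the subsequent marker-controlled watershed sees cleaner basins at cell boundaries.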
The importance of having an appropriate relational data segmentation in ATLAS
NASA Astrophysics Data System (ADS)
Dimitrov, G.
2015-12-01
In this paper we describe specific technical solutions put in place in various database applications of the ATLAS experiment at LHC where we make use of several partitioning techniques available in Oracle 11g. With the broadly used range partitioning and its option of automatic interval partitioning we add our own logic in PL/SQL procedures and scheduler jobs to sustain sliding data windows in order to enforce various data retention policies. We also make use of the new Oracle 11g reference partitioning in the Nightly Build System to achieve uniform data segmentation. However, the most challenging issue was to segment the data of the new ATLAS Distributed Data Management system (Rucio), which resulted in tens of thousands of list-type partitions and sub-partitions. Partition and sub-partition management, index strategy, statistics gathering and query execution plan stability are important factors when choosing an appropriate physical model for the application data management. The knowledge and analysis accumulated so far on the new Oracle 12c features that could be beneficial will be shared with the audience.
Isaksen, Jonas; Leber, Remo; Schmid, Ramun; Schmid, Hans-Jakob; Generali, Gianluca; Abächerli, Roger
2017-02-01
The first-order high-pass filter (AC coupling) has previously been shown to affect the ECG for higher cut-off frequencies. We seek to find a systematic deviation in computer measurements of the electrocardiogram when AC coupling with a 0.05 Hz first-order high-pass filter is used. The standard 12-lead electrocardiograms from 1248 patients and the automated measurements of their DC- and AC-coupled versions were used. We expect a large unipolar QRS-complex to produce a deviation in the opposite direction in the ST-segment. We found a strong correlation between the QRS integral and the offset throughout the ST-segment. The coefficient for J amplitude deviation was found to be -0.277 µV/(µV⋅s). Potentially dangerous alterations to the diagnostically important ST-segment were found. Medical professionals and software developers of electrocardiogram interpretation programs should be aware of such high-pass filter effects, since they could be misinterpreted as pathophysiology, or some pathophysiology could be masked by these effects. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
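The mechanism described above can be reproduced with the standard difference-equation form of a first-order RC high-pass filter. A minimal sketch (the sampling rate and signal shape are illustrative, not from the study): a rectangular deflection standing in for a QRS complex leaves a small opposite-sign tail after it ends, which is exactly the kind of ST-segment offset the abstract reports, while a constant baseline offset is blocked entirely:

```python
import math

def highpass(x, fs, fc=0.05):
    """First-order high-pass (AC coupling) at cut-off fc Hz, sample rate fs Hz:
    y[n] = a * (y[n-1] + x[n] - x[n-1]), with a = RC / (RC + 1/fs)."""
    rc = 1.0 / (2.0 * math.pi * fc)
    a = rc / (rc + 1.0 / fs)
    y = [0.0] * len(x)
    for n in range(1, len(x)):
        y[n] = a * (y[n - 1] + x[n] - x[n - 1])
    return y

# Rectangular "QRS-like" pulse at 500 Hz sampling.
x = [0.0] * 10 + [1.0] * 5 + [0.0] * 985
y = highpass(x, fs=500)
# During the pulse the output follows it, but after the pulse the output
# undershoots below zero: an opposite-direction offset on the "ST segment".
```

With fc = 0.05 Hz the undershoot is tiny per beat, but it scales with the QRS integral, consistent with the correlation reported above.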
An, Gao; Hong, Li; Zhou, Xiao-Bing; Yang, Qiong; Li, Mei-Qing; Tang, Xiang-Yang
2017-03-01
We investigated and compared the functionality of two 3D visualization software packages provided by a CT vendor and a third-party vendor, respectively. Using surgical anatomical measurement as the baseline, we evaluated the accuracy of 3D visualization and verified their utility in computer-aided anatomical analysis. The study cohort consisted of 50 adult cadavers fixed with the classical formaldehyde method. The computer-aided anatomical analysis was based on CT images (in DICOM format) acquired by helical scan with contrast enhancement, using a CT vendor-provided 3D visualization workstation (Syngo) and a third-party 3D visualization software package (Mimics) installed on a PC. Automated and semi-automated segmentations were utilized in the 3D visualization workstation and software, respectively. The functionality and efficiency of the automated and semi-automated segmentation methods were compared. Using surgical anatomical measurement as a baseline, the accuracy of 3D visualization based on automated and semi-automated segmentations was quantitatively compared. In semi-automated segmentation, the Mimics 3D visualization software outperformed the Syngo 3D visualization workstation. No significant difference was observed between anatomical data measured with the Syngo 3D visualization workstation and the Mimics 3D visualization software (P>0.05). Both the Syngo 3D visualization workstation provided by a CT vendor and the Mimics 3D visualization software from a third-party vendor possessed the functionality, efficiency, and accuracy needed for computer-aided anatomical analysis. Copyright © 2016 Elsevier GmbH. All rights reserved.
Morelli, John; Porter, David; Ai, Fei; Gerdes, Clint; Saettele, Megan; Feiweier, Thorsten; Padua, Abraham; Dix, James; Marra, Michael; Rangaswamy, Rajesh; Runge, Val
2013-04-01
Diffusion-weighted imaging (DWI) in magnetic resonance imaging (MRI) is most commonly performed utilizing a single-shot echo-planar imaging technique (ss-EPI). Susceptibility artifact and image blur are severe when this sequence is utilized at 3 T. To evaluate a readout-segmented approach to DWI in comparison with single-shot echo-planar imaging for brain MRI, eleven healthy volunteers and 14 patients with acute and early subacute infarctions underwent DWI examinations at 1.5 and 3 T with ss-EPI and readout-segmented echo-planar (rs-EPI) DWI at equal nominal spatial resolutions. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) calculations were made, and two blinded readers ranked the scans in terms of high-signal-intensity bulk susceptibility artifact, spatial distortions, image blur, overall preference, and motion artifact. SNR and CNR were greatest with rs-EPI (SNR 8.1 ± 0.2 vs. 6.0 ± 0.2; P < 10^-4 at 3 T). Spatial distortions were greater with single-shot (0.23 ± 0.03 at 3 T; P < 0.001) than with rs-EPI (0.12 ± 0.02 at 3 T). Combined with blur and artifact reduction, this resulted in a qualitative preference for the readout-segmented scans overall. Substantial image quality improvements are possible with readout-segmented vs. single-shot EPI - the current clinical standard for DWI - regardless of field strength (1.5 or 3 T). This results in improved image quality secondary to greater real spatial resolution and reduced artifacts from susceptibility in MR imaging of the brain.
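The SNR and CNR figures of merit used in the comparison follow the usual region-of-interest definitions; a generic sketch (the region values below are made-up numbers, not data from the study):

```python
def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean signal intensity over the sample standard
    deviation of a background (noise) region."""
    m = sum(signal_roi) / len(signal_roi)
    mu = sum(noise_roi) / len(noise_roi)
    var = sum((v - mu) ** 2 for v in noise_roi) / (len(noise_roi) - 1)
    return m / var ** 0.5

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio: absolute difference of two tissue means over
    the background noise standard deviation."""
    ma = sum(roi_a) / len(roi_a)
    mb = sum(roi_b) / len(roi_b)
    mu = sum(noise_roi) / len(noise_roi)
    var = sum((v - mu) ** 2 for v in noise_roi) / (len(noise_roi) - 1)
    return abs(ma - mb) / var ** 0.5
```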
Refinement of ground reference data with segmented image data
NASA Technical Reports Server (NTRS)
Robinson, Jon W.; Tilton, James C.
1991-01-01
One of the ways to determine ground reference data (GRD) for satellite remote sensing data is to photo-interpret low-altitude aerial photographs, digitize the cover types on a digitizing tablet, and register them to 7.5-minute U.S.G.S. maps (that were themselves digitized). The resulting GRD can be registered to the satellite image or vice versa. Unfortunately, there are many opportunities for error when using a digitizing tablet, and the resolution of the edges of the GRD depends on the spacing of the points selected on the tablet. One consequence is that when the GRD is overlaid on the image, errors and missed detail become evident. An approach is discussed for correcting these errors and adding detail to the GRD through a highly interactive, visually oriented process. This process involves overlaid visual displays of the satellite image data, the GRD, and a segmentation of the satellite image data. Several prototype programs were implemented which provide a means of taking a segmented image and using the edges from the reference data to mask out those segment edges that are beyond a certain distance from the reference data edges. Then, using the reference data edges as a guide, those segment edges that remain and that are judged not to be image versions of the reference edges are manually marked and removed. The prototype programs that were developed and the algorithmic refinements that facilitate execution of this task are described.
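The masking step — discarding segment edges farther than a threshold from any reference edge — can be sketched as a brute-force distance test (a real implementation would more likely use a distance transform; pixel coordinates below are illustrative):

```python
def mask_segment_edges(segment_edges, reference_edges, max_dist):
    """Keep only segment-edge pixels lying within max_dist of at least one
    reference-edge pixel; everything farther away is masked out."""
    kept = []
    for (x, y) in segment_edges:
        if any((x - rx) ** 2 + (y - ry) ** 2 <= max_dist ** 2
               for (rx, ry) in reference_edges):
            kept.append((x, y))
    return kept
```

The surviving edges are then reviewed interactively, as the abstract describes, to mark and remove those judged not to correspond to reference edges.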
NASA Astrophysics Data System (ADS)
Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.
2016-06-01
This paper discusses the automatic coregistration and fusion of 3D point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images were taken from an RPAS platform with a predefined flight path, where every RGB image has a corresponding TIR image taken from the same position and with the same orientation, within the accuracy of the RPAS system and the inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image; this method assumes a mainly planar scene to avoid mismatches; (ii) coregistration of the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted to the segmented dense 3D point cloud; (iv) coregistration of the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.
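Strategy (iv) relies on ICP; a minimal point-to-point ICP sketch with brute-force nearest-neighbour matching (not the adapted plane-based variant, and with a toy cube as data) is:

```python
import itertools
import numpy as np

def icp(src, dst, iters=20):
    """Rigid point-to-point ICP: alternately match each point of `src` to its
    nearest neighbour in `dst`, then solve for the best rotation/translation
    with the SVD-based Kabsch step.  Returns R, t with dst ~ R @ src + t."""
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every current point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        match = dst[d2.argmin(axis=1)]
        mc, mm = cur.mean(0), match.mean(0)
        H = (cur - mc).T @ (match - mm)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, float(np.sign(np.linalg.det(Vt.T @ U.T)))])
        dR = Vt.T @ S @ U.T          # proper rotation (det = +1)
        dt = mm - dR @ mc
        cur = cur @ dR.T + dt
        R, t = dR @ R, dR @ t + dt   # accumulate the total transform
    return R, t

# demo: recover a known small rigid transform of a unit cube's corners
src = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
theta = 0.1
c, s = np.cos(theta), np.sin(theta)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.2, -0.1, 0.05])
dst = src @ R_true.T + t_true
R_est, t_est = icp(src, dst)
```

With well-separated points and a small initial misalignment, as assumed here, the nearest-neighbour correspondences are correct and the Kabsch step recovers the transform essentially in one iteration.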
Machine learning in a graph framework for subcortical segmentation
NASA Astrophysics Data System (ADS)
Guo, Zhihui; Kashyap, Satyananda; Sonka, Milan; Oguz, Ipek
2017-02-01
Automated and reliable segmentation of subcortical structures from human brain magnetic resonance images is of great importance for volumetric and shape analyses in quantitative neuroimaging studies. However, poor boundary contrast and the variable shape of these structures make automated segmentation a challenging task. We propose a 3D graph-based machine learning method, called LOGISMOS-RF, to segment the caudate and the putamen from brain MRI scans in a robust and accurate way. An atlas-based tissue classification and bias-field correction method is applied to the images to generate an initial segmentation for each structure. Then a 3D graph framework is utilized to construct a geometric graph for each initial segmentation. A locally trained random forest classifier is used to assign a cost to each graph node. The max-flow algorithm is applied to solve the segmentation problem. Evaluation was performed on a dataset of T1-weighted MRIs of 62 subjects, with 42 images used for training and 20 images for testing. For comparison, FreeSurfer, FSL and BRAINSCut approaches were also evaluated using the same dataset. Dice overlap coefficients and surface-to-surface distances between the automated segmentation and expert manual segmentations indicate that the results of our method are statistically significantly more accurate than the three other methods, for both the caudate (Dice: 0.89 +/- 0.03) and the putamen (0.89 +/- 0.03).
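The graph step assigns each node a unary cost (here, from the random forest) plus a penalty for label changes between neighbours, and max-flow finds the labelling of minimum total cost. For a toy 1D chain the same energy can be minimised exhaustively, which makes the objective explicit (all costs below are made up):

```python
from itertools import product

def min_energy_labeling(unary, pairwise_weight):
    """Exhaustively minimise E(L) = sum_i unary[i][L_i]
    + pairwise_weight * (number of label changes between neighbours).
    Max-flow/min-cut computes the same optimum efficiently on large graphs."""
    n = len(unary)
    best, best_e = None, float("inf")
    for labels in product((0, 1), repeat=n):
        e = sum(unary[i][labels[i]] for i in range(n))
        e += pairwise_weight * sum(labels[i] != labels[i + 1]
                                   for i in range(n - 1))
        if e < best_e:
            best, best_e = labels, e
    return best, best_e
```

With unary costs [(0, 4), (3, 1), (0, 4)] the middle node prefers label 1, but a smoothness weight of 2 makes the all-zero labelling cheaper — exactly the regularising effect the graph formulation provides.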
Motion Estimation System Utilizing Point Cloud Registration
NASA Technical Reports Server (NTRS)
Chen, Qi (Inventor)
2016-01-01
A system and method of estimation motion of a machine is disclosed. The method may include determining a first point cloud and a second point cloud corresponding to an environment in a vicinity of the machine. The method may further include generating a first extended gaussian image (EGI) for the first point cloud and a second EGI for the second point cloud. The method may further include determining a first EGI segment based on the first EGI and a second EGI segment based on the second EGI. The method may further include determining a first two dimensional distribution for points in the first EGI segment and a second two dimensional distribution for points in the second EGI segment. The method may further include estimating motion of the machine based on the first and second two dimensional distributions.
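An extended Gaussian image is essentially a histogram of surface-normal directions over the unit sphere; a coarse sketch of the binning step (the two-angle parameterization and bin counts are illustrative choices, not the patent's exact scheme):

```python
import math

def egi_histogram(normals, n_theta=4, n_phi=8):
    """Bin unit normals by inclination (theta in [0, pi], measured from +z)
    and azimuth (phi in [-pi, pi]) into an n_theta x n_phi histogram."""
    hist = [[0] * n_phi for _ in range(n_theta)]
    for nx, ny, nz in normals:
        theta = math.acos(max(-1.0, min(1.0, nz)))
        phi = math.atan2(ny, nx)
        ti = min(int(theta / math.pi * n_theta), n_theta - 1)
        pj = min(int((phi + math.pi) / (2 * math.pi) * n_phi), n_phi - 1)
        hist[ti][pj] += 1
    return hist
```

Because the EGI depends only on normal directions, comparing segments of two EGIs gives a translation-independent handle on the rotation between two point clouds, which is the role it plays in the motion-estimation pipeline described above.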
NASA Technical Reports Server (NTRS)
Bebis, George (Inventor); Amayeh, Gholamreza (Inventor)
2015-01-01
Hand-based biometric analysis systems and techniques are described which provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, and translation of an input image. Additionally, the analysis reuses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.
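Zernike moments are built from the radial polynomials R_n^m; a direct implementation of the standard formula (not the optimised term-reuse scheme the patent describes) is:

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial polynomial R_n^m(rho) of the Zernike basis; zero unless
    n - |m| is even and non-negative."""
    m = abs(m)
    if (n - m) % 2 or m > n:
        return 0.0
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k)
           * factorial((n + m) // 2 - k)
           * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )
```

The full moment of order (n, m) integrates the image against R_n^m(rho)·exp(-i·m·phi) over the unit disk; the magnitude of that complex moment is rotation-invariant, which is what makes these descriptors attractive for hand segments.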
An adaptive multi-feature segmentation model for infrared image
NASA Astrophysics Data System (ADS)
Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa
2016-04-01
Active contour models (ACMs) have been extensively applied to image segmentation, but conventional region-based active contour models utilize only global or local single-feature information to minimize the energy functional that drives the contour evolution. Considering the limitations of the original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistical features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, we use an adaptive weight coefficient to modify the level set formulation, which is formed by integrating the MFSPF, carrying local statistical features, with a signed pressure function carrying global information. Experimental results demonstrate that the proposed method makes up for the inadequacies of the original methods and achieves desirable results in segmenting infrared images.
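A signed pressure function in this family compares each pixel with the average of the mean intensities inside and outside the contour; here is a sketch of the basic global version only (the paper's MFSPF additionally mixes in local statistical features, which this sketch omits):

```python
def spf_global(intensities, inside_mask):
    """Global signed pressure function: positive for pixels brighter than the
    midpoint of the inside/outside mean intensities, negative otherwise,
    normalised to [-1, 1].  Its sign drives the contour to expand or shrink."""
    ins = [v for v, m in zip(intensities, inside_mask) if m]
    outs = [v for v, m in zip(intensities, inside_mask) if not m]
    c1, c2 = sum(ins) / len(ins), sum(outs) / len(outs)
    mid = (c1 + c2) / 2
    scale = max(abs(v - mid) for v in intensities) or 1.0
    return [(v - mid) / scale for v in intensities]
```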
A systematic review of definitions and classification systems of adjacent segment pathology.
Kraemer, Paul; Fehlings, Michael G; Hashimoto, Robin; Lee, Michael J; Anderson, Paul A; Chapman, Jens R; Raich, Annie; Norvell, Daniel C
2012-10-15
Systematic review. To undertake a systematic review to determine how "adjacent segment degeneration," "adjacent segment disease," or clinical pathological processes that serve as surrogates for adjacent segment pathology are classified and defined in the peer-reviewed literature. Adjacent segment degeneration and adjacent segment disease are terms referring to degenerative changes known to occur after reconstructive spine surgery, most commonly at an immediately adjacent functional spinal unit. These can include disc degeneration, instability, spinal stenosis, facet degeneration, and deformity. The true incidence and clinical impact of degenerative changes at the adjacent segment are unclear because there is a lack of a universally accepted classification system that rigorously addresses clinical and radiological issues. A systematic review of the English-language literature was undertaken, and articles were classified using the Grades of Recommendation Assessment, Development, and Evaluation criteria. Seven classification systems of spinal degeneration, including degeneration at the adjacent segment, were identified. None have been evaluated for reliability or validity specific to patients with degeneration at the adjacent segment. The ways in which terms related to adjacent segment "degeneration" or "disease" are defined in the peer-reviewed literature are highly variable. On the basis of the systematic review presented in this article, no formal classification system for either cervical or thoracolumbar adjacent segment disorders currently exists. No recommendations regarding the use of current classifications of degeneration at any segment can be made based on the available literature.
A new comprehensive definition for adjacent segment pathology (ASP, the now preferred terminology) has been proposed in this Focus Issue, which reflects the diverse pathology observed at functional spinal units adjacent to previous spinal reconstruction and balances detailed stratification with clinical utility. A comprehensive classification system is being developed through expert opinion and will require validation as well as peer review. Strength of Statement: Strong.
Preferences for lamb meat: a choice experiment for Spanish consumers.
Gracia, Azucena; de-Magistris, Tiziana
2013-10-01
This paper analyzes consumers' preferences for different lamb meat attributes using a choice experiment. In particular, preferences for the type of commercial lamb meat ("Ternasco" and "Suckling") and the origin of production (locally produced "Ojinegra from Teruel") were evaluated. Moreover, we endogenously identify consumer segments based on consumers' preferences for the analyzed attributes. Data come from a survey administered in Spain during 2009. A latent class model was used to estimate the effect of the attributes on consumer utility, derive the willingness to pay, and determine consumer segments. Results suggest that consumers' preferences for both attributes are heterogeneous, and two homogeneous consumer segments were detected. The largest segment (79%) did not value either of the analyzed attributes, while the smaller one (21%) valued both of them positively. In particular, consumers in this second segment are willing to pay an extra premium for the "Ternasco" lamb meat, around double the premium they are willing to pay for the locally produced lamb meat "Ojinegra from Teruel". Copyright © 2013 Elsevier Ltd. All rights reserved.
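In a choice-experiment logit of this kind, willingness to pay for an attribute is the ratio of its utility coefficient to the (negative of the) price coefficient; the coefficients below are purely illustrative, not the paper's estimates:

```python
def willingness_to_pay(beta_attribute, beta_price):
    """WTP = -beta_attribute / beta_price.  The price coefficient is negative
    (utility falls as price rises), so a positively valued attribute yields a
    positive WTP, in currency units per unit of the attribute."""
    return -beta_attribute / beta_price

# hypothetical class-specific coefficients
wtp = willingness_to_pay(beta_attribute=0.8, beta_price=-0.4)
```

In a latent class model this ratio is computed per class, which is how segment-specific premiums such as those reported above are obtained.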
Lung lobe segmentation based on statistical atlas and graph cuts
NASA Astrophysics Data System (ADS)
Nimura, Yukitaka; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku
2012-03-01
This paper presents a novel method that can extract lung lobes by utilizing a probability atlas and multilabel graph cuts. Information about pulmonary structures plays a very important role in deciding the treatment strategy and in surgical planning. The human lungs are divided into five anatomical regions, the lung lobes. Precise segmentation and recognition of lung lobes are indispensable tasks in computer-aided diagnosis and computer-aided surgery systems. Many methods for lung lobe segmentation have been proposed. However, these methods target only normal cases and therefore cannot extract the lung lobes in abnormal cases, such as COPD cases. To extract lung lobes in abnormal cases, this paper proposes a lung lobe segmentation method based on a probability atlas of lobe location and multilabel graph cuts. The process consists of three components: normalization based on the patient's physique, probability atlas generation, and segmentation based on graph cuts. We applied this method to six cases of chest CT images, including COPD cases. The Jaccard index was 79.1%.
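Segmentation accuracy is reported here as the Jaccard index; for binary label volumes flattened to sequences, a minimal sketch is:

```python
def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| of two binary masks given as flat
    sequences of 0/1 labels; defined as 1.0 when both masks are empty."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 1.0
```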
Dual-circuit segmented rail phased induction motor
Marder, Barry M.; Cowan, Jr., Maynard
2002-01-01
An improved linear motor utilizes two circuits, rather that one circuit and an opposed plate, to gain efficiency. The powered circuit is a flat conductive coil. The opposed segmented rail circuit is either a plurality of similar conductive coils that are shorted, or a plurality of ladders formed of opposed conductive bars connected by a plurality of spaced conductors. In each embodiment, the conductors are preferably cables formed from a plurality of intertwined insulated wires to carry current evenly.
Image Processing of Porous Silicon Microarray in Refractive Index Change Detection.
Guo, Zhiqing; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola; Li, Chuanxi
2017-06-08
In this paper, a new method is proposed for extracting the dots from the reflected-light image of a porous silicon (PSi) microarray. The method consists of three parts: pretreatment, tilt correction, and spot segmentation. First, based on the characteristics of different components in HSV (Hue, Saturation, Value) space, a special pretreatment is proposed for the reflected-light image to obtain the contour edges of the array cells in the image. Second, through the geometric relationship of the target object between the initial external rectangle and the minimum bounding rectangle (MBR), a new tilt correction algorithm based on the MBR is proposed to adjust the image. Third, based on the specific requirements of reflected-light image segmentation, the array cells in the corrected image are segmented into dots that are as large as possible and equally spaced. Experimental results show that the pretreatment part of this method can effectively avoid the influence of complex backgrounds and complete the binarization of the image. The tilt correction algorithm has a short computation time, which makes it highly suitable for tilt correction of reflected-light images. The segmentation algorithm arranges the dots in a regular pattern and excludes the edges and bright spots. This method can be utilized for fast, accurate, and automatic dot extraction from PSi microarray reflected-light images.
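The tilt-correction step estimates the array's rotation angle. The paper derives it from the minimum bounding rectangle; as an illustration of the same idea, the orientation can also be estimated from the second-order moments of the foreground pixels (the synthetic rectangle below is made-up data):

```python
import math

def orientation_from_moments(points):
    """Orientation (radians) of a point set's principal axis, from the
    principal eigenvector of the 2x2 covariance matrix of coordinates."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # closed-form principal-axis angle of a symmetric 2x2 matrix
    return 0.5 * math.atan2(2 * cxy, cxx - cyy)

# synthetic tilted array: a 40x10 grid of pixels rotated by 0.2 rad
theta = 0.2
c, s = math.cos(theta), math.sin(theta)
rect = [(c * x - s * y, s * x + c * y) for x in range(40) for y in range(10)]
estimated = orientation_from_moments(rect)
```

Rotating the image back by the estimated angle yields the corrected image on which the equally spaced dots are then segmented.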
Whole vertebral bone segmentation method with a statistical intensity-shape model based approach
NASA Astrophysics Data System (ADS)
Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer
2011-03-01
An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focused on constructing and utilizing four different statistical intensity-shape combined models for the cervical, upper thoracic, lower thoracic, and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as pre-processing to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After principal component analysis (PCA) of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image by maximum a posteriori estimation, combined with the geodesic active contour method. In experiments using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbosacral area). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777, and 0.939 mm for the cervical, upper thoracic, lower thoracic, and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed fair performance for the cervical, thoracic, and lumbar vertebrae.
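The parametric model is the mean shape-intensity vector plus a linear combination of principal components; a bare-bones numpy sketch with toy training vectors (real models would use thousands of dimensions):

```python
import numpy as np

def build_pca_model(training, k):
    """Return the mean vector and the top-k principal modes (unit vectors)
    from the rows of `training`, via SVD of the centered data."""
    mean = training.mean(axis=0)
    _, _, vt = np.linalg.svd(training - mean, full_matrices=False)
    return mean, vt[:k]

def synthesize(mean, modes, coeffs):
    """Model instance = mean + sum_i coeffs[i] * modes[i]."""
    return mean + np.asarray(coeffs) @ modes

# toy training set: four 2-D vectors whose main variation is along x
train = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 1.0], [2.0, -1.0]])
mean, modes = build_pca_model(train, k=1)
shape = synthesize(mean, modes, [3.0])
```

Fitting then amounts to searching for the coefficients (plus pose) that best explain the target image, here by maximum a posteriori estimation.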
effects, and the characterization of performance and durability effects induced by coating inhomogeneities. Porter, G. Bender, "Utilizing a segmented fuel cell to study the effects of electrode coating
Second quantization in bit-string physics
NASA Technical Reports Server (NTRS)
Noyes, H. Pierre
1993-01-01
Using a new fundamental theory based on bit-strings, a finite and discrete version of the solutions of the free one-particle Dirac equation is derived, as segmented trajectories with steps of length h/mc along the forward and backward light cones executed at velocity ±c. Interpreting the statistical fluctuations which cause the bends in these segmented trajectories as emission and absorption of radiation, these solutions are analogous to a fermion propagator in a second quantized theory. This allows us to interpret the mass parameter in the step length as the physical mass of the free particle. The radiation in interaction with it has the usual harmonic oscillator structure of a second quantized theory. How these free particle masses can be generated gravitationally using the combinatorial hierarchy sequence (3, 10, 137, 2^127 + 136), and some of the predictive consequences, are sketched.
Brain tumor image segmentation using kernel dictionary learning.
Jeon Lee; Seung-Jun Kim; Rong Chen; Herskovits, Edward H
2015-08-01
Automated brain tumor image segmentation with high accuracy and reproducibility holds great potential to enhance current clinical practice. Dictionary learning (DL) techniques have recently been applied successfully to various image processing tasks. In this work, kernel extensions of the DL approach are adopted. Both reconstructive and discriminative versions of the kernel DL technique are considered, which can efficiently incorporate multi-modal nonlinear feature mappings based on the kernel trick. Our novel discriminative kernel DL formulation allows joint learning of a task-driven kernel-based dictionary and a linear classifier using a K-SVD-type algorithm. The proposed approaches were tested using real brain magnetic resonance (MR) images of patients with high-grade glioma. The obtained preliminary performances are competitive with the state of the art. The discriminative kernel DL approach is seen to reduce computational burden without much sacrifice in performance.
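The kernel trick enters such methods through the Gram matrix of pairwise similarities, which replaces explicit (possibly infinite-dimensional) feature maps; a sketch with a Gaussian (RBF) kernel, illustrative of the mechanism rather than the paper's full kernel K-SVD algorithm:

```python
import math

def rbf_gram(xs, gamma=1.0):
    """Gram matrix K[i][j] = exp(-gamma * ||x_i - x_j||^2).  Kernel methods,
    including kernel dictionary learning, operate on K instead of on explicit
    nonlinear feature vectors."""
    def k(a, b):
        return math.exp(-gamma * sum((u - v) ** 2 for u, v in zip(a, b)))
    return [[k(a, b) for b in xs] for a in xs]
```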
Midmarket Solar Policies in the United States | Solar Research | NREL
Midmarket solar refers to the non-residential and non-utility segments. To help prospective solar customers understand the market, annual U.S. PV installations are charted for three sectors: residential, non-residential, and utility. For every year on the chart, the non-residential curve shows the lowest annual rate of PV installations.
NRC/AMRMC Resident Research Associateship Program
2015-03-01
antimicrobials (and antiseptics), as well as to evaluate the effectiveness of various biofilm dispersal agents utilizing a number of bacterial species as well… combat-related wounds. (3) Demonstrated the utility of combinations of biofilm dispersal agents and antimicrobials as an alternate therapy for targeting… alone or in combination with antimicrobials) to reduce infection in contaminated femoral segmental defects. (5) Characterized host responses of
ERIC Educational Resources Information Center
Merikangas, Kathleen Ries; He, Jian-ping; Burstein, Marcy; Swendsen, Joel; Avenevoli, Shelli; Case, Brady; Georgiades, Katholiki; Heaton, Leanne; Swanson, Sonja; Olfson, Mark
2011-01-01
Objective: Mental health policy for youth has been constrained by a paucity of nationally representative data concerning patterns and correlates of mental health service utilization in this segment of the population. The objectives of this investigation were to examine the rates and sociodemographic correlates of lifetime mental health service use…
NASA Technical Reports Server (NTRS)
Osgood, Cathy; Williams, Kevin; Gentry, Philip; Brownfield, Dana; Hallstrom, John; Stuit, Tim
2012-01-01
Orbit Software Suite is used to support a variety of NASA/DM mission planning and analysis activities on the IPS (Integrated Planning System) platform. The suite of Orbit software tools (Orbit Design and Orbit Dynamics) resides on IPS/Linux workstations and is used to perform mission design and analysis tasks corresponding to trajectory/launch window, rendezvous, and proximity operations flight segments. A list of tools in Orbit Software Suite represents tool versions established during or after the Equipment Rehost-3 Project.
Rocket Motor Microphone Investigation
NASA Technical Reports Server (NTRS)
Pilkey, Debbie; Herrera, Eric; Gee, Kent L.; Giraud, Jerom H.; Young, Devin J.
2010-01-01
At ATK's facility in Utah, large full-scale solid rocket motors are tested. The largest is a five-segment version of the reusable solid rocket motor, intended for use on the Ares I launch vehicle. As a continuous improvement project, ATK and BYU investigated the use of microphones during these static tests, the vibration and temperature environments to which the instruments are subjected, and in particular the use of vent tubes and the effects these vents have at low frequencies.
Astronaut Heidemarie M. Stefanyshyn-Piper During STS-115 Training
NASA Technical Reports Server (NTRS)
2002-01-01
Attired in a training version of the Extravehicular Mobility Unit (EMU) space suit, STS-115 astronaut and mission specialist, Heidemarie M. Stefanyshyn-Piper, is about to begin a training session in the Neutral Buoyancy Laboratory (NBL) near Johnson Space Center in preparation for the STS-115 mission. Launched on September 9, 2006, the STS-115 mission continued assembly of the International Space Station (ISS) with the installation of the truss segments P3 and P4.
Astronaut Heidemarie M. Stefanyshyn-Piper During STS-115 Training
NASA Technical Reports Server (NTRS)
2002-01-01
Attired in a training version of the Extravehicular Mobility Unit (EMU) space suit, STS-115 astronaut and mission specialist, Heidemarie M. Stefanyshyn-Piper, is submerged into the waters of the Neutral Buoyancy Laboratory (NBL) near Johnson Space Center for training in preparation for the STS-115 mission. Launched on September 9, 2006, the STS-115 mission continued assembly of the International Space Station (ISS) with the installation of the truss segments P3 and P4.
Navigation/Prop Software Suite
NASA Technical Reports Server (NTRS)
Bruchmiller, Tomas; Tran, Sanh; Lee, Mathew; Bucker, Scott; Bupane, Catherine; Bennett, Charles; Cantu, Sergio; Kwong, Ping; Propst, Carolyn
2012-01-01
Navigation (Nav)/Prop software is used to support shuttle mission analysis, production, and some operations tasks. The Nav/Prop suite containing configuration items (CIs) resides on IPS/Linux workstations. It features lifecycle documents and data files used for shuttle navigation and propellant analysis for all flight segments. This suite also includes trajectory server, archive server, and RAT software residing on MCC/Linux workstations. Navigation/Prop represents tool versions established during or after IPS Equipment Rehost-3 or after the MCC Rehost.
Bar-Yosef, Omer; Rotman, Yaron; Nelken, Israel
2002-10-01
The responses of neurons to natural sounds and simplified natural sounds were recorded in the primary auditory cortex (AI) of halothane-anesthetized cats. Bird chirps were used as the base natural stimuli. They were first presented within the original acoustic context (at least 250 msec of sound before and after each chirp). The first simplification step consisted of extracting a short segment containing just the chirp from the longer segment. For the second step, the chirp was cleaned of its accompanying background noise. Finally, each chirp was replaced by an artificial version that had approximately the same frequency trajectory but constant amplitude. Neurons had a wide range of different response patterns to these stimuli, and many neurons had late response components in addition to, or instead of, their onset responses. In general, every simplification step had a substantial influence on the responses. Neither the extracted chirp nor the clean chirp evoked a response similar to that of the chirp presented within its acoustic context. The extracted chirp evoked different responses from its clean version. The artificial chirps evoked stronger responses with a shorter latency than the corresponding clean chirps because of envelope differences. These results illustrate the sensitivity of neurons in AI to small perturbations of their acoustic input. In particular, they pose a challenge to models based on linear summation of energy within a spectrotemporal receptive field.
Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations
Reitschmidt, Doris; Noll, Anton; Balazs, Peter; Penn, Dustin J.
2017-01-01
House mice (Mus musculus) emit complex ultrasonic vocalizations (USVs) during social and sexual interactions, which have features similar to bird song (i.e., they are composed of several different types of syllables, uttered in succession over time to form a pattern of sequences). Manually processing complex vocalization data is time-consuming and potentially subjective, and therefore, we developed an algorithm that automatically detects mouse ultrasonic vocalizations (Automatic Mouse Ultrasound Detector or A-MUD). A-MUD is a script that runs on STx acoustic software (S_TOOLS-STx version 4.2.2), which is free for scientific use. This algorithm improved the efficiency of processing USV files, as it was 4–12 times faster than manual segmentation, depending upon the size of the file. We evaluated A-MUD error rates using manually segmented sound files as a ‘gold standard’ reference, and compared them to a commercially available program. A-MUD had lower error rates than the commercial software, as it detected significantly more correct positives, and fewer false positives and false negatives. The errors generated by A-MUD were mainly false negatives, rather than false positives. This study is the first to systematically compare error rates for automatic ultrasonic vocalization detection methods, and A-MUD and subsequent versions will be made available for the scientific community. PMID:28727808
Kan, S L; Yang, B; Ning, G Z; Gao, S J; Sun, J C; Feng, S Q
2016-12-01
Objective: To compare the benefits and harms of cervical disc arthroplasty (CDA) with anterior cervical discectomy and fusion (ACDF) for symptomatic cervical disc disease at mid- to long-term follow-up. Methods: Electronic searches were made in PubMed, EMBASE, and the Cochrane Library for randomized controlled trials with at least 48 months of follow-up. Outcomes were reported as relative risk or standardized mean difference. Meta-analysis was carried out using RevMan version 5.3 and Stata version 12.0. Results: Seven trials were included, involving 2,302 participants. The results of this meta-analysis indicated that CDA brought about fewer secondary surgical procedures, lower Neck Disability Index (NDI) scores, lower neck and arm pain scores, greater SF-36 Physical Component Summary (PCS) and Mental Component Summary (MCS) scores, greater range of motion (ROM) at the operative level, and less superior adjacent-segment degeneration (P < 0.05) than ACDF. CDA was not statistically different from ACDF in inferior adjacent-segment degeneration, neurological success, and adverse events (P > 0.05). Conclusions: CDA can significantly reduce the rate of secondary surgical procedures compared with ACDF. Meanwhile, CDA is superior or equivalent to ACDF in the other outcomes. As some of the included studies were not double-blinded and some potential biases exist, more high-quality randomized controlled trials are required to reach more reliable conclusions.
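For dichotomous outcomes such as secondary surgical procedures, the relative risk pooled by tools like RevMan is computed per trial from 2x2 event counts. A minimal sketch of that calculation (the counts in the test are hypothetical, not from the included trials):

```python
import math

def relative_risk(events_a, total_a, events_b, total_b):
    """Relative risk of arm A vs arm B with a 95% CI
    (log-normal approximation for the standard error)."""
    risk_a = events_a / total_a
    risk_b = events_b / total_b
    rr = risk_a / risk_b
    # Standard error of log(RR)
    se = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi
```

If the 95% CI excludes 1.0, the difference between arms is significant at P < 0.05, which is how statements like "fewer secondary surgical procedures (P < 0.05)" arise.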
Phased Array Mirror Extendible Large Aperture (PAMELA) Optics Adjustment
NASA Technical Reports Server (NTRS)
1995-01-01
Scientists at Marshall's Adaptive Optics Lab demonstrate the Wave Front Sensor alignment using the Phased Array Mirror Extendible Large Aperture (PAMELA) optics adjustment. The primary objective of the PAMELA project is to develop methods for aligning and controlling adaptive-optics segmented-mirror systems. These systems can be used to acquire or project light energy. The Next Generation Space Telescope is an example of an energy acquisition system that will employ segmented mirrors. Light projection systems can also be used for power beaming and orbital debris removal. All segmented optical systems must be adjusted to provide maximum performance. PAMELA is an ongoing project that NASA is utilizing to investigate various methods for maximizing system performance.
Off-lexicon online Arabic handwriting recognition using neural network
NASA Astrophysics Data System (ADS)
Yahia, Hamdi; Chaabouni, Aymen; Boubaker, Houcine; Alimi, Adel M.
2017-03-01
This paper highlights a new method for online Arabic handwriting recognition based on grapheme segmentation. The main contribution of our work is to explore the utility of the Beta-elliptic model in segmentation and feature extraction for online handwriting recognition. Indeed, our method consists of decomposing the input signal into continuous parts called graphemes based on the Beta-elliptic model, and classifying them according to their position in the pseudo-word. The segmented graphemes are then described by a combination of geometric features and trajectory shape modeling. The efficiency of the considered features has been evaluated using a feed-forward neural network classifier. Experimental results using the benchmark ADAB database show the performance of the proposed method.
Wen, Jessica; Desai, Naman S; Jeffery, Dean; Aygun, Nafi; Blitz, Ari
2018-02-01
High-resolution isotropic 3-dimensional (3D) MR imaging with and without contrast is now routinely used for imaging evaluation of cranial nerve anatomy and pathologic conditions. The anatomic details of the extraforaminal segments are well visualized with these techniques. A wide range of pathologic entities may cause enhancement or displacement of the nerve, which is now visible to an extent not available on standard 2D imaging. This article highlights the anatomy of the extraforaminal segments of the cranial nerves and uses select cases to illustrate the utility and power of these sequences, with a focus on constructive interference in steady state. Copyright © 2017 Elsevier Inc. All rights reserved.
Segmentation of radiographic images under topological constraints: application to the femur.
Gamage, Pavan; Xie, Sheng Quan; Delmas, Patrice; Xu, Wei Liang
2010-09-01
A framework for radiographic image segmentation under topological control based on two-dimensional (2D) image analysis was developed. The system is intended for use in common radiological tasks including fracture treatment analysis, osteoarthritis diagnostics and osteotomy management planning. The segmentation framework utilizes a generic three-dimensional (3D) model of the bone of interest to define the anatomical topology. Non-rigid registration is performed between the projected contours of the generic 3D model and extracted edges of the X-ray image to achieve the segmentation. For fractured bones, the segmentation requires an additional step where a region-based active contours curve evolution is performed with a level set Mumford-Shah method to obtain the fracture surface edge. The application of the segmentation framework to analysis of human femur radiographs was evaluated. The proposed system has two major innovations. First, definition of the topological constraints does not require a statistical learning process, so the method is generally applicable to a variety of bony anatomy segmentation problems. Second, the methodology is able to handle both intact and fractured bone segmentation. Testing on clinical X-ray images yielded an average root mean squared distance (between the automatically segmented femur contour and the manual segmented ground truth) of 1.10 mm with a standard deviation of 0.13 mm. The proposed point correspondence estimation algorithm was benchmarked against three state-of-the-art point matching algorithms, demonstrating successful non-rigid registration for the cases of interest. A topologically constrained automatic bone contour segmentation framework was developed and tested, providing robustness to noise, outliers, deformations and occlusions.
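The headline accuracy figure above is a root mean squared closest-point distance between the automatically segmented contour and the manual ground truth. A minimal sketch of that metric for 2D point lists (a one-directional variant; a symmetric version would average distances computed both ways):

```python
import math

def rms_closest_point_distance(contour, ground_truth):
    """RMS over contour points of the distance to the nearest
    ground-truth point (brute-force nearest-neighbor search)."""
    sq_dists = []
    for (x, y) in contour:
        d2 = min((x - gx) ** 2 + (y - gy) ** 2 for (gx, gy) in ground_truth)
        sq_dists.append(d2)
    return math.sqrt(sum(sq_dists) / len(sq_dists))
```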
2nd-Order CESE Results For C1.4: Vortex Transport by Uniform Flow
NASA Technical Reports Server (NTRS)
Friedlander, David J.
2015-01-01
The Conservation Element and Solution Element (CESE) method was used as implemented in the NASA research code ez4d. The CESE method is a time accurate formulation with flux-conservation in both space and time. The method treats the discretized derivatives of space and time identically and while the 2nd-order accurate version was used, high-order versions exist, the 2nd-order accurate version was used. In regards to the ez4d code, it is an unstructured Navier-Stokes solver coded in C++ with serial and parallel versions available. As part of its architecture, ez4d has the capability to utilize multi-thread and Messaging Passage Interface (MPI) for parallel runs.
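The flux-conservation property can be illustrated with a much simpler scheme than CESE: a 1D finite-volume upwind step, in which each cell's loss is exactly its neighbor's gain, so the discrete total is conserved to machine precision. This is only a conceptual sketch of flux conservation in space, not the CESE space-time formulation or the ez4d code.

```python
def advect_step(u, c, dt, dx):
    """One conservative upwind step for u_t + c*u_x = 0 (c > 0, periodic
    boundary). Each cell loses the flux it passes to its right neighbor,
    so sum(u) is conserved exactly."""
    n = len(u)
    flux = [c * u[i] for i in range(n)]  # upwind flux leaving each cell
    return [u[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]
```

Because the update is written purely in terms of interface fluxes, the telescoping sum of (flux[i] - flux[i-1]) vanishes over a periodic domain, which is the discrete analogue of the conservation CESE enforces in both space and time.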
The seasonal-cycle climate model
NASA Technical Reports Server (NTRS)
Marx, L.; Randall, D. A.
1981-01-01
The seasonal cycle run, which will become the control run for comparison with runs utilizing codes and parameterizations developed by outside investigators, is discussed. The climate model currently exists in two parallel versions: one running on the Amdahl and the other on the CYBER 203. The two versions are as nearly identical as machine capability and the requirement for high-speed performance allow. Developmental changes are made on the Amdahl/CMS version for ease of testing and rapidity of turnaround. The changes are subsequently incorporated into the CYBER 203 version, using vectorization techniques where speed improvements can be realized. The 400-day seasonal cycle run serves as a control run for both medium- and long-range climate forecasts as well as sensitivity studies.
Development of an hp-version finite element method for computational optimal control
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Warner, Michael S.
1993-01-01
The purpose of this research effort is to develop a means to use, and to ultimately implement, hp-version finite elements in the numerical solution of optimal control problems. The hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element.
PC Utilities: Small Programs with a Big Impact
ERIC Educational Resources Information Center
Baule, Steven
2004-01-01
Three types of utility programs are available on the Internet: commercial programs, which are like software packages purchased through a vendor or over the Internet; shareware programs, which are developed by individuals and distributed via the Internet for a small fee to obtain the complete version of the product; and freeware programs, which are distributed via the Internet free of cost.…
RUPS: Research Utilizing Problem Solving. Administrators Version. Leader's Manual.
ERIC Educational Resources Information Center
Jung, Charles; And Others
This manual is to be used by leaders of RUPS (Research Utilizing Problem Solving) workshops for school or district administrators. The workshop's goal is for administrators to develop problem solving skills by using the RUPS simulation situations in a teamwork setting. Although workshop leaders should be familiar with the RUPS materials and…
RUPS: Research Utilizing Problem Solving. Classroom Version. Leader's Manual.
ERIC Educational Resources Information Center
Jung, Charles; And Others
This training manual is for teachers participating in the Research Utilizing Problem Solving (RUPS) workshops. The workshops last for four and one-half days and are designed to improve the school setting and to increase teamwork skills. The teachers participate in simulation exercises in which they help a fictitious teacher or principal solve a…
RUPS: Research Utilizing Problem Solving. Administrators Version. Participant Materials.
ERIC Educational Resources Information Center
Jung, Charles; And Others
These materials are the handouts for school administrators participating in RUPS (Research Utilizing Problem Solving) workshops. The purposes of the workshops are to develop skills for improving schools and to increase teamwork skills. The handouts correspond to the 16 subsets that make up the five-day workshop: (1) orientation; (2) identifying…
Superconducting Rebalance Accelerometer
NASA Technical Reports Server (NTRS)
Torti, R. P.; Gerver, M.; Leary, K. J.; Jagannathan, S.; Dozer, D. M.
1996-01-01
A multi-axis accelerometer which utilizes a magnetically suspended, high-Tc proof mass is under development. The design and performance of a single-axis device, which is stabilized actively in the axial direction but utilizes ring magnets for passive radial stabilization, are discussed. The design of a full six-degree-of-freedom version of the device is also described.
Geremew, Kumlachew; Gedefaw, Molla; Dagnew, Zewdu; Jara, Dube
2014-01-01
Background. Traditional biomass has been the major source of cooking energy for a major segment of the Ethiopian population for thousands of years. Cognizant of this energy poverty, the Government of Ethiopia has been spending huge sums of money to increase the number of hydroelectric power generating stations. Objective. To assess current levels and correlates of traditional cooking energy source utilization. Methods. A community-based cross-sectional study was conducted employing both quantitative and qualitative approaches, on 423 systematically selected households for the quantitative part and 20 purposively selected people for the qualitative part. SPSS version 16 for Windows was used to analyze the quantitative data. Logistic regression was fitted to assess possible associations, and their strength was measured using odds ratios at 95% CI. Qualitative data were analyzed thematically. Result. The study indicated that 95% of households still use traditional biomass for cooking. Those who were less knowledgeable about the negative health and environmental effects of traditional cooking energy sources were seven and six times more likely to utilize them compared with those who were knowledgeable (AOR (95% CI) = 7.56 (1.635, 34.926), AOR (95% CI) = 6.68 (1.80, 24.385), resp.). The most outstanding finding of this study was that people use traditional energy for cooking mainly because of lack of knowledge and their beliefs about food prepared using traditional energy: "…people still believe that food cooked with charcoal tastes more delicious than food cooked by other means." Conclusion. The majority of households use traditional biomass for cooking due to lack of knowledge and beliefs. Therefore, mechanisms should be designed to promote electric energy and to teach the public about the health effects of traditional cooking energy sources. PMID:24895591
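The adjusted odds ratios reported above come from logistic regression, but the underlying quantity is the odds ratio of a 2x2 exposure/outcome table. A minimal sketch using Woolf's log method for the confidence interval (the counts in the test are illustrative, not the study's data):

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table
    [[a, b],   # exposed:   outcome yes / outcome no
     [c, d]]   # unexposed: outcome yes / outcome no
    with a 95% CI via Woolf's log method."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi
```

A CI that excludes 1.0, like the (1.635, 34.926) interval quoted in the abstract, indicates a statistically significant association.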
NASA Tech Briefs, September 2006
NASA Technical Reports Server (NTRS)
2006-01-01
Topics covered include: Improving Thermomechanical Properties of SiC/SiC Composites; Aerogel/Particle Composites for Thermoelectric Devices; Patches for Repairing Ceramics and Ceramic-Matrix Composites; Lower-Conductivity Ceramic Materials for Thermal-Barrier Coatings; An Alternative for Emergency Preemption of Traffic Lights; Vehicle Transponder for Preemption of Traffic Lights; Automated Announcements of Approaching Emergency Vehicles; Intersection Monitor for Traffic-Light-Preemption System; Full-Duplex Digital Communication on a Single Laser Beam; Stabilizing Microwave Frequency of a Photonic Oscillator; Microwave Oscillators Based on Nonlinear WGM Resonators; Pointing Reference Scheme for Free-Space Optical Communications Systems; High-Level Performance Modeling of SAR Systems; Spectral Analysis Tool 6.2 for Windows; Multi-Platform Avionics Simulator; Silicon-Based Optical Modulator with Ferroelectric Layer; Multiplexing Transducers Based on Tunnel-Diode Oscillators; Scheduling with Automated Resolution of Conflicts; Symbolic Constraint Maintenance Grid; Discerning Trends in Performance Across Multiple Events; Magnetic Field Solver; Computing for Aiming a Spaceborne Bistatic-Radar Transmitter; 4-Vinyl-1,3-Dioxolane-2-One as an Additive for Li-Ion Cells; Probabilistic Prediction of Lifetimes of Ceramic Parts; STRANAL-PMC Version 2.0; Micromechanics and Piezo Enhancements of HyperSizer; Single-Phase Rare-Earth Oxide/Aluminum Oxide Glasses; Tilt/Tip/Piston Manipulator with Base-Mounted Actuators; Measurement of Model Noise in a Hard-Wall Wind Tunnel; Loci-STREAM Version 0.9; The Synergistic Engineering Environment; Reconfigurable Software for Controlling Formation Flying; More About the Tetrahedral Unstructured Software System; Computing Flows Using Chimera and Unstructured Grids; Avoiding Obstructions in Aiming a High-Gain Antenna; Analyzing Aeroelastic Stability of a Tilt-Rotor Aircraft; Tracking Positions and Attitudes of Mars Rovers; Stochastic Evolutionary
Algorithms for Planning Robot Paths; Compressible Flow Toolbox; Rapid Aeroelastic Analysis of Blade Flutter in Turbomachines; General Flow-Solver Code for Turbomachinery Applications; Code for Multiblock CFD and Heat-Transfer Computations; Rotating-Pump Design Code; Covering a Crucible with Metal Containing Channels; Repairing Fractured Bones by Use of Bioabsorbable Composites; Kalman Filter for Calibrating a Telescope Focal Plane; Electronic Absolute Cartesian Autocollimator; Fiber-Optic Gratings for Lidar Measurements of Water Vapor; Simulating Responses of Gravitational-Wave Instrumentation; SOFTC: A Software Correlator for VLBI; Progress in Computational Simulation of Earthquakes; Database of Properties of Meteors; Computing Spacecraft Solar-Cell Damage by Charged Particles; Thermal Model of a Current-Carrying Wire in a Vacuum; Program for Analyzing Flows in a Complex Network; Program Predicts Performance of Optical Parametric Oscillators; Processing TES Level-1B Data; Automated Camera Calibration; Tracking the Martian CO2 Polar Ice Caps in Infrared Images; Processing TES Level-2 Data; SmaggIce Version 1.8; Solving the Swath Segment Selection Problem; The Spatial Standard Observer; Less-Complex Method of Classifying MPSK; Improvement in Recursive Hierarchical Segmentation of Data; Using Heaps in Recursive Hierarchical Segmentation of Data; Tool for Statistical Analysis and Display of Landing Sites; Automated Assignment of Proposals to Reviewers; Array-Pattern-Match Compiler for Opportunistic Data Analysis; Pre-Processor for Compression of Multispectral Image Data; Compressing Image Data While Limiting the Effects of Data Losses; Flight Operations Analysis Tool; Improvement in Visual Target Tracking for a Mobile Robot; Software for Simulating Air Traffic; Automated Vectorization of Decision-Based Algorithms; Grayscale Optical Correlator Workbench; "One-Stop Shopping" for Ocean Remote-Sensing and Model Data; State Analysis Database Tool; Generating CAHV and 
CAHVOR Images with Shadows in ROAMS; Improving UDP/IP Transmission Without Increasing Congestion; FORTRAN Versions of Reformulated HFGMC Codes; Program for Editing Spacecraft Command Sequences; Flight-Tested Prototype of BEAM Software; Mission Scenario Development Workbench; Marsviewer; Tool for Analysis and Reduction of Scientific Data; ASPEN Version 3.0; Secure Display of Space-Exploration Images; Digital Front End for Wide-Band VLBI Science Receiver; Multifunctional Tanks for Spacecraft; Lightweight, Segmented, Mostly Silicon Telescope Mirror; Assistant for Analyzing Tropical-Rain-Mapping Radar Data; and Anion-Intercalating Cathodes for High-Energy-Density Cells.
Schatz, Philip; Moser, Rosemarie Scolaro; Solomon, Gary S.; Ott, Summer D.; Karpf, Robin
2012-01-01
Context: Limited data are available regarding the prevalence and nature of invalid computerized baseline neurocognitive test data. Objective: To identify the prevalence of invalid baselines on the desktop and online versions of ImPACT and to document the utility of correcting for left-right (L-R) confusion on the desktop version of ImPACT. Design: Cross-sectional study of independent samples of high school (HS) and collegiate athletes who completed the desktop or online versions of ImPACT. Patients or Other Participants: A total of 3769 HS (desktop = 1617, online = 2152) and 2130 collegiate (desktop = 742, online = 1388) athletes completed preseason baseline assessments. Main Outcome Measure(s): Prevalence of 5 ImPACT validity indicators, with correction for L-R confusion (reversing left and right mouse-click responses) on the desktop version, by test version and group. Chi-square analyses were conducted for sex and attentional or learning disorders. Results: At least 1 invalid indicator was present on 11.9% (desktop) versus 6.3% (online) of the HS baselines and 10.2% (desktop) versus 4.1% (online) of collegiate baselines; correcting for L-R confusion (desktop) decreased this overall prevalence to 8.4% (HS) and 7.5% (collegiate). Online Impulse Control scores alone yielded 0.4% (HS) and 0.9% (collegiate) invalid baselines, compared with 9.0% (HS) and 5.4% (collegiate) on the desktop version; correcting for L-R confusion (desktop) decreased the prevalence of invalid Impulse Control scores to 5.4% (HS) and 2.6% (collegiate). Male athletes and HS athletes with attention deficit or learning disorders who took the online version were more likely to have at least 1 invalid indicator. The utility of additional invalidity indicators is reported. Conclusions: The online ImPACT version appeared to yield fewer invalid baseline results than did the desktop version.
Identification of L-R confusion reduces the prevalence of invalid baselines (desktop only) and the potency of Impulse Control as a validity indicator. We advise test administrators to be vigilant in identifying invalid baseline results as part of routine concussion management and prevention programs. PMID:22892410
Lowekamp, Bradley C; Chen, David T; Ibáñez, Luis; Blezek, Daniel
2013-01-01
SimpleITK is a new interface to the Insight Segmentation and Registration Toolkit (ITK) designed to facilitate rapid prototyping, education and scientific activities via high-level programming languages. ITK is a templated C++ library of image processing algorithms and frameworks for biomedical and other applications, and it was designed to be generic, flexible and extensible. Initially, ITK provided a direct wrapping interface to languages such as Python and Tcl through the WrapITK system. Unlike WrapITK, which exposed ITK's complex templated interface, SimpleITK was designed to provide an easy-to-use and simplified interface to ITK's algorithms. It includes procedural methods, hides ITK's demand-driven pipeline, and provides a template-less layer. SimpleITK also provides practical conveniences such as binary distribution packages and overloaded operators. Our user-friendly design goals dictated a departure from the direct interface wrapping approach of WrapITK, toward a new facade class structure that only exposes the required functionality, hiding ITK's extensive template use. Internally, SimpleITK utilizes a manual description of each filter with code generation and advanced C++ meta-programming to provide the higher-level interface, bringing the capabilities of ITK to a wider audience. SimpleITK is licensed as an open-source software library under the Apache License, Version 2.0, and more information about downloading it can be found at http://www.simpleitk.org.
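The facade idea described above, a procedural call that hides a demand-driven pipeline, can be illustrated in plain Python. This is only a conceptual sketch of the design pattern; the class and function names are hypothetical and do not reflect SimpleITK's actual internals.

```python
class _LazyFilter:
    """Demand-driven pipeline stage: does no work until Update() is called,
    and caches its output afterward."""
    def __init__(self, func, source):
        self._func, self._source, self._output = func, source, None

    def Update(self):
        if self._output is None:
            data = (self._source.Update()
                    if hasattr(self._source, "Update") else self._source)
            self._output = self._func(data)
        return self._output


def Threshold(image, level):
    """Procedural facade: builds the pipeline, runs it, and returns the
    result, so callers never touch the pipeline objects themselves
    (in the spirit of SimpleITK's procedural methods)."""
    stage = _LazyFilter(
        lambda img: [[1 if v >= level else 0 for v in row] for row in img],
        image)
    return stage.Update()
```

The caller writes a single function call per operation; the pipeline and its lazy-evaluation machinery stay hidden, which is the usability point the abstract makes about hiding ITK's demand-driven pipeline.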
Glisson, Courtenay L; Altamar, Hernan O; Herrell, S Duke; Clark, Peter; Galloway, Robert L
2011-11-01
Image segmentation is integral to implementing intraoperative guidance for kidney tumor resection. Results seen in computed tomography (CT) data are affected by target organ physiology as well as by the segmentation algorithm used. This work studies variables involved in using level set methods found in the Insight Toolkit to segment kidneys from CT scans and applies the results to an image guidance setting. A composite algorithm drawing on the strengths of multiple level set approaches was built using the Insight Toolkit. This algorithm requires image contrast state and seed points to be identified as input, and functions independently thereafter, selecting and altering method and variable choice as needed. Semi-automatic results were compared to expert hand segmentation results directly and by the use of the resultant surfaces for registration of intraoperative data. Direct comparison using the Dice metric showed average agreement of 0.93 between semi-automatic and hand segmentation results. Use of the segmented surfaces in closest point registration of intraoperative laser range scan data yielded average closest point distances of approximately 1 mm. Application of both inverse registration transforms from the previous step to all hand segmented image space points revealed that the distance variability introduced by registering to the semi-automatically segmented surface versus the hand segmented surface was typically less than 3 mm both near the tumor target and at distal points, including subsurface points. Use of the algorithm shortened user interaction time and provided results which were comparable to the gold standard of hand segmentation. Further, the use of the algorithm's resultant surfaces in image registration provided comparable transformations to surfaces produced by hand segmentation. These data support the applicability and utility of such an algorithm as part of an image guidance workflow.
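The Dice metric used above to compare semi-automatic and hand segmentations measures overlap between two label masks. A minimal sketch for binary masks stored as flat 0/1 lists:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient: 2*|A ∩ B| / (|A| + |B|).
    1.0 means perfect overlap, 0.0 means no overlap."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size_a = sum(1 for a in mask_a if a)
    size_b = sum(1 for b in mask_b if b)
    return 2.0 * inter / (size_a + size_b)
```

An average Dice of 0.93, as reported, therefore indicates that the semi-automatic and manual masks shared the large majority of their voxels.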
[Experience with the clinical use of the PKS-25 and KTs-28 suturing devices].
Kalinina, T V
1976-01-01
A study of the experience gained during many years of surgical use of the PKS-25 stapler for establishing esophageal-intestinal anastomoses and of the KTs-28 apparatus for anastomosing the colon with superjacent segments of the large intestine confirmed their efficient performance. Their utilization makes it possible to reduce the percentage of lethal outcomes due to inadequacy of the anastomosis sutures following operations involving gastrectomy and resection of the cardia, esophagus, and segments of the large intestine.
DIGITAL CARTOGRAPHY AIDS IN THE SOLUTION OF BOUNDARY DISPUTE.
Beck, Francis J.
1983-01-01
The boundary between the States of Ohio and Kentucky and Indiana and Kentucky has been in dispute for many years. A major breakthrough in this continuing dispute has been a recent agreement between the States to accept the boundary line as depicted on U. S. Geological Survey 7. 5-minute quadrangle maps. A new segment of the boundary line was established utilizing the shoreline depicted on the 1966 U. S. Army Corps of Engineers charts. Segments of the boundary were then digitized from the quadrangle maps.
Power Spectral Density of Markov Texture Fields
NASA Technical Reports Server (NTRS)
Shanmugan, K. S.; Holtzman, J. C.
1984-01-01
Texture is an important image characteristic. A variety of spatial-domain techniques have been proposed for extracting and utilizing textural features for segmenting and classifying images. For the most part, these spatial-domain techniques are ad hoc in nature. A Markov random field model for image texture is discussed. A frequency-domain description of image texture is derived in terms of the power spectral density. This model is used for designing optimum frequency-domain filters for enhancing, restoring, and segmenting images based on their textural properties.
UNICOS Kernel Internals Application Development
NASA Technical Reports Server (NTRS)
Caredo, Nicholas; Craw, James M. (Technical Monitor)
1995-01-01
An understanding of UNICOS kernel internals is valuable. However, having the knowledge is only half the value; the second half comes with knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be utilized. This paper discusses the intricacies of developing utilities that utilize kernel information. In addition, algorithms, logic, and code for accessing kernel information will be discussed. Code segments will be provided that demonstrate how to locate and read kernel structures. Types of applications that can utilize kernel information will also be discussed.
Badahdah, Abdallah; Le, Kien Trung
2016-06-01
Research has shown a connection between negative parenting practices and child conduct problems. One of the most commonly used measures to assess parenting practices is the Alabama Parenting Questionnaire (APQ). The current study aimed to culturally adapt and assess the psychometric properties of a short version of the APQ for use in Arabic cultures, using a sample of 251 Qatari parents of children ages 4-12. An exploratory factor analysis yielded a five-factor solution that corresponds to the model proposed for the full version of the APQ. The five constructs of the APQ correlated in the expected direction with the Conduct Problems Subscale from the Strengths and Difficulties Questionnaire. This study provides support for the utility of the 15-item short version of the APQ in Arabic cultures. More studies are needed to validate the performance of the short version of the APQ in clinical settings.
The Short Form 36 English and Chinese versions were equivalent in a multiethnic Asian population.
Tan, Maudrene L S; Wee, Hwee-Lin; Lee, Jeannette; Ma, Stefan; Heng, Derrick; Tai, E-Shyong; Thumboo, Julian
2013-07-01
The primary aim of this article was to evaluate measurement equivalence of the English and Chinese versions of the Short Form 36 version 2 (SF-36v2) and Short Form 6D (SF-6D). In this cross-sectional study, health-related quality of life (HRQoL) was measured from 4,973 ethnic Chinese subjects using the SF-36v2 questionnaire. Measurement equivalence of domain and utility scores for the English- and Chinese-language SF-36v2 and SF-6D was assessed by examining the score differences between the two languages using linear regression models, with and without adjustment for known determinants of HRQoL. Equivalence was achieved if the 90% confidence interval (CI) of the differences in scores due to language fell within a predefined equivalence margin. Compared with English-speaking Chinese, Chinese-speaking Chinese were significantly older (47.6 vs. 55.5 years). All SF-36v2 domains were equivalent after adjusting for known determinants of HRQoL. For the SF-6D utility and item scores, the 90% CIs either fully or partially overlapped the predefined equivalence margins. The English- and Chinese-language versions of the SF-36v2 and SF-6D demonstrated equivalence. Copyright © 2013 Elsevier Inc. All rights reserved.
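The equivalence criterion described, a 90% CI of the between-language score difference falling entirely inside a predefined margin, can be sketched as follows. This assumes per-subject score differences are available and uses a simple normal-approximation CI, which is a simplification of the regression-adjusted approach in the study.

```python
import math

def equivalent(diffs, margin, z=1.645):
    """Return True if the 90% CI of the mean difference lies entirely
    within [-margin, +margin] (z = 1.645 for a two-sided 90% CI)."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)
    return (mean - half) > -margin and (mean + half) < margin
```

Note the logic: a CI that even partially crosses the margin fails the test, which is why the SF-6D scores with partially overlapping CIs are reported separately from the fully equivalent SF-36v2 domains.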
Collister, Barbara; Stein, Glenda; Katz, Deborah; DeBruyn, Joan; Andrusiw, Linda; Cloutier, Sheila
2012-01-01
Increasing costs and budget reductions combined with increasing demand from our growing, aging population support the need to ensure that the scarce resources allocated to home care clients match client needs. This article details how Integrated Home Care for the Calgary Zone of Alberta Health Services considered ethical and economic principles and used data from the Resident Assessment Instrument for Home Care (RAI-HC) and case mix indices from the Resource Utilization Groups Version III for Home Care (RUG-III/HC) to formulate service guidelines. These explicit service guidelines formalize and support individual resource allocation decisions made by case managers and provide a consistent and transparent method of allocating limited resources.
Joint Segmentation and Deformable Registration of Brain Scans Guided by a Tumor Growth Model
Gooya, Ali; Pohl, Kilian M.; Bilello, Michel; Biros, George; Davatzikos, Christos
2011-01-01
This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR) and validated the result by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert-provided ground truth. PMID:21995070
Electric Power Distribution System Model Simplification Using Segment Substitution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat
Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
Improved brain tumor segmentation by utilizing tumor growth model in longitudinal brain MRI
NASA Astrophysics Data System (ADS)
Pei, Linmin; Reza, Syed M. S.; Li, Wei; Davatzikos, Christos; Iftekharuddin, Khan M.
2017-03-01
In this work, we propose a novel method to improve texture-based tumor segmentation by fusing cell density patterns that are generated from tumor growth modeling. To model tumor growth, we solve the reaction-diffusion equation using the Lattice-Boltzmann method (LBM). Computational tumor growth modeling obtains the cell density distribution that potentially indicates the predicted tissue locations in the brain over time. These density patterns are then used as novel features, along with texture features (such as fractal and multifractal Brownian motion (mBm)) and intensity features in MRI, for improved brain tumor segmentation. We evaluate the proposed method with about one hundred longitudinal MRI scans from five patients obtained from the public BRATS 2015 data set, validated against the ground truth. The results show significant improvement of complete tumor segmentation using ANOVA analysis for five patients in longitudinal MR images.
Improved brain tumor segmentation by utilizing tumor growth model in longitudinal brain MRI.
Pei, Linmin; Reza, Syed M S; Li, Wei; Davatzikos, Christos; Iftekharuddin, Khan M
2017-02-11
In this work, we propose a novel method to improve texture-based tumor segmentation by fusing cell density patterns that are generated from tumor growth modeling. In order to model tumor growth, we solve the reaction-diffusion equation using the Lattice-Boltzmann method (LBM). Computational tumor growth modeling obtains the cell density distribution that potentially indicates the predicted tissue locations in the brain over time. These density patterns are then used as novel features, along with texture features (such as fractal and multifractal Brownian motion (mBm)) and intensity features in MRI, for improved brain tumor segmentation. We evaluate the proposed method with about one hundred longitudinal MRI scans from five patients obtained from the public BRATS 2015 data set, validated against the ground truth. The results show significant improvement of complete tumor segmentation using ANOVA analysis for five patients in longitudinal MR images.
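The reaction-diffusion growth model above can be sketched in one dimension. This is a plain explicit finite-difference scheme rather than the Lattice-Boltzmann method the authors use, intended only to illustrate how diffusion spreads the cell density c while logistic growth increases it; all parameter values here are illustrative assumptions:

```python
def fisher_kpp_step(c, D=0.1, rho=0.5, dx=1.0, dt=0.1):
    """One explicit finite-difference step of the reaction-diffusion
    (Fisher-KPP) equation dc/dt = D * d2c/dx2 + rho * c * (1 - c) in 1-D.

    The abstract solves this equation with a Lattice-Boltzmann method;
    this simpler scheme only illustrates the model: diffusion spreads
    the tumor cell density c while logistic growth increases it.
    """
    n = len(c)
    out = c[:]
    for i in range(n):
        left = c[i - 1] if i > 0 else c[i]        # zero-flux boundaries
        right = c[i + 1] if i < n - 1 else c[i]
        lap = (left - 2 * c[i] + right) / dx ** 2  # discrete Laplacian
        out[i] = c[i] + dt * (D * lap + rho * c[i] * (1 - c[i]))
    return out
```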
Crowd motion segmentation and behavior recognition fusing streak flow and collectiveness
NASA Astrophysics Data System (ADS)
Gao, Mingliang; Jiang, Jun; Shen, Jin; Zou, Guofeng; Fu, Guixia
2018-04-01
Crowd motion segmentation and crowd behavior recognition are two hot issues in computer vision, and a number of methods have been proposed to tackle them. Among these methods, flow dynamics is typically used to model crowd motion, with little consideration of collective properties. Moreover, traditional crowd behavior recognition methods treat local and dynamic features separately and overlook the interconnection of topological and dynamical heterogeneity in complex crowd processes. Here, a crowd motion segmentation method and a crowd behavior recognition method are proposed based on streak flow and crowd collectiveness. The streak flow is adopted to reveal the dynamical property of crowd motion, and the collectiveness is incorporated to reveal the structural property. Experimental results show that the proposed methods improve crowd motion segmentation accuracy and crowd behavior recognition rates compared with state-of-the-art methods.
Statistical Inference in Hidden Markov Models Using k-Segment Constraints
Titsias, Michalis K.; Holmes, Christopher C.; Yau, Christopher
2016-01-01
Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward–backward algorithm. In this article, we expand the amount of information we can obtain from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint on the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online. PMID:27226674
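For context, the standard Viterbi recursion the abstract contrasts against can be sketched as follows (a generic textbook implementation, not the paper's k-segment recursions, which additionally condition on the segment count):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Standard Viterbi algorithm: the MAP hidden-state sequence of an
    HMM, as referenced in the abstract."""
    # best[t][s] = probability of the best path ending in state s at time t
    best = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        best.append({})
        back.append({})
        for s in states:
            prev, p = max(((r, best[t - 1][r] * trans_p[r][s]) for r in states),
                          key=lambda rp: rp[1])
            best[t][s] = p * emit_p[s][obs[t]]
            back[t][s] = prev
    # trace back from the most probable final state
    last = max(states, key=lambda s: best[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```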
Primary nerve grafting: A study of revascularization.
Chalfoun, Charbel; Scholz, Thomas; Cole, Matthew D; Steward, Earl; Vanderkam, Victoria; Evans, Gregory R D
2003-01-01
It was the purpose of this study to evaluate the revascularization of primary nerve repairs and grafts using orthogonal polarization spectral (OPS) (Cytometrix, Inc.) imaging, a novel method for real-time evaluation of microcirculatory blood flow. Twenty male Sprague-Dawley rats (250 g) were anesthetized with vaporized halothane and surgically prepared for common peroneal nerve resection. Group I animals (n = 10) underwent primary neurorrhaphy following transection, utilizing a microsurgical technique with 10-0 nylon suture. Group II animals (n = 10) had a 7-mm segment of nerve excised, reversed, and subsequently replaced as a nerve graft under similar techniques. All animals were evaluated using the OPS imaging system on three portions (proximal, transection site/graft, and distal) of the nerve following repair or grafting. Reevaluation of 5 animals randomly selected from each group using the OPS imaging system was again performed on days 14 and 28 following microsurgical repair/grafting. Values were determined by percent change in vascularity of the common peroneal nerve relative to 0 hr following surgery. Real-time evaluation of blood flow was utilized as an additional objective criterion. Percent vascularity in group I and II animals increased from baseline in all segments at day 14. By day 28, vascularity in nerves of group I rats decreased in all segments to values below baseline, with the exception of the transection site, which remained at a higher value than obtained directly after surgical repair. In group II animals, vascularity remained above baseline in all segments except the distal segment, which returned to vascularity levels similar to those at 0 hr. Further, the occlusion of vessels observed in the graft and distal segments following initial transection appeared to be corrected. This study suggests that revascularization may occur via bidirectional inosculation with favored proximal vascular growth advancement.
The use of real-time imaging offers a unique evaluation of tissues through emerging technologies. Copyright 2003 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Tsagaan, Baigalmaa; Abe, Keiichi; Goto, Masahiro; Yamamoto, Seiji; Terakawa, Susumu
2006-03-01
This paper presents a method for segmenting brain tissues in MR images, developed for our image-guided neurosurgery system. Our goal is to segment brain tissues for creating a biomechanical model. The proposed segmentation method is based on 3-D region growing and improves on conventional approaches through stepwise use of intensity similarity between voxels in conjunction with edge information. Since intensity and edge information are complementary in region-based segmentation, we use them twice, performing a coarse-to-fine extraction. First, the edge information in an appropriate neighborhood of the voxel under consideration is examined to constrain the region growing. The expanded region from this first extraction is then used as the domain for the next stage, in which only the intensity and edge information of the current voxel are utilized for the final extraction. Before segmentation, the intensity parameters of the brain tissues, as well as the partial volume effect, are estimated using the expectation-maximization (EM) algorithm to provide accurate data interpretation for the extraction. We tested the proposed method on T1-weighted MR images of the brain and evaluated segmentation effectiveness by comparing the results with ground truths. Meshes generated from the segmented brain volume using mesh-generating software are also shown in this paper.
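The region-growing core of such a method can be sketched as follows. This is a minimal intensity-similarity version on a 2-D array with a hypothetical tolerance parameter; the paper's method additionally constrains growth with edge information and EM-estimated tissue parameters:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Minimal 2-D region growing by intensity similarity.

    Starting from a seed pixel, 4-connected neighbors are added while
    their intensity stays within `tol` of the seed intensity. The
    paper's method additionally uses edge information and EM-estimated
    tissue parameters, omitted here.
    """
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region
```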
NASA Astrophysics Data System (ADS)
Cheng, Guanghui; Yang, Xiaofeng; Wu, Ning; Xu, Zhijian; Zhao, Hongfu; Wang, Yuefeng; Liu, Tian
2013-02-01
Xerostomia (dry mouth), resulting from radiation damage to the parotid glands, is one of the most common and distressing side effects of head-and-neck cancer radiotherapy. Recent MRI studies have demonstrated that the volume reduction of parotid glands is an important indicator of radiation damage and xerostomia. In the clinic, parotid-volume evaluation is exclusively based on physicians' manual contours. However, manual contouring is time-consuming and prone to inter-observer and intra-observer variability. Here, we report a fully automated multi-atlas-based registration method for parotid-gland delineation in 3D head-and-neck MR images. The multi-atlas segmentation utilizes a hybrid deformable image registration to map the target subject to multiple patients' images, applies the transformation to the corresponding segmented parotid glands, and subsequently uses the multiple patient-specific pairs (head-and-neck MR image and transformed parotid-gland mask) to train a support vector machine (SVM) to reach consensus on the segmentation of the target subject's parotid gland. This segmentation algorithm was tested with head-and-neck MRIs of 5 patients following radiotherapy for nasopharyngeal cancer. The average parotid-gland volume overlap between the automatic segmentations and the physicians' manual contours was 85%. In conclusion, we have demonstrated the feasibility of an automatic multi-atlas-based segmentation algorithm to segment parotid glands in head-and-neck MR images.
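The label-fusion step of such a multi-atlas pipeline can be illustrated with per-voxel majority voting, the classical fusion rule. Note this is a simplification: the abstract's method trains an SVM to reach consensus instead:

```python
def majority_vote_fusion(masks):
    """Fuse multiple propagated atlas masks by per-voxel majority vote.

    `masks` is a list of equal-length flattened binary masks, one per
    atlas. The abstract reaches consensus with an SVM; plain majority
    voting, shown here, is the simpler classical rule for this step.
    """
    n = len(masks)
    fused = []
    for voxels in zip(*masks):          # iterate voxel-wise across atlases
        fused.append(1 if sum(voxels) * 2 > n else 0)  # strict majority
    return fused
```

With an even number of atlases, ties are resolved to background here; a real pipeline would weight votes or, as in the abstract, learn the consensus.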
NASA's Nuclear Thermal Propulsion Project
NASA Technical Reports Server (NTRS)
Houts, Michael; Mitchell, Sonny; Kim, Tony; Borowski, Stanley; Power, Kevin; Scott, John; Belvin, Anthony; Clement, Steven
2015-01-01
Space fission power systems can provide a power rich environment anywhere in the solar system, independent of available sunlight. Space fission propulsion offers the potential for enabling rapid, affordable access to any point in the solar system. One type of space fission propulsion is Nuclear Thermal Propulsion (NTP). NTP systems operate by using a fission reactor to heat hydrogen to very high temperature (>2500 K) and expanding the hot hydrogen through a supersonic nozzle. First generation NTP systems are designed to have an Isp of approximately 900 s. The high Isp of NTP enables rapid crew transfer to destinations such as Mars, and can also help reduce mission cost, improve logistics (fewer launches), and provide other benefits. However, for NTP systems to be utilized they must be affordable and viable to develop. NASA's Advanced Exploration Systems (AES) NTP project is a technology development project that will help assess the affordability and viability of NTP. Early work has included fabrication of representative graphite composite fuel element segments, coating of representative graphite composite fuel element segments, fabrication of representative cermet fuel element segments, and testing of fuel element segments in the Compact Fuel Element Environmental Tester (CFEET). Near-term activities will include testing approximately 16" fuel element segments in the Nuclear Thermal Rocket Element Environmental Simulator (NTREES), and ongoing research into improving fuel microstructure and coatings. In addition to recapturing fuels technology, affordable development, qualification, and utilization strategies must be devised. Options such as using low-enriched uranium (LEU) instead of highly-enriched uranium (HEU) are being assessed, although that option requires development of a key technology before it can be applied to NTP in the thrust range of interest. 
Ground test facilities will be required, especially if NTP is to be used in conjunction with high value or crewed missions. There are potential options for either modifying existing facilities or constructing new ground test facilities. At least three potential options exist for reducing (or eliminating) the release of radioactivity into the environment during ground testing. These include fully containing the NTP exhaust during the ground test, scrubbing the exhaust, or utilizing an existing borehole at the Nevada National Security Site (NNSS) to filter the exhaust. Finally, the project is considering the potential for an early flight demonstration of an engine very similar to one that could be used to support human Mars or other ambitious missions. The flight demonstration could be an important step towards the eventual utilization of NTP.
Identifying Degenerative Brain Disease Using Rough Set Classifier Based on Wavelet Packet Method.
Cheng, Ching-Hsue; Liu, Wei-Xiang
2018-05-28
Population aging has become a worldwide phenomenon that causes many serious problems, and medical issues related to degenerative brain disease have gradually become a concern. Magnetic Resonance Imaging is one of the most advanced methods of medical imaging and is especially suitable for brain scans. From the literature, although automatic segmentation is less laborious and time-consuming, it is restricted to several specific types of images; hybrid segmentation techniques, in contrast, improve on the shortcomings of single segmentation methods. Therefore, this study proposed a hybrid segmentation combined with a rough set classifier and the wavelet packet method to identify degenerative brain disease. The proposed method is a three-stage image-processing method that enhances the accuracy of brain disease classification. In the first stage, the proposed hybrid segmentation algorithms segment the brain ROI (region of interest). In the second stage, the wavelet packet is used to decompose the image and calculate feature values. In the final stage, the rough set classifier is utilized to identify the degenerative brain disease. For verification and comparison, two experiments were employed to verify the effectiveness of the proposed method and compare it with the TV-seg (total variation segmentation) algorithm, the Discrete Cosine Transform, and the listed classifiers. Overall, the results indicated that the proposed method outperforms the listed methods.
NASA Astrophysics Data System (ADS)
Luo, Yun-Gang; Ko, Jacky Kl; Shi, Lin; Guan, Yuefeng; Li, Linong; Qin, Jing; Heng, Pheng-Ann; Chu, Winnie Cw; Wang, Defeng
2015-07-01
Myocardial iron loading in thalassemia patients can be identified using T2* magnetic resonance imaging (MRI). To quantitatively assess cardiac iron loading, we propose an effective algorithm for segmenting the myocardium in aligned free induction decay sequential images, based on morphological operations and the geodesic active contour (GAC). Patients with thalassemia major (10 male and 16 female) were recruited to undergo a thoracic MRI scan in the short-axis view. Free induction decay images were registered for T2* mapping. The GAC was utilized to segment the aligned MR images with a robust initialization. Segmented myocardium regions were divided into sectors for a region-based quantification of cardiac iron loading. Our automatic segmentation approach achieved a true positive rate of 84.6% and a false positive rate of 53.8%. The area difference between manual and automatic segmentation was 25.5% after 1000 iterations. Results from the T2* analysis indicated that regions with values lower than 20 ms suffered from heavy iron loading in thalassemia major patients. The proposed method benefits from the abundant edge information of the free induction decay sequential MRI. Experimental results demonstrate that the proposed method is feasible for myocardium segmentation and clinically applicable for measuring myocardial iron loading.
Passive, wireless corrosion sensors for transportation infrastructure.
DOT National Transportation Integrated Search
2011-07-01
Many industrial segments including utilities, manufacturing, government and infrastructure have an urgent need for a means to detect corrosion before significant damage occurs. Transportation infrastructure, such as bridges and roads, rely on reinfor...
Lüddemann, Tobias; Egger, Jan
2016-04-01
Among all types of cancer, gynecological malignancies belong to the fourth most frequent type of cancer among women. In addition to chemotherapy and external beam radiation, brachytherapy is the standard procedure for the treatment of these malignancies. In the progress of treatment planning, localization of the tumor as the target volume and adjacent organs at risk by segmentation is crucial to accomplish an optimal radiation distribution to the tumor while simultaneously preserving healthy tissue. Segmentation is performed manually and represents a time-consuming task in clinical daily routine. This study focuses on the segmentation of the rectum/sigmoid colon as an organ-at-risk in gynecological brachytherapy. The proposed segmentation method uses an interactive, graph-based segmentation scheme with a user-defined template. The scheme creates a directed two-dimensional graph, followed by the minimal cost closed set computation on the graph, resulting in an outlining of the rectum. The graph's outline is dynamically adapted to the last calculated cut. Evaluation was performed by comparing manual segmentations of the rectum/sigmoid colon to results achieved with the proposed method. The comparison of the algorithmic to the manual result yielded a Dice similarity coefficient of [Formula: see text], compared to [Formula: see text] for two manual segmentations by the same physician. Utilizing the proposed methodology resulted in a median time of [Formula: see text], compared to 300 s needed for pure manual segmentation.
Taljaard, Monica; McKenzie, Joanne E; Ramsay, Craig R; Grimshaw, Jeremy M
2014-06-19
An interrupted time series design is a powerful quasi-experimental approach for evaluating effects of interventions introduced at a specific point in time. To utilize the strength of this design, a modification to standard regression analysis, such as segmented regression, is required. In segmented regression analysis, the change in intercept and/or slope from pre- to post-intervention is estimated and used to test causal hypotheses about the intervention. We illustrate segmented regression using data from a previously published study that evaluated the effectiveness of a collaborative intervention to improve quality in pre-hospital ambulance care for acute myocardial infarction (AMI) and stroke. In the original analysis, a standard regression model was used with time as a continuous variable. We contrast the results from this standard regression analysis with those from segmented regression analysis. We discuss the limitations of the former and advantages of the latter, as well as the challenges of using segmented regression in analysing complex quality improvement interventions. Based on the estimated change in intercept and slope from pre- to post-intervention using segmented regression, we found insufficient evidence of a statistically significant effect on quality of care for stroke, although potential clinically important effects for AMI cannot be ruled out. Segmented regression analysis is the recommended approach for analysing data from an interrupted time series study. Several modifications to the basic segmented regression analysis approach are available to deal with challenges arising in the evaluation of complex quality improvement interventions.
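The segmented regression model described above can be sketched as an ordinary least-squares fit with level-change and slope-change regressors (a minimal sketch with an assumed single intervention time; real analyses would also address autocorrelation):

```python
import numpy as np

def segmented_regression(y, change_point):
    """Segmented (interrupted time-series) regression sketch.

    Fits y_t = b0 + b1*t + b2*level_t + b3*(t - T)*level_t, where
    level_t is 1 from the intervention time T onward: b2 is the change
    in intercept (level) and b3 the change in slope, the two quantities
    used to test causal hypotheses about the intervention.
    """
    n = len(y)
    t = np.arange(n, dtype=float)
    level = (t >= change_point).astype(float)
    X = np.column_stack([np.ones(n), t, level, (t - change_point) * level])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return coef  # [intercept, pre-slope, level change, slope change]
```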
Interactive and scale invariant segmentation of the rectum/sigmoid via user-defined templates
NASA Astrophysics Data System (ADS)
Lüddemann, Tobias; Egger, Jan
2016-03-01
Among all types of cancer, gynecological malignancies belong to the fourth most frequent type of cancer among women. Besides chemotherapy and external beam radiation, brachytherapy is the standard procedure for the treatment of these malignancies. In the progress of treatment planning, localization of the tumor as the target volume and adjacent organs at risk by segmentation is crucial to accomplish an optimal radiation distribution to the tumor while simultaneously preserving healthy tissue. Segmentation is performed manually and represents a time-consuming task in clinical daily routine. This study focuses on the segmentation of the rectum/sigmoid colon as an Organ-At-Risk in gynecological brachytherapy. The proposed segmentation method uses an interactive, graph-based segmentation scheme with a user-defined template. The scheme creates a directed two-dimensional graph, followed by the minimal cost closed set computation on the graph, resulting in an outlining of the rectum. The graph's outline is dynamically adapted to the last calculated cut. Evaluation was performed by comparing manual segmentations of the rectum/sigmoid colon with results achieved by the proposed method. The comparison of the algorithmic to the manual results yielded a Dice Similarity Coefficient of 83.85+/-4.08%, compared to 83.97+/-8.08% for two manual segmentations by the same physician. Utilizing the proposed methodology resulted in a median time of 128 seconds per dataset, compared to 300 seconds for pure manual segmentation.
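The Dice Similarity Coefficient used for evaluation above is simple to compute; a minimal sketch for flattened binary masks:

```python
def dice(a, b):
    """Dice Similarity Coefficient between two flattened binary masks:
    2 * |A intersect B| / (|A| + |B|). Returns 1.0 for two empty masks
    (a convention of this sketch)."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    size = sum(a) + sum(b)
    return 2.0 * inter / size if size else 1.0
```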
ERIC Educational Resources Information Center
Boughan, Karl
In an effort to better market the college's credit programs and services, Prince George's Community College (PGCC), Maryland, has employed its own tracking system which utilizes a socioeconomic segmentation of their serviceable target population. This approach utilizes U.S. Census data grouping neighborhoods into 24 natural socioeconomic, cultural…
ERIC Educational Resources Information Center
Boughan, Karl
In an effort to better market the college's programs and services, Prince George's Community College (PGCC), Maryland, has employed its own tracking system which utilizes a socioeconomic segmentation of their serviceable target population. This approach utilizes U.S. Census data grouping neighborhoods into natural socioeconomic, cultural, and…
Chaotic dynamics and thermodynamics of periodic systems with long-range forces
NASA Astrophysics Data System (ADS)
Kumar, Pankaj
Gravitational and electromagnetic interactions form the backbone of our theoretical understanding of the universe. While, in general, such interactions are analytically inexpressible for three-dimensional infinite systems, one-dimensional modeling allows one to treat the long-range forces exactly. Not only are one-dimensional systems of profound intrinsic interest, but physicists also often rely on one-dimensional models as a starting point in the analysis of their more complicated higher-dimensional counterparts. In the analysis of large systems considered in cosmology and plasma physics, periodic boundary conditions are a natural choice and have been utilized in the study of one-dimensional Coulombic and gravitational systems. Such studies often employ numerical simulations to validate theoretical predictions, and in cases where theoretical relations have not been mathematically formulated, numerical simulations serve as a powerful method for characterizing a system's physical properties. In this dissertation, analytic techniques are formulated to express the exact phase-space dynamics of spatially-periodic one-dimensional Coulombic and gravitational systems. Closed-form versions of the Hamiltonian and the electric field are derived for single-component and two-component Coulombic systems, placing the two on the same footing as the gravitational counterpart. Furthermore, it is demonstrated that a three-body variant of the spatially-periodic Coulombic or gravitational system may be reduced isomorphically to a periodic system of a single particle in a two-dimensional rhombic potential. The analytic results are utilized for developing and implementing efficient computational tools to study the dynamical and thermodynamic properties of the systems without resorting to numerical approximations.
Event-driven algorithms are devised to obtain Lyapunov spectra, the radial distribution function, pressure, the caloric curve, and Poincaré surfaces of section through an N-body molecular-dynamics approach. The simulation results for the three-body systems show that the motion exhibits chaotic, quasiperiodic, and periodic behaviors in segmented regions of the phase space. The results for the large versions of the single-component and two-component Coulombic systems show no clear-cut indication of a phase transition. However, as predicted by the theoretical treatment, the simulated temperature dependences of energy, pressure, and the Lyapunov exponent for the gravitational system indicate a phase transition, and the critical temperature obtained in simulation agrees well with that from the theory.
NASA Astrophysics Data System (ADS)
Wang, Z.; Li, T.; Pan, L.; Kang, Z.
2017-09-01
With increasing attention to the indoor environment and the development of low-cost RGB-D sensors, indoor RGB-D images are easily acquired. However, scene semantic segmentation is still an open area, which restricts indoor applications. Depth information can help to distinguish regions that are difficult to segment from RGB images alone because of similar color or texture in indoor scenes; how to utilize the depth information is the key problem of semantic segmentation for RGB-D images. In this paper, we propose an encoder-decoder Fully Convolutional Network for RGB-D image classification. We use the Multiple Kernel Maximum Mean Discrepancy (MK-MMD) as a distance measure to find common and distinctive features of the RGB and depth images in the network, enhancing classification performance automatically. To explore better ways of applying MMD, we designed two strategies: the first calculates MMD for each feature map, and the other calculates MMD over whole-batch features. Based on the classification result, we use fully connected CRFs for the semantic segmentation. The experimental results show that our method achieves good performance on indoor RGB-D image semantic segmentation.
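The MMD distance the abstract builds on can be sketched with a single RBF kernel (MK-MMD combines several such kernels; the bandwidth gamma here is an illustrative assumption):

```python
import math

def mmd_rbf(xs, ys, gamma=1.0):
    """Empirical Maximum Mean Discrepancy (biased V-statistic form)
    between two 1-D samples with a single RBF kernel.

    MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]; it is near zero
    when the two samples come from the same distribution. MK-MMD, as in
    the abstract, combines several kernels with learned weights.
    """
    def k(a, b):
        return math.exp(-gamma * (a - b) ** 2)

    def mean_k(u, v):
        return sum(k(a, b) for a in u for b in v) / (len(u) * len(v))

    return mean_k(xs, xs) + mean_k(ys, ys) - 2 * mean_k(xs, ys)
```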
NASA Astrophysics Data System (ADS)
Kłeczek, Paweł; Dyduch, Grzegorz; Jaworek-Korjakowska, Joanna; Tadeusiewicz, Ryszard
2017-03-01
Background: The epidermis is an important observation area for the diagnosis of inflammatory skin diseases and skin cancers; therefore, in order to develop a computer-aided diagnosis system, segmentation of the epidermis area is usually an essential initial step. This study presents an automated and robust method for epidermis segmentation in whole-slide histopathological images of human skin stained with hematoxylin and eosin. Methods: The proposed method performs epidermis segmentation based on information about the shape and distribution of transparent regions in a slide image and about the distribution and concentration of the hematoxylin and eosin stains. It utilizes domain-specific knowledge of morphometric and biochemical properties of skin tissue elements to segment the relevant histopathological structures in human skin. Results: Experimental results on 88 skin histopathological images from three different sources show that the proposed method segments the epidermis with a mean sensitivity of 87%, a mean specificity of 95%, and a mean precision of 57%. It is robust to inter- and intra-image variations in both staining and illumination, and makes no assumptions about the type of skin disorder. The proposed method provides superior performance compared to existing techniques.
Automatic tissue image segmentation based on image processing and deep learning
NASA Astrophysics Data System (ADS)
Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting
2018-02-01
Image segmentation plays an important role in multimodality imaging, especially in the fusion of structural images from CT and MRI with functional images collected by optical or other novel imaging technologies. Image segmentation also provides the detailed structural description needed for quantitative visualization of treatment light distribution in the human body when incorporated with a 3D light transport simulation method. Here we used image enhancement, operators, and morphometry methods to extract accurate contours of different tissues, such as skull, cerebrospinal fluid (CSF), grey matter (GM), and white matter (WM), on 5 fMRI head image datasets. We then utilized a convolutional neural network to realize automatic segmentation of the images in a deep learning way, and introduced parallel computing. These approaches greatly reduced the processing time compared to manual and semi-automatic segmentation, which is of great importance in improving speed and accuracy as more and more samples are learned. Our results can be used as criteria when diagnosing diseases such as cerebral atrophy, which is caused by pathological changes in gray matter or white matter. We demonstrated the great potential of such combined image processing and deep learning for automatic tissue image segmentation in personalized medicine, especially in monitoring and treatment.
Vargas-Barron, Jesús; Antunez-Montes, Omar-Yassef; Roldán, Francisco-Javier; Aranda-Frausto, Alberto; González-Pacheco, Hector; Romero-Cardenas, Ángel; Zabalgoitia, Miguel
2015-01-01
Torrent-Guasp explains the structure of the ventricular myocardium by means of a helical muscular band. Our primary purpose was to demonstrate the utility of echocardiography in human and porcine hearts in identifying the segments of the myocardial band. The second purpose was to evaluate the relation of the topographic distribution of the myocardial band with some post-myocardial infarction ruptures. Five porcine and one human heart without cardiopathy were dissected and the ventricular myocardial segments were color-coded for illustration and reconstruction purposes. These segments were then correlated to the conventional echocardiographic images. Afterwards in three cases with post-myocardial infarction rupture, a correlation of the topographic location of the rupture with the distribution of the ventricular band was made. The human ventricular band does not show any differences from the porcine band, which confirms the similarities of the four segments; these segments could be identified by echocardiography. In three cases with myocardial rupture, a correlation of the intra-myocardial dissection with the distribution of the ventricular band was observed. Echocardiography is helpful in identifying the myocardial band segments as well as the correlation with the topographic distribution of some myocardial post-infarction ruptures.
[The organization of system of information support of regional health care].
Konovalov, A A
2014-01-01
A comparative analysis was carried out of architecture options for the regional segment of the unified public health care information system, within the framework of the program for modernization of the Nizhniy Novgorod health care system. Based on an analysis of the total cost of ownership of the information system, the author proposes means of increasing the effectiveness of public investment. An evaluation is given of progress toward the target program indicators and of the dynamics of the basic indicators of informatization in institutions of the oblast health care system.
Techniques for interpretation of geoid anomalies
NASA Technical Reports Server (NTRS)
Chapman, M. E.
1979-01-01
For purposes of geological interpretation, techniques are developed to compute directly the geoid anomaly over models of density within the earth. Ideal bodies such as line segments, vertical sheets, and rectangles are first used to calculate the geoid anomaly. Realistic bodies are modeled with formulas for two-dimensional polygons and three-dimensional polyhedra. By using Fourier transform methods the two-dimensional geoid is seen to be a filtered version of the gravity field, in which the long-wavelength components are magnified and the short-wavelength components diminished.
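The filtering relationship described above can be sketched with the standard flat-earth Fourier-domain relation N_k = Δg_k / (g0 k): dividing each gravity mode by its wavenumber magnifies long wavelengths relative to short ones. The mode amplitudes below are hypothetical:

```python
import math

def geoid_from_gravity_modes(modes, g0=9.81):
    """Scale each gravity-anomaly Fourier mode (k, amplitude) by 1/(g0*k).

    Flat-earth approximation N_k = dg_k / (g0 * k): low wavenumbers
    (long wavelengths) are amplified relative to high wavenumbers.
    """
    return [(k, amp / (g0 * k)) for k, amp in modes]

# Two gravity modes of equal amplitude: one long wavelength, one short.
modes = [(2 * math.pi / 1.0e6, 1.0e-5),   # 1000 km wavelength
         (2 * math.pi / 1.0e4, 1.0e-5)]   # 10 km wavelength
geoid = geoid_from_gravity_modes(modes)
ratio = geoid[0][1] / geoid[1][1]   # long/short amplitude ratio in the geoid
```

With equal gravity amplitudes, the geoid's long-wavelength mode comes out 100 times larger, mirroring the low-pass behaviour noted in the abstract.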
Astronaut Heidemarie M. Stefanyshyn-Piper During STS-115 Training
NASA Technical Reports Server (NTRS)
2005-01-01
Wearing a training version of the shuttle launch and entry suit, STS-115 astronaut and mission specialist, Heidemarie M. Stefanyshyn-Piper, puts the final touches on her suit donning process prior to the start of a water survival training session in the Neutral Buoyancy Laboratory (NBL) near Johnson Space Center. Launched on September 9, 2006, the STS-115 mission continued assembly of the International Space Station (ISS) with the installation of the truss segments P3 and P4.
Improved disparity map analysis through the fusion of monocular image segmentations
NASA Technical Reports Server (NTRS)
Perlant, Frederic P.; Mckeown, David M.
1991-01-01
The focus is to examine how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by the analysis of the original monocular imagery. The utilization of surface illumination information is provided by the segmentation of the monocular image into fine surface patches of nearly homogeneous intensity to remove mismatches generated during stereo matching. These patches are used to guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely with physical surfaces in the scene. Such a technique is quite independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented on a complex urban scene containing various man-made and natural features. This scene contains a variety of problems including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. Improvements due to monocular fusion with a set of different region-based image segmentations are demonstrated. The generality of this approach to stereo analysis and its utility in the development of general three-dimensional scene interpretation systems are also discussed.
Latency of TCP applications over the ATM-WAN using the GFR service category
NASA Astrophysics Data System (ADS)
Chen, Kuo-Hsien; Siliquini, John F.; Budrikis, Zigmantas
1998-10-01
The GFR service category has been proposed for data services in ATM networks. Since users are ultimately interested in data services that provide high efficiency and low latency, it is important to study the latency performance for data traffic of the GFR service category in an ATM network. Today much of the data traffic utilizes the TCP/IP protocol suite, and in this paper we study through simulation, using a realistic TCP traffic model, the latency of TCP applications running over a wide-area ATM network utilizing the GFR service category. From this study, we find that during congestion periods the reserved bandwidth in GFR can improve the latency performance for TCP applications. However, due to TCP 'Slow Start' data segment generation dynamics, we show that a large proportion of TCP segments are discarded under network congestion even when the reserved bandwidth is equal to the average generated rate of user data. Therefore, a user experiences worse-than-expected latency performance when the network is congested. In this study we also examine the effects of segment size on the latency performance of TCP applications using the GFR service category.
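The 'Slow Start' segment-generation dynamics mentioned above can be sketched with a minimal congestion-window model: the window doubles each round trip until a threshold, then grows linearly. This is a loss-free toy, and the mss and ssthresh values are illustrative, not taken from the paper's simulation:

```python
def slow_start_cwnd(segments_to_send, mss=1, ssthresh=16):
    """Per-round-trip bursts: cwnd doubles in slow start (exponential
    growth), then grows by one MSS per round trip (congestion avoidance)."""
    cwnd, sent, rounds = mss, 0, []
    while sent < segments_to_send:
        burst = min(cwnd, segments_to_send - sent)
        rounds.append(burst)
        sent += burst
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + mss
    return rounds
```

The exponential ramp-up is what produces the large bursts that overflow buffers during congestion, even when average reserved bandwidth is adequate.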
Campbell, Ian C.; Coudrillier, Baptiste; Mensah, Johanne; Abel, Richard L.; Ethier, C. Ross
2015-01-01
The lamina cribrosa (LC) is a tissue in the posterior eye with a complex trabecular microstructure. This tissue is of great research interest, as it is likely the initial site of retinal ganglion cell axonal damage in glaucoma. Unfortunately, the LC is difficult to access experimentally, and thus imaging techniques in tandem with image processing have emerged as powerful tools to study the microstructure and biomechanics of this tissue. Here, we present a staining approach to enhance the contrast of the microstructure in micro-computed tomography (micro-CT) imaging as well as a comparison between tissues imaged with micro-CT and second harmonic generation (SHG) microscopy. We then apply a modified version of Frangi's vesselness filter to automatically segment the connective tissue beams of the LC and determine the orientation of each beam. This approach successfully segmented the beams of a porcine optic nerve head from micro-CT in three dimensions and SHG microscopy in two dimensions. As an application of this filter, we present finite-element modelling of the posterior eye that suggests that connective tissue volume fraction is the major driving factor of LC biomechanics. We conclude that segmentation with Frangi's filter is a powerful tool for future image-driven studies of LC biomechanics. PMID:25589572
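The paper applies a modified version of Frangi's vesselness filter; the standard 2-D response, computed from the Hessian eigenvalues at each pixel, can be sketched as below. The beta and c values are conventional defaults, not the authors' settings:

```python
import math

def vesselness_2d(l1, l2, beta=0.5, c=15.0):
    """Standard 2-D Frangi vesselness from Hessian eigenvalues (|l1| <= |l2|).

    Bright tubular structures (e.g. connective tissue beams): l2 strongly
    negative while |l1| stays small.
    """
    if abs(l1) > abs(l2):
        l1, l2 = l2, l1          # enforce the eigenvalue ordering
    if l2 >= 0:                  # not a bright ridge
        return 0.0
    rb = l1 / l2                 # blobness: ~0 for tubes, ~1 for blobs
    s = math.hypot(l1, l2)       # second-order structureness
    return math.exp(-rb**2 / (2 * beta**2)) * (1 - math.exp(-s**2 / (2 * c**2)))
```

A tube-like eigenvalue pair scores high, a blob-like pair much lower, and bright-on-dark is rejected outright, which is what lets the filter isolate beam-like structures.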
Wind Evaluation Breadboard electronics and software
NASA Astrophysics Data System (ADS)
Núñez, Miguel; Reyes, Marcos; Viera, Teodora; Zuluaga, Pablo
2008-07-01
WEB, the Wind Evaluation Breadboard, is an Extremely Large Telescope primary mirror simulator, developed with the aim of quantifying the ability of a segmented primary mirror to cope with wind disturbances. This instrument, supported by the European Community (Framework Programme 6, ELT Design Study), was developed by ESO, IAC, MEDIA-ALTRAN, JUPASA and FOGALE. The WEB is a bench of about 20 tons and 7 meters in diameter emulating a segmented primary mirror and its cell, with seven hexagonal segment simulators, including electromechanical support systems. In this paper we present the WEB central control electronics and the software development, which has to interface with: position actuators, auxiliary slave actuators, edge sensors, azimuth ring, elevation actuator, meteorological station and air balloons enclosure. The set of subsystems to control is a reduced version of a real telescope segmented primary mirror control system with high real-time performance, but emphasizing development-time efficiency and flexibility, because WEB is a test bench. The paper includes a detailed description of the hardware and software, paying special attention to real-time performance. The hardware is composed of three computers, and the software architecture has been divided into three intercommunicating applications, implemented using LabVIEW over Windows XP and the Phar Lap ETS real-time operating system. The edge sensor and position actuator closed loop has a sampling and commanding frequency of 1 kHz.
Integrated Baseline System (IBS) Version 2.0: Utilities Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burford, M.J.; Downing, T.R.; Williams, J.R.
1994-03-01
The Integrated Baseline System (IBS) is an emergency management planning and analysis tool being developed under the direction of the US Army Nuclear and Chemical Agency. This Utilities Guide explains how you can use the IBS utility programs to manage and manipulate various kinds of IBS data. These programs include utilities for creating, editing, and displaying maps and other data that are referenced to geographic location. The intended audience for this document is chiefly data managers, but also system managers and some emergency management planners and analysts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Mathew; Bowen, Brian; Coles, Dwight
The Middleware Automated Deployment Utilities consist of three components. MAD: a utility designed to automate the deployment of Java applications to multiple Java application servers; the product contains a front-end web utility and back-end deployment scripts. MAR: a web front end to maintain and update the components inside the database. MWR-Encrypt: a web utility to convert a text string to an encrypted string that is used by the Oracle WebLogic application server; the encryption is done using the built-in functions of the Oracle WebLogic product and is mainly used to create an encrypted version of a database password.
The Social Mobility-Fertility Hypothesis: A Racial and Class Comparison Among Southern Females.
ERIC Educational Resources Information Center
White, Carolyn Delores
Utilizing data derived from the Southern Youth Study (a six-year, three-wave panel of 528 rural women from the Deep South), the utility of a version of the Social Mobility Fertility Hypothesis was investigated. Specifically, the relationship between mobility attitudes of sophomores and seniors and their subsequent fertility behavior was assessed…
The Utility of the Metacognitive Awareness Inventory for Teachers among In-Service Teachers
ERIC Educational Resources Information Center
Kallio, Heli; Virta, Kalle; Kallio, Manne; Virta, Arja; Hjardemaal, Finn Rudolf; Sandven, Jostein
2017-01-01
The purpose of the present study is to explore the utility of the compressed version of the Metacognitive Awareness Inventory for Teachers (MAIT-18) among in-service teachers. Knowledge of teachers' awareness of metacognition is required to support students' self-regulation, with the aim of establishing modern learning methods and life-long…
Harland, N J; Dawkin, M J; Martin, D
2015-03-01
Patients' subjective impression of change is an important construct to measure following physiotherapy, but little evidence exists about the best type of measure to use. The aim was to compare the construct validity and utility of two forms of a global subjective outcome scale (GSOS) in patients with back pain: Likert and visual analogue scale (VAS) GSOS. Two samples of patients attending physiotherapy for back pain completed a questionnaire battery at discharge from physiotherapy including either a Likert or VAS GSOS. One hundred and eighty-seven patients [79 males, mean age 52.1 years, standard deviation (SD) 15.5] completed the Likert GSOS, and a separate sample of 144 patients [62 males, mean age 55.7 (SD 15.9) years] completed the VAS GSOS upon discharge from physiotherapy. The two versions of the GSOS were compared using pre- and post-treatment changes in scores on a VAS (pain), the Roland-Morris Disability Questionnaire (18-item version), and the catastrophising subscale of the Coping Strategies Questionnaire 24. Both versions of the GSOS showed significant (P<0.01) moderate correlations (r between 0.30 and 0.46) with changes in pain and disability. The correlations between the two types of GSOS and changes in catastrophising were trivial and not significant (Likert GSOS: r=0.07, P=0.372; VAS GSOS: r=0.10, P=0.267). There were fewer missing values in the Likert GSOS (1%) compared with the VAS GSOS (8%). The two versions of the GSOS showed similar validity; however, use of the Likert GSOS is recommended because of its greater utility. Copyright © 2014 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
Inexpensive health care reform: the mathematics of medicine.
Forsyth, Roger A
2010-02-01
There are data to support the hypothesis that US healthcare reform will require systemic changes in the delivery system rather than a segment-by-segment approach to improving individual components such as administrative or pharmaceutical costs or illness-by-illness programs such as comparative effectiveness or disease management. Mathematically, personnel costs provide the largest potential for savings. These costs are reflected in utilization rates. However, when governments or insurers try to control utilization, shortages or dissatisfaction ensue. Therefore, reform should be structured to encourage individually initiated reductions in utilization. This can be facilitated by changing from employer-paid comprehensive group policies of variable coverage to a three-part, standardized, individually purchased group policy with a targeted deductible and co-pays that provide disincentives to over-utilization and incentives (refunds on unused contributions) to reduce utilization. There will be a public health policy (maternal, infant, and immunizations) that will be very inexpensive and not subject to any disincentives, a catastrophic policy with a deductible and enhanced but diminishing co-pays, and a Health Savings Account that pre-positions funds to cover the deductible and co-pays. These changes will lead to a reduction in administrative costs. The excess capacity created will provide care for the currently uninsured. Savings will be refunded to individuals, thereby generating taxes that can pay for needed subsidies. Reform can be inexpensive if it puts the mathematics before the politics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garikapati, Venu; Astroza, Sebastian; Pendyala, Ram M.
Travel model systems often adopt a single decision structure that links several activity-travel choices together. The single decision structure is then used to predict activity-travel choices, with those downstream in the decision-making chain influenced by those upstream in the sequence. The adoption of a singular sequential causal structure to depict relationships among activity-travel choices in travel demand model systems ignores the possibility that some choices are made jointly as a bundle as well as the possible presence of structural heterogeneity in the population with respect to decision-making processes. As different segments in the population may adopt and follow different causal decision-making mechanisms when making selected choices jointly, it would be of value to develop simultaneous equations model systems relating multiple endogenous choice variables that are able to identify population subgroups following alternative causal decision structures. Because the segments are not known a priori, they are considered latent and determined endogenously within a joint modeling framework proposed in this paper. The methodology is applied to a national mobility survey data set to identify population segments that follow different causal structures relating residential location choice, vehicle ownership, and car-share and mobility service usage. It is found that the model revealing three distinct latent segments best describes the data, confirming the efficacy of the modeling approach and the existence of structural heterogeneity in decision-making in the population. Future versions of activity-travel model systems should strive to incorporate such structural heterogeneity to better reflect varying decision processes across population subgroups.
Neves, Felipe Silva; Leandro, Danielle Aparecida Barbosa; Silva, Fabiana Almeida da; Netto, Michele Pereira; Oliveira, Renata Maria Souza; Cândido, Ana Paula Carlos
2015-01-01
To analyze the predictive capacity of the vertical segmental tetrapolar bioimpedance apparatus in the detection of excess weight in adolescents, using tetrapolar bioelectrical impedance as a reference. This was a cross-sectional study conducted with 411 students aged between 10 and 14 years, of both genders, enrolled in public and private schools, selected by a simple and stratified random sampling process according to the gender, age, and proportion in each institution. The sample was evaluated by the anthropometric method and underwent a body composition analysis using vertical bipolar, horizontal tetrapolar, and vertical segmental tetrapolar assessment. The ROC curve was constructed based on calculations of sensitivity and specificity for each point of the different possible measurements of body fat. The statistical analysis used Student's t-test, Pearson's correlation coefficient, and McNemar's chi-squared test. Subsequently, the variables were interpreted using SPSS software, version 17.0. Of the total sample, 53.7% were girls and 46.3%, boys. Of the total, 20% and 12.5% had overweight and obesity, respectively. The body segment measurement charts showed high values of sensitivity and specificity and high areas under the ROC curve, ranging from 0.83 to 0.95 for girls and 0.92 to 0.98 for boys, suggesting a slightly higher performance for the male gender. Body fat percentage was the most efficient criterion to detect overweight, while the trunk segmental fat was the least accurate indicator. The apparatus demonstrated good performance to predict excess weight. Copyright © 2015 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
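The sensitivity, specificity, and area-under-the-ROC-curve machinery used in the validation above can be sketched in a few lines; the scores and labels in the test are toy values, not study data:

```python
def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity of 'score >= threshold' as a positive call."""
    tp = sum(1 for s, y in zip(scores, labels) if y and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if not y and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if not y and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means the score perfectly separates the classes; 0.5 is chance level, which frames the study's reported 0.83 to 0.98 range.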
An SPM12 extension for multiple sclerosis lesion segmentation
NASA Astrophysics Data System (ADS)
Roura, Eloy; Oliver, Arnau; Cabezas, Mariano; Valverde, Sergi; Pareto, Deborah; Vilanova, Joan C.; Ramió-Torrentà, Lluís.; Rovira, Àlex; Lladó, Xavier
2016-03-01
Purpose: Magnetic resonance imaging is nowadays the hallmark to diagnose multiple sclerosis (MS), characterized by white matter lesions. Several approaches have been presented recently to tackle the lesion segmentation problem, but none of them has been accepted as a standard tool in daily clinical practice. In this work we present a tool that automatically segments white matter lesions, outperforming current state-of-the-art approaches. Methods: This work is an extension of Roura et al. [1], where external and platform-dependent pre-processing libraries (brain extraction, noise reduction and intensity normalization) were required to achieve optimal performance. Here we have updated and included all these required pre-processing steps in a single framework (the SPM software). Therefore, there is no need for external tools to achieve the desired segmentation results. Besides, we have changed the working space from T1w to FLAIR, reducing interpolation errors produced in the registration process from FLAIR to T1w space. Finally, a post-processing constraint based on shape and location has been added to reduce false positive detections. Results: The evaluation of the tool has been done on 24 MS patients. Qualitative and quantitative results are shown with both approaches in terms of lesion detection and segmentation. Conclusion: We have simplified both installation and implementation of the approach, providing a multiplatform tool integrated into the SPM software, which relies only on T1w and FLAIR images. This new version reduces the computation time of the previous approach while maintaining its performance.
Khanal, Laxman; Shah, Sandip; Koirala, Sarun
2017-03-01
The length of long bones is an important contributor to estimating one of the four elements of forensic anthropology, i.e., the stature of the individual. Since physical characteristics differ among population groups, population-specific studies are needed for estimating the total length of the femur from measurements of its segments. Since the femur is not always recovered intact in forensic cases, the aim of this study was to derive regression equations from measurements of proximal and distal fragments in a Nepalese population. A cross-sectional study was done among 60 dry femora (30 from each side), without sex determination, in an anthropometry laboratory. Along with maximum femoral length, four proximal and four distal segmental measurements were taken following the standard method with the help of an osteometric board, measuring tape and digital Vernier caliper. Bones with gross defects were excluded from the study. Measured values were recorded separately for the right and left sides. The Statistical Package for the Social Sciences (SPSS version 11.5) was used for statistical analysis. The values of the segmental measurements differed between the right and left sides, but the differences were not statistically significant except for the depth of the medial condyle (p=0.02). All the measurements were positively correlated and found to have a linear relationship with femoral length. With the help of the regression equations, femoral length can be calculated from the segmental measurements, and femoral length can then be used to calculate the stature of the individual. The data collected may contribute to the analysis of forensic bone remains in the study population.
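Regression equations of the kind derived in the study can be sketched with ordinary least squares on a segment measurement versus femoral length. The calibration numbers below are invented purely for illustration and are not the study's data:

```python
def fit_linear(x, y):
    """Ordinary least squares: returns (slope, intercept) for y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical calibration sample: a proximal segment measurement (mm)
# paired with the maximum femoral length (mm) of intact bones.
seg = [28.0, 30.0, 32.0, 34.0]
length = [400.0, 420.0, 440.0, 460.0]
a, b = fit_linear(seg, length)
estimate = a * 31.0 + b   # predicted femoral length for a new fragment
```

Once femoral length is estimated this way, it can in turn be plugged into a stature equation, which is the two-step chain the abstract describes.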
Automated measurement of uptake in cerebellum, liver, and aortic arch in full-body FDG PET/CT scans.
Bauer, Christian; Sun, Shanhui; Sun, Wenqing; Otis, Justin; Wallace, Audrey; Smith, Brian J; Sunderland, John J; Graham, Michael M; Sonka, Milan; Buatti, John M; Beichel, Reinhard R
2012-06-01
The purpose of this work was to develop and validate fully automated methods for uptake measurement of cerebellum, liver, and aortic arch in full-body PET/CT scans. Such measurements are of interest in the context of uptake normalization for quantitative assessment of metabolic activity and/or automated image quality control. Cerebellum, liver, and aortic arch regions were segmented with different automated approaches. Cerebella were segmented in PET volumes by means of a robust active shape model (ASM) based method. For liver segmentation, a largest possible hyperellipsoid was fitted to the liver in PET scans. The aortic arch was first segmented in CT images of a PET/CT scan by a tubular structure analysis approach, and the segmented result was then mapped to the corresponding PET scan. For each of the segmented structures, the average standardized uptake value (SUV) was calculated. To generate an independent reference standard for method validation, expert image analysts were asked to segment several cross sections of each of the three structures in 134 F-18 fluorodeoxyglucose (FDG) PET/CT scans. For each case, the true average SUV was estimated by utilizing statistical models and served as the independent reference standard. For automated aorta and liver SUV measurements, no statistically significant scale or shift differences were observed between automated results and the independent standard. In the case of the cerebellum, the scale and shift were not significantly different, if measured in the same cross sections that were utilized for generating the reference. In contrast, automated results were scaled 5% lower on average although not shifted, if FDG uptake was calculated from the whole segmented cerebellum volume. The estimated reduction in total SUV measurement error ranged between 54.7% and 99.2%, and the reduction was found to be statistically significant for cerebellum and aortic arch. 
With the proposed methods, the authors have demonstrated that automated SUV uptake measurements in cerebellum, liver, and aortic arch agree with expert-defined independent standards. The proposed methods were found to be accurate and showed less intra- and interobserver variability, compared to manual analysis. The approach provides an alternative to manual uptake quantification, which is time-consuming. Such an approach will be important for application of quantitative PET imaging to large scale clinical trials. © 2012 American Association of Physicists in Medicine.
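The average standardized uptake value computed for each segmented structure above can be sketched with the usual body-weight normalization (decay correction omitted; the numbers in the test are illustrative):

```python
def suv(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight-normalized standardized uptake value.

    SUV = tissue concentration / (injected dose / body weight),
    assuming 1 g of tissue occupies about 1 mL.
    """
    dose_kbq = injected_dose_mbq * 1000.0
    weight_g = body_weight_kg * 1000.0
    return activity_kbq_per_ml / (dose_kbq / weight_g)

def mean_suv(voxel_activities, injected_dose_mbq, body_weight_kg):
    """Average SUV over the voxels of a segmented region."""
    return sum(suv(a, injected_dose_mbq, body_weight_kg)
               for a in voxel_activities) / len(voxel_activities)
```

An SUV of 1.0 corresponds to the tracer being uniformly distributed through the body, which is why organ averages like these are useful normalization references.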
NASA Astrophysics Data System (ADS)
Wahi-Anwar, M. Wasil; Emaminejad, Nastaran; Hoffman, John; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael F.
2018-02-01
Quantitative imaging in lung cancer CT seeks to characterize nodules through quantitative features, usually from a region of interest delineating the nodule. The segmentation, however, can vary depending on segmentation approach and image quality, which can affect the extracted feature values. In this study, we utilize a fully-automated nodule segmentation method - to avoid reader-influenced inconsistencies - to explore the effects of varied dose levels and reconstruction parameters on segmentation. Raw projection CT images from a low-dose screening patient cohort (N=59) were reconstructed at multiple dose levels (100%, 50%, 25%, 10%), two slice thicknesses (1.0mm, 0.6mm), and a medium kernel. Fully-automated nodule detection and segmentation was then applied, from which 12 nodules were selected. The Dice similarity coefficient (DSC) was used to assess the similarity of the segmentation ROIs of the same nodule across different reconstruction and dose conditions. Nodules at 1.0mm slice thickness and dose levels of 25% and 50% resulted in DSC values greater than 0.85 when compared to 100% dose, with lower dose leading to a lower average and wider spread of DSC values. At 0.6mm, the increased bias and wider spread of DSC values from lowering dose were more pronounced. The effects of dose reduction on DSC for CAD-segmented nodules were similar in magnitude to reducing the slice thickness from 1.0mm to 0.6mm. In conclusion, variation of dose and slice thickness can result in very different segmentations because of noise and image quality. However, there exists some stability in segmentation overlap: even at 1.0mm slice thickness, an image at 25% of the low-dose scan's dose still results in segmentations similar to those seen in a full-dose scan.
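The Dice similarity coefficient used above to compare segmentations of the same nodule can be sketched over voxel-index sets:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient 2|A & B| / (|A| + |B|) between two
    segmentation masks given as sets of voxel indices."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0   # two empty masks agree trivially
    return 2 * len(a & b) / (len(a) + len(b))
```

A DSC above 0.85, the threshold quoted in the abstract, means the two masks overlap in the large majority of their combined volume.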
Kim, Eun Young; Magnotta, Vincent A; Liu, Dawei; Johnson, Hans J
2014-09-01
Machine learning (ML)-based segmentation methods are a common technique in the medical image processing field. Although numerous research groups have investigated ML-based segmentation frameworks, aspects of performance variability remain unanswered for the choice of two key components: the ML algorithm and intensity normalization. This investigation reveals that the choice of these elements plays a major part in determining segmentation accuracy and generalizability. The approach we have used in this study aims to evaluate the relative benefits of the two elements within a subcortical MRI segmentation framework. Experiments were conducted to contrast eight machine-learning algorithm configurations and 11 normalization strategies for our brain MR segmentation framework. For the intensity normalization, a Stable Atlas-based Mapped Prior (STAMP) was utilized to take better account of contrast along boundaries of structures. Comparing the eight machine learning algorithms on down-sampled segmentation MR data showed that a significant improvement was obtained using ensemble-based ML algorithms (i.e., random forest) or ANN algorithms. Further investigation between these two algorithms also revealed that the random forest results provided exceptionally good agreement with manual delineations by experts. Additional experiments showed that STAMP-based intensity normalization also improved the robustness of segmentation for multicenter data sets. The constructed framework obtained good multicenter reliability and was successfully applied on a large multicenter MR data set (n>3000). Less than 10% of automated segmentations were recommended for minimal expert intervention. These results demonstrate the feasibility of using ML-based segmentation tools for processing large amounts of multicenter MR images.
We demonstrated dramatically different result profiles in segmentation accuracy according to the choice of ML algorithm and intensity normalization chosen. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Hamraz, Hamid; Contreras, Marco A.; Zhang, Jun
2017-08-01
An airborne LiDAR point cloud representing a forest contains 3D data from which the vertical stand structure, even of understory layers, can be derived. This paper presents a tree segmentation approach for multi-story stands that stratifies the point cloud into canopy layers and segments individual tree crowns within each layer using a digital surface model based tree segmentation method. The novelty of the approach is the stratification procedure that separates the point cloud into an overstory and multiple understory tree canopy layers by analyzing vertical distributions of LiDAR points within overlapping locales. The procedure does not make a priori assumptions about the shape and size of the tree crowns and can, independent of the tree segmentation method, be utilized to vertically stratify tree crowns of forest canopies. We applied the proposed approach to the University of Kentucky Robinson Forest, a natural deciduous forest with complex and highly variable terrain and vegetation structure. The segmentation results showed that using the stratification procedure strongly improved detection of understory trees (from 46% to 68%) at the cost of introducing a fair number of over-segmented understory trees (increased from 1% to 16%), while barely affecting the overall segmentation quality of overstory trees. Results of vertical stratification of the canopy showed that the point density of understory canopy layers was suboptimal for performing a reasonable tree segmentation, suggesting that acquiring denser LiDAR point clouds would allow further improvements in segmenting understory trees. As shown by inspecting correlations of the results with forest structure, the segmentation approach is applicable to a variety of forest types.
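A heavily simplified stand-in for the stratification idea, for a single locale, is to split the return heights at the widest vertical gap between canopy layers. The real procedure analyzes full vertical distributions within overlapping locales; this sketch and its min_gap parameter are illustrative assumptions:

```python
def stratify_heights(heights, min_gap=2.0):
    """Split LiDAR return heights (one locale) into understory/overstory
    layers at the widest vertical gap; if no gap exceeds min_gap (meters),
    treat the locale as a single layer."""
    hs = sorted(heights)
    gaps = [(hs[i + 1] - hs[i], hs[i]) for i in range(len(hs) - 1)]
    widest, below = max(gaps)
    if widest < min_gap:           # no separable layers in this locale
        return hs, []
    cut = below + widest / 2       # cut height mid-gap
    return [h for h in hs if h <= cut], [h for h in hs if h > cut]
```

Crown segmentation would then run separately on each returned layer, which is what lets understory trees emerge from under the dominant canopy surface.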
An experimental investigation of fault tolerant software structures in an avionics application
NASA Technical Reports Server (NTRS)
Caglayan, Alper K.; Eckhardt, Dave E., Jr.
1989-01-01
The objective of this experimental investigation is to compare the functional performance and software reliability of competing fault tolerant software structures utilizing software diversity. In this experiment, three versions of the redundancy management software for a skewed sensor array were developed using three diverse failure detection and isolation algorithms and incorporated into various N-version, recovery block, and hybrid software structures. The empirical results show that, for maximum functional performance improvement in the selected application domain, the results of diverse algorithms should be voted before being processed by multiple versions without enforced diversity. Results also suggest that when the reliability gain with an N-version structure is modest, recovery block structures are more feasible, since higher reliability can be obtained using an acceptance check of modest reliability.
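Textbook sketches of the two structures compared above, a majority voter for N-version outputs and a recovery block with an acceptance check, look like the following. These are generic illustrations, not the experiment's redundancy-management code:

```python
from collections import Counter

def vote(outputs):
    """Majority voter for N-version outputs: returns (winning value,
    whether it achieved a strict majority)."""
    value, count = Counter(outputs).most_common(1)[0]
    return value, count > len(outputs) // 2

def recovery_block(variants, acceptance_test, *args):
    """Recovery block: run variants in order until one passes the
    acceptance check; raise if all fail."""
    for variant in variants:
        result = variant(*args)
        if acceptance_test(result):
            return result
    raise RuntimeError("all variants failed the acceptance test")
```

The trade-off the abstract describes falls out of these shapes: voting needs several versions to run every time, while a recovery block leans on the acceptance test's ability to catch a bad primary result.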
New automatic mode of visualizing the colon via Cine CT
NASA Astrophysics Data System (ADS)
Udupa, Jayaram K.; Odhner, Dewey; Eisenberg, Harvey C.
2001-05-01
Methods of visualizing the inner colonic wall using CT images have been actively pursued in recent years in an attempt to eventually replace conventional colonoscopic examination. In spite of impressive progress in this direction, there are still several problems that need satisfactory solutions. Among these, we address three in this paper: segmentation, coverage, and speed of rendering. Instead of thresholding, we utilize the fuzzy connectedness framework to segment the colonic wall. Instead of the endoscopic viewing mode and various mapping techniques, we utilize the central line through the colon to automatically generate viewing directions that are en face with respect to the colon wall, thereby avoiding blind spots in viewing. We utilize some modifications of the ultra-fast shell rendering framework to ensure fast rendering speed. The combined effect of these developments is that a colon study requires an initial 5 minutes of operator time plus an additional 5 minutes of computational time; subsequently, en face renditions are created in real time (15 frames/sec) on a 1 GHz Pentium PC under the Linux operating system.
Smeared spectrum jamming suppression based on generalized S transform and threshold segmentation
NASA Astrophysics Data System (ADS)
Li, Xin; Wang, Chunyang; Tan, Ming; Fu, Xiaolong
2018-04-01
Smeared Spectrum (SMSP) jamming is effective in countering linear frequency modulation (LFM) radar. Exploiting the difference between the time-frequency distributions of the jamming and the echo, a jamming suppression method based on the Generalized S transform (GST) and threshold segmentation is proposed. The sub-pulse period is first estimated from the autocorrelation function. Secondly, the time-frequency image and the related gray-scale image are obtained via the GST. Finally, the Tsallis cross entropy is utilized to compute the optimized segmentation threshold, and the jamming suppression filter is then constructed from that threshold. Simulation results show that the proposed method performs well in suppressing the false targets produced by SMSP.
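The threshold-selection step described above can be sketched in code. The following is a minimal numpy implementation of Tsallis-entropy-based histogram thresholding in the general form used in image segmentation; the entropic index `q` and the histogram construction are illustrative assumptions, not the exact parameters of this paper.

```python
import numpy as np

def tsallis_threshold(hist, q=0.8):
    """Pick a segmentation threshold by maximizing the total Tsallis entropy
    of the background/foreground split of a gray-level histogram.
    Sketch only: q is a tunable entropic index, not the paper's value."""
    p = hist.astype(float) / hist.sum()
    best_t, best_val = 0, -np.inf
    for t in range(1, len(p)):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa <= 0 or pb <= 0:
            continue  # skip splits that leave one class empty
        # Tsallis entropies of the two normalized class distributions
        sa = (1.0 - np.sum((p[:t] / pa) ** q)) / (q - 1.0)
        sb = (1.0 - np.sum((p[t:] / pb) ** q)) / (q - 1.0)
        # pseudo-additive combination rule for Tsallis entropy
        val = sa + sb + (1.0 - q) * sa * sb
        if val > best_val:
            best_val, best_t = val, t
    return best_t
```

On a cleanly bimodal histogram the maximizer falls in the gap between the two modes, which is what separates jamming energy from echo energy in the gray-scale time-frequency image.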
Video Comprehensibility and Attention in Very Young Children
Pempek, Tiffany A.; Kirkorian, Heather L.; Richards, John E.; Anderson, Daniel R.; Lund, Anne F.; Stevens, Michael
2010-01-01
Earlier research established that preschool children pay less attention to television that is sequentially or linguistically incomprehensible. This study determines the youngest age for which this effect can be found. One-hundred and three 6-, 12-, 18-, and 24-month-olds’ looking and heart rate were recorded while they watched Teletubbies, a television program designed for very young children. Comprehensibility was manipulated by either randomly ordering shots or reversing dialogue to become backward speech. Infants watched one normal segment and one distorted version of the same segment. Only 24-month-olds, and to some extent 18-month-olds, distinguished between normal and distorted video by looking for longer durations towards the normal stimuli. The results suggest that it may not be until the middle of the second year that children demonstrate the earliest beginnings of comprehension of video as it is currently produced. PMID:20822238
Being there is important, but getting there matters too: the role of path in the valuation process.
Goldberg, Julie H
2006-01-01
Traditional decision-analytic models presume that utilities are invariant to context. The influence of 2 types of context on patients' utility assessments was examined here: the path by which one reaches a health state, and personal experience with a health state. Three groups of patients were interviewed: men older than age 49 years with prostate cancer but no diabetes (CaP), diabetes but no prostate cancer (DM), and neither disease (ND). The utility of erectile dysfunction (ED) was assessed using a standard gamble (SG). Each subject completed 2 SGs: 1) a no-context version that gave no explanation for the cause of ED and 2) a contextualized version in which prostate cancer treatment, the failure to manage diabetes, or the natural course of aging was said to be the cause. Patients with disease assigned higher utilities to ED in a matching context than in discrepant contexts. Regression models found that the valuation process was also sensitive to the match between the disease path in the utility assessment and patients' personal experiences. These findings lend insight into why the acontextual utility assessments typically used in decision analyses have not been able to predict patient behavior as well as expected. The valuation process appears to change systematically when context is specified, suggesting that unspecified contexts rather than random error may lead to fluctuations in the values assigned to identical health states.
ChEMBL web services: streamlining access to drug discovery data and utilities.
Davies, Mark; Nowotka, Michał; Papadatos, George; Dedman, Nathan; Gaulton, Anna; Atkinson, Francis; Bellis, Louisa; Overington, John P
2015-07-01
ChEMBL is now a well-established resource in the fields of drug discovery and medicinal chemistry research. The ChEMBL database curates and stores standardized bioactivity, molecule, target and drug data extracted from multiple sources, including the primary medicinal chemistry literature. Programmatic access to ChEMBL data has been improved by a recent update to the ChEMBL web services (version 2.0.x, https://www.ebi.ac.uk/chembl/api/data/docs), which exposes significantly more data from the underlying database and introduces new functionality. To complement the data-focused services, a utility service (version 1.0.x, https://www.ebi.ac.uk/chembl/api/utils/docs), which provides RESTful access to commonly used cheminformatics methods, has also been concurrently developed. The ChEMBL web services can be used together or independently to build applications and data processing workflows relevant to drug discovery and chemical biology. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
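As a small illustration of the RESTful access pattern described above, the sketch below composes request URLs against the two service roots given in the abstract. The `molecule` resource name, the `.json` suffix, and the filter syntax are assumptions for illustration; the live `/docs` pages cited above are the authoritative reference.

```python
# Compose request URLs for the ChEMBL data and utility web services.
# Resource names and query parameters here are illustrative assumptions;
# see https://www.ebi.ac.uk/chembl/api/data/docs for the real list.
from urllib.parse import urlencode

DATA_ROOT = "https://www.ebi.ac.uk/chembl/api/data"
UTILS_ROOT = "https://www.ebi.ac.uk/chembl/api/utils"

def data_url(resource, chembl_id=None, fmt="json", **filters):
    """Build a URL for the data-focused service (version 2.0.x)."""
    path = f"{DATA_ROOT}/{resource}"
    if chembl_id:
        path += f"/{chembl_id}"
    url = f"{path}.{fmt}"
    if filters:
        url += "?" + urlencode(filters)
    return url

print(data_url("molecule", "CHEMBL25"))
```

A client would then fetch these URLs with any HTTP library; keeping URL construction separate makes it easy to combine the data and utility services in one workflow.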
Automated segmentation of murine lung tumors in x-ray micro-CT images
NASA Astrophysics Data System (ADS)
Swee, Joshua K. Y.; Sheridan, Clare; de Bruin, Elza; Downward, Julian; Lassailly, Francois; Pizarro, Luis
2014-03-01
Recent years have seen micro-CT emerge as a means of providing imaging analysis in pre-clinical studies, with in-vivo micro-CT having been shown to be particularly applicable to the examination of murine lung tumors. Despite this, existing studies have involved substantial human intervention during the image analysis process, with the use of fully-automated aids found to be almost non-existent. We present a new approach to automate the segmentation of murine lung tumors designed specifically for in-vivo micro-CT-based pre-clinical lung cancer studies that addresses the specific requirements of such studies, as well as the limitations human-centric segmentation approaches experience when applied to such micro-CT data. Our approach consists of three distinct stages, and begins by utilizing edge-enhancing and vessel-enhancing non-linear anisotropic diffusion filters to extract anatomy masks (lung/vessel structure) in a pre-processing stage. Initial candidate detection is then performed through ROI reduction utilizing the obtained masks and a two-step automated segmentation approach that aims to extract all disconnected objects within the ROI, consisting of Otsu thresholding, mathematical morphology and marker-driven watershed. False positive reduction is finally performed on initial candidates through random-forest-driven classification using the shape, intensity, and spatial features of candidates. We provide validation of our approach using data from an associated lung cancer study, showing favorable results both in terms of detection (sensitivity=86%, specificity=89%) and structural recovery (Dice Similarity=0.88) when compared against manual specialist annotation.
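The Otsu thresholding step named in the candidate-detection stage can be written in a few lines of numpy. This is a generic sketch of Otsu's method, not the authors' implementation; the bin count is an assumption.

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Otsu's method: choose the gray-level threshold that maximizes the
    between-class variance of the two resulting classes.
    Plain-numpy sketch of the thresholding step, not the full pipeline."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability up to each bin
    mu = np.cumsum(p * np.arange(nbins))   # first moment up to each bin
    mu_t = mu[-1]                          # global mean (in bin units)
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan             # guard against empty classes
    sigma_b = (mu_t * omega - mu) ** 2 / denom  # between-class variance
    k = int(np.nanargmax(sigma_b))
    return edges[k + 1]  # threshold at the upper edge of the argmax bin
```

Pixels above the returned threshold become candidate objects, which the mathematical morphology and marker-driven watershed steps then refine.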
Calculating Trajectories And Orbits
NASA Technical Reports Server (NTRS)
Alderson, Daniel J.; Brady, Franklyn H.; Breckheimer, Peter J.; Campbell, James K.; Christensen, Carl S.; Collier, James B.; Ekelund, John E.; Ellis, Jordan; Goltz, Gene L.; Hintz, Gerarld R.;
1989-01-01
Double-Precision Trajectory Analysis Program, DPTRAJ, and Orbit Determination Program, ODP, developed and improved over the years to provide highly reliable and accurate navigation capability for deep-space missions like Voyager. Each is a collection of programs working together to provide the desired computational results. DPTRAJ, ODP, and supporting utility programs are capable of handling massive amounts of data and performing the various numerical calculations required for solving navigation problems associated with planetary fly-by and lander missions. Used extensively in support of NASA's Voyager project. DPTRAJ-ODP available in two machine versions. UNIVAC version, NPO-15586, written in FORTRAN V, SFTRAN, and ASSEMBLER. VAX/VMS version, NPO-17201, written in FORTRAN V, SFTRAN, PL/1 and ASSEMBLER.
Cache and energy efficient algorithms for Nussinov's RNA Folding.
Zhao, Chunchun; Sahni, Sartaj
2017-12-06
An RNA folding/RNA secondary structure prediction algorithm determines the non-nested/pseudoknot-free structure by maximizing the number of complementary base pairs and minimizing the energy. Several implementations of Nussinov's classical RNA folding algorithm have been proposed. Our focus is to obtain run time and energy efficiency by reducing the number of cache misses. Three cache-efficient algorithms, ByRow, ByRowSegment and ByBox, for Nussinov's RNA folding are developed. Using a simple LRU cache model, we show that the Classical algorithm of Nussinov has the highest number of cache misses followed by the algorithms Transpose (Li et al.), ByRow, ByRowSegment, and ByBox (in this order). Extensive experiments conducted on four computational platforms-Xeon E5, AMD Athlon 64 X2, Intel I7 and PowerPC A2-using two programming languages-C and Java-show that our cache efficient algorithms are also efficient in terms of run time and energy. Our benchmarking shows that, depending on the computational platform and programming language, either ByRow or ByBox give best run time and energy performance. The C version of these algorithms reduce run time by as much as 97.2% and energy consumption by as much as 88.8% relative to Classical and by as much as 56.3% and 57.8% relative to Transpose. The Java versions reduce run time by as much as 98.3% relative to Classical and by as much as 75.2% relative to Transpose. Transpose achieves run time and energy efficiency at the expense of memory as it takes twice the memory required by Classical. The memory required by ByRow, ByRowSegment, and ByBox is the same as that of Classical. As a result, using the same amount of memory, the algorithms proposed by us can solve problems up to 40% larger than those solvable by Transpose.
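For reference, the Classical algorithm that all of the cache-efficient variants above compute can be stated as a short dynamic program: the maximum number of non-nested complementary base pairs in `s[i..j]` either leaves `s[i]` unpaired or pairs it with some `s[k]`. The sketch below is the textbook Nussinov recurrence in Python, not the authors' ByRow/ByBox C or Java code.

```python
def nussinov(seq):
    """Classical Nussinov DP: maximum number of non-crossing complementary
    base pairs in an RNA sequence (Watson-Crick plus G-U wobble pairs)."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):             # fill by increasing substring length
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]          # case 1: s[i] unpaired
            for k in range(i + 1, j + 1):
                if (seq[i], seq[k]) in pairs:   # case 2: pair s[i] with s[k]
                    left = dp[i + 1][k - 1] if k - 1 >= i + 1 else 0
                    right = dp[k + 1][j] if k + 1 <= j else 0
                    best = max(best, left + 1 + right)
            dp[i][j] = best
    return dp[0][n - 1]
```

The cache-efficient variants reorder the traversal of this same `dp` table (by row, by row segment, or by box) so that entries are revisited while still resident in cache; the recurrence itself is unchanged.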
Wooten, H. Omar; Green, Olga; Li, Harold H.; Liu, Shi; Li, Xiaoling; Rodriguez, Vivian; Mutic, Sasa; Kashani, Rojano
2016-01-01
The aims of this study were to develop a method for automatic and immediate verification of treatment delivery after each treatment fraction in order to detect and correct errors, and to develop a comprehensive daily report which includes delivery verification results, daily image‐guided radiation therapy (IGRT) review, and information for weekly physics reviews. After systematically analyzing the requirements for treatment delivery verification and understanding the available information from a commercial MRI‐guided radiotherapy treatment machine, we designed a procedure to use 1) treatment plan files, 2) delivery log files, and 3) beam output information to verify the accuracy and completeness of each daily treatment delivery. The procedure verifies the correctness of delivered treatment plan parameters including beams, beam segments and, for each segment, the beam‐on time and MLC leaf positions. For each beam, composite primary fluence maps are calculated from the MLC leaf positions and segment beam‐on time. Error statistics are calculated on the fluence difference maps between the plan and the delivery. A daily treatment delivery report is designed to include all required information for IGRT and weekly physics reviews including the plan and treatment fraction information, daily beam output information, and the treatment delivery verification results. A computer program was developed to implement the proposed procedure of the automatic delivery verification and daily report generation for an MRI guided radiation therapy system. The program was clinically commissioned. Sensitivity was measured with simulated errors. The final version has been integrated into the commercial version of the treatment delivery system. The method automatically verifies the EBRT treatment deliveries and generates the daily treatment reports. 
Already in clinical use for over one year, it has proven useful for facilitating delivery-error detection and for expediting the physician's daily IGRT review and the physicist's weekly chart review. PACS number(s): 87.55.km PMID:27167269
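The fluence-comparison step described above, computing error statistics on the difference maps between planned and delivered composite fluence, can be sketched as follows. The 3% tolerance, the low-fluence mask, and the particular statistics are illustrative assumptions, not the commissioned system's specification.

```python
import numpy as np

def fluence_error_stats(planned, delivered, tol=0.03):
    """Compare planned vs. delivered composite fluence maps (sketch).
    Returns simple error statistics on the difference map; the tolerance
    and masking choices here are illustrative, not the vendor's spec."""
    planned = np.asarray(planned, dtype=float)
    delivered = np.asarray(delivered, dtype=float)
    diff = delivered - planned
    # evaluate relative error only where the plan delivers meaningful fluence
    mask = planned > 0.1 * planned.max()
    rel = np.abs(diff[mask]) / planned[mask]
    return {
        "max_abs_diff": float(np.max(np.abs(diff))),
        "mean_abs_diff": float(np.mean(np.abs(diff))),
        "fraction_within_tol": float(np.mean(rel <= tol)),
    }
```

A daily report generator would run this per beam, using the MLC leaf positions and segment beam-on times from the delivery log to reconstruct the delivered map.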
ERIC Educational Resources Information Center
Dawkins, Tamara; Meyer, Allison T.; Van Bourgondien, Mary E.
2016-01-01
"The Childhood Autism Rating Scale, Second Edition" (CARS2; 2010) includes two rating scales: the CARS2-Standard Version (CARS2-ST) and the newly developed CARS2-High Functioning Version (CARS2-HF). To assess the diagnostic agreement between the CARS2 and DSM-IV-TR versus DSM-5 criteria for Autism Spectrum Disorder (ASD), clinicians at…
NASA Technical Reports Server (NTRS)
Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.
1988-01-01
The Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC) on the Use of the Geostationary Satellite Orbit and the Planning of Space Services Utilizing It. Through careful selection of the predetermined arc (PDA) for each administration, flexibility can be increased in terms of choice of system technical characteristics and specific orbit location while reducing the need for coordination among administrations. The NASARC software determines pairwise compatibility between all possible service areas at discrete arc locations. NASARC then exhaustively enumerates groups of administrations whose satellites can be closely located in orbit, and finds the arc segment over which each such compatible group exists. From the set of all possible compatible groupings, groups and their associated arc segments are selected using a heuristic procedure such that a PDA is identified for each administration. Various aspects of the NASARC concept and how the software accomplishes specific features of allotment planning are discussed.
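The exhaustive-enumeration step described above, finding groups of administrations whose satellites are pairwise compatible, can be illustrated with a toy sketch. Real NASARC also tracks the arc segment over which each group coexists and applies a heuristic selection afterwards; this fragment shows only the grouping idea, and the size cap is an assumption.

```python
from itertools import combinations

def compatible_groups(compat, max_size=4):
    """Enumerate all groups of administrations (by index) that are pairwise
    compatible, given a symmetric 0/1 compatibility matrix.
    Toy version of NASARC's exhaustive grouping step."""
    n = len(compat)
    groups = []
    for size in range(2, max_size + 1):
        for group in combinations(range(n), size):
            # a group is feasible only if every pair within it is compatible
            if all(compat[a][b] for a, b in combinations(group, 2)):
                groups.append(group)
    return groups
```

From the resulting set of feasible groups, a selection heuristic would then assign each administration to exactly one group and hence to one predetermined arc.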
Host polymer influence on dilute polystyrene segmental dynamics
NASA Astrophysics Data System (ADS)
Lutz, T. R.
2005-03-01
We have utilized deuterium NMR to investigate the segmental dynamics of dilute (2%) d3-polystyrene (PS) chains in miscible polymer blends with polybutadiene, poly(vinyl ethylene), polyisoprene, poly(vinyl methylether) and poly(methyl methacrylate). In the dilute limit, we find qualitative differences depending upon whether the host polymer has dynamics that are faster or slower than that of pure PS. In blends where PS is the fast (low Tg) component, segmental dynamics are slowed upon blending and can be fit by the Lodge-McLeish model. When PS is the slow (high Tg) component, PS segmental dynamics speed up upon blending, but cannot be fit by the Lodge-McLeish model unless a temperature dependent self-concentration is employed. These results are qualitatively consistent with a recent suggestion by Kant, Kumar and Colby (Macromolecules, 2003, 10087), based upon data at higher concentrations. Furthermore, as the slow component, we find the segmental dynamics of PS has a temperature dependence similar to that of its host. This suggests viewing the high Tg component dynamics in a miscible blend as similar to a polymer in a low molecular weight solvent.
Multiscale CNNs for Brain Tumor Segmentation and Diagnosis.
Zhao, Liya; Jia, Kebin
2016-01-01
Early brain tumor detection and diagnosis are critical in the clinic. Segmentation of the focused tumor area therefore needs to be accurate, efficient, and robust. In this paper, we propose an automatic brain tumor segmentation method based on Convolutional Neural Networks (CNNs). Traditional CNNs focus only on local features and ignore global region features, both of which are important for pixel classification and recognition. Moreover, brain tumors can appear anywhere in the brain and can be of any size and shape. We design a three-stream framework named multiscale CNNs, which automatically detects the optimum top three scales of the image sizes and combines information from the differently scaled regions around each pixel. Datasets provided by the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized by MICCAI 2013 are utilized for both training and testing. The designed multiscale CNNs framework also combines multimodal features from T1, T1-enhanced, T2, and FLAIR MRI images. By comparison with traditional CNNs and the best two methods in BRATS 2012 and 2013, our framework shows advances in brain tumor segmentation accuracy and robustness.
Axial segmentation of lungs CT scan images using canny method and morphological operation
NASA Astrophysics Data System (ADS)
Noviana, Rina; Febriani, Rasal, Isram; Lubis, Eva Utari Cintamurni
2017-08-01
Segmentation is a very important topic in digital image processing. It appears in various fields of image analysis, particularly medical imaging. Axial segmentation of lung CT scans is useful for diagnosing abnormalities and for surgery planning, since it allows every section within the lungs to be examined. The segmentation results are used to detect the presence of nodules. The methods used in this analysis are image cropping, image binarization, Canny edge detection and morphological operations. Image cropping is performed to separate the lung area, which is the region of interest (ROI). Binarization generates a binary image with two gray levels, black and white, distinguishing the ROI from the rest of the lung CT image. The Canny method is used for edge detection, and morphological operations are applied to smooth the lung edges. The segmentation method shows good results, obtaining a very smooth edge. Moreover, the image background can be removed so that the lungs remain the main focus.
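Two stages of the pipeline above, binarization and morphological smoothing, can be sketched with plain numpy. This is a generic illustration with a fixed 3x3 structuring element, not the authors' implementation; the threshold value is an assumption chosen per image.

```python
import numpy as np

def binarize(image, threshold):
    """Binarization stage: map the cropped grayscale ROI to black/white."""
    return (image > threshold).astype(np.uint8)

def dilate(binary):
    """3x3 binary dilation via array shifts: a pixel is set if any neighbor is."""
    padded = np.pad(binary, 1)
    out = np.zeros_like(binary)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy: 1 + dy + binary.shape[0],
                          1 + dx: 1 + dx + binary.shape[1]]
    return out

def erode(binary):
    """3x3 binary erosion: a pixel survives only if its whole neighborhood is set."""
    padded = np.pad(binary, 1, constant_values=1)
    out = np.ones_like(binary)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= padded[1 + dy: 1 + dy + binary.shape[0],
                          1 + dx: 1 + dx + binary.shape[1]]
    return out

def close_edges(binary):
    """Morphological closing (dilation then erosion) smooths the lung edge."""
    return erode(dilate(binary))
```

Closing bridges small gaps along the detected lung boundary, which is why it is applied after edge detection to produce the smooth contour reported above.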
Energy-efficient rings mechanism for greening multisegment fiber-wireless access networks
NASA Astrophysics Data System (ADS)
Gong, Xiaoxue; Guo, Lei; Hou, Weigang; Zhang, Lincong
2013-07-01
Through integrating the advantages of optical and wireless communications, Fiber-Wireless (FiWi) has become a promising solution for "last-mile" broadband access. In particular, greening FiWi has attracted extensive attention, because the access network is a main energy contributor in the whole infrastructure. However, prior solutions for greening FiWi shut down or sleep unused or minimally used optical network units within a single segment, where only one optical line terminal is deployed. We propose a green mechanism referred to as the energy-efficient ring (EER) for multisegment FiWi access networks. We utilize an integer linear programming model and a genetic algorithm to generate clusters, each minimizing the distance among its own fully connected segments. Leveraging the backtracking method for each cluster, we then connect segments through fiber links, constructing the shortest-distance fiber ring. Finally, we sleep low-load segments and forward the affected traffic to other active segments on the same fiber ring using our sleeping scheme. Experimental results show that our EER mechanism significantly reduces energy consumption at a slight additional cost of deploying fiber links.
Armson, J; Stuart, A
1998-06-01
An ABA time series design was used to examine the effect of extended, continuous exposure to frequency-altered auditory feedback (FAF) during an oral reading and monologue task on stuttering frequency and speech rate. Twelve adults who stutter participated. A statistically significant decrease in number of stuttering events, an increase in number of syllables produced, and a decrease in percent stuttering was observed during the experimental segment relative to baseline segments for the oral reading task. In the monologue task, there were no statistically significant differences for the number of stuttering events, number of syllables produced, or percent stuttering between the experimental and baseline segments. Varying individual patterns of response to FAF were evident during the experimental segment of the reading task: a large consistent reduction in stuttering, an initial reduction followed by fluctuations in amount of stuttering, and essentially no change in stuttering frequency. Ten of 12 participants showed no reduction in stuttering frequency during the experimental segment of the monologue task. These findings have ramifications both for the clinical utilization of FAF and for theoretical explanations of fluency-enhancement.
NASA Astrophysics Data System (ADS)
Dong, Huaipeng; Zhang, Qi; Shi, Jun
2017-12-01
Magnetic resonance (MR) images suffer from intensity inhomogeneity. Segmentation-based approaches can simultaneously achieve both intensity inhomogeneity compensation (IIC) and tissue segmentation for MR images with little noise, but they often fail for images polluted by severe noise. Here, we propose a noise-robust algorithm named noise-suppressed multiplicative intrinsic component optimization (NSMICO) for simultaneous IIC and tissue segmentation. Considering the spatial characteristics of an image, an adaptive nonlocal means filtering term is incorporated into the objective function of NSMICO to decrease image deterioration due to noise. Then, a fuzzy local factor term utilizing the spatial and gray-level relationships among local pixels is embedded into the objective function to reach a balance between noise suppression and detail preservation. Experimental results on synthetic, natural, and MR images with various levels of intensity inhomogeneity and noise, as well as on in vivo clinical MR images, have demonstrated the effectiveness of NSMICO and its superiority to three competing approaches. NSMICO could be potentially valuable for MR image IIC and tissue segmentation.
Coronagraphic Wavefront Control for the ATLAST-9.2m Telescope
NASA Technical Reports Server (NTRS)
Lyon, Richard G.; Oegerle, William R.; Feinberg, Lee D.; Bolcar, Matthew R.; Dean, Bruce H.; Mosier, Gary E.; Postman, Marc
2010-01-01
The Advanced Technology for Large Aperture Space Telescope (ATLAST) concept was assessed as one of the NASA Astrophysics Strategic Mission Concepts (ASMC) studies. Herein we discuss the 9.2-meter diameter segmented aperture version and its wavefront sensing and control (WFSC) with regard to coronagraphic detection and spectroscopic characterization of exoplanets. The WFSC would consist of at least two levels of sensing and control: (i) an outer, coarser level of sensing and control to phase and control the segments and secondary mirror in a manner similar to the James Webb Space Telescope but operating at higher temporal bandwidth, and (ii) an inner, coronagraphic-instrument-based, fine level of sensing and control for both amplitude and wavefront errors operating at higher temporal bandwidths. The outer loop would control rigid-body actuators on the primary and secondary mirrors while the inner loop would control one or more segmented deformable mirrors to suppress the starlight within the coronagraphic field of view. Herein we discuss the visible nulling coronagraph (VNC) and the requirements it levies on wavefront sensing and control, and show the results of closed-loop simulations to assess performance and evaluate the trade space of system-level stability versus control bandwidth.
3D marker-controlled watershed for kidney segmentation in clinical CT exams.
Wieclawek, Wojciech
2018-02-27
Image segmentation is an essential and nontrivial task in computer vision and medical image analysis. Computed tomography (CT) is one of the most accessible medical examination techniques to visualize the interior of a patient's body. Among different computer-aided diagnostic systems, the applications dedicated to kidney segmentation represent a relatively small group, and literature solutions are typically verified on relatively small databases. The goal of this research is to develop a novel algorithm for fully automated kidney segmentation. This approach is designed for large database analysis including both physiological and pathological cases. This study presents a 3D marker-controlled watershed transform developed and employed for fully automated CT kidney segmentation. The original and most complex step in the current proposition is the automatic generation of 3D marker images. The final kidney segmentation step is an analysis of the labelled image obtained from the marker-controlled watershed transform, consisting of morphological operations and shape analysis. The implementation is conducted in a MATLAB environment, Version 2017a, using among others the Image Processing Toolbox. 170 clinical CT abdominal studies have been subjected to the analysis. The dataset includes normal as well as various pathological cases (agenesis, renal cysts, tumors, renal cell carcinoma, kidney cirrhosis, partial or radical nephrectomy, hematoma and nephrolithiasis). Manual and semi-automated delineations have been used as a gold standard. Among 67 delineated medical cases, 62 cases are 'Very good', whereas only 5 are 'Good' according to Cohen's Kappa interpretation. The segmentation results show that mean values of Sensitivity, Specificity, Dice, Jaccard, Cohen's Kappa and Accuracy are 90.29, 99.96, 91.68, 85.04, 91.62 and 99.89% respectively.
All 170 medical cases (with and without outlines) have been classified by three independent medical experts as 'Very good' in 143-148 cases, as 'Good' in 15-21 cases and as 'Moderate' in 6-8 cases. An automatic kidney segmentation approach for CT studies to compete with commonly known solutions was developed. The algorithm gives promising results, that were confirmed during validation procedure done on a relatively large database, including 170 CTs with both physiological and pathological cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Li; Gao, Yaozong; Shi, Feng
Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of the CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject, and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy in comparison with other state-of-the-art segmentation methods.
Conclusions: The authors have proposed a new CBCT segmentation method using patch-based sparse representation and convex optimization, which achieves considerably accurate segmentation results on the 15-patient CBCT dataset.
Draft New Home Specification Version 1.1
The intent of this specification is to reduce indoor and outdoor water usage in new residential homes, thereby lowering consumer utility bills and encouraging water and wastewater infrastructure savings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Omari, E; Noid, G; Ehlers, C
Purpose: Substantial target motion during the delivery of radiation therapy (RT) for pancreatic cancer is well recognized as a major limiting factor on RT effectiveness. The aim of this work is to monitor intra-fractional motion of the pancreas using ultrasound during RT delivery. Methods: Transabdominal ultrasound B-mode images were collected from 5 volunteers using a research version of the Clarity Autoscan System (Elekta). The autoscan transducer with a center frequency of 5 MHz was utilized for the scans. Imaging parameters were adjusted to acquire images at the desired depth with good contrast and a wide sweep angle. Since well-defined boundaries of the pancreas can be difficult to find on ultrasound B-mode images, the portal vein was selected as a surrogate for motion estimation of the head of the pancreas. The selection was due to its anatomical location posterior to the neck of the pancreas and close proximity to the pancreas head. The portal vein was contoured on the ultrasound images acquired during simulation using the Clarity Research AFC Workstation software. Volunteers were set up in a manner similar to the simulation for their monitoring session, and the ultrasound transducer was mounted on an arm fixed to the couch. A video segment of the portal vein motion was captured. Results: The portal vein was visualized and segmented. Successful monitoring sessions of the portal vein were observed. In addition, our results showed that the ultrasound transducer itself reduces breathing-related motion. This is analogous to the use of a compression plate to suppress respiration motion during thorax or abdominal irradiation. Conclusion: We demonstrate the feasibility of tracking the pancreas through the localization of the portal vein using abdominal ultrasound. This will allow for real-time tracking of the intra-fractional motion to justify PTV margins and to account for unusual motions, thus improving normal tissue sparing.
This research was funded in part by Elekta Inc.
Music in the moment? Revisiting the effect of large scale structures.
Lalitte, P; Bigand, E
2006-12-01
The psychological relevance of large-scale musical structures has been a matter of debate in the music community. This issue was investigated with a method that allows assessing listeners' detection of musical incoherencies in normal and scrambled versions of popular and contemporary music pieces. Musical excerpts were segmented into 28 or 29 chunks. In the scrambled version, the temporal order of these chunks was altered with the constraint that the transitions between two chunks never created local acoustical and musical disruptions. Participants were required (1) to detect on-line incoherent linking of chunks, (2) to rate aesthetic quality of pieces, and (3) to evaluate their overall coherence. The findings indicate a moderate sensitivity to large-scale musical structures for popular and contemporary music in both musically trained and untrained listeners. These data are discussed in light of current models of music cognition.
ADS: A FORTRAN program for automated design synthesis: Version 1.10
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1985-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions as examples, and one-dimensional search options include polynomial interpolation and the Golden Section method as examples. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to override these, if desired.
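The Golden Section method named among ADS's one-dimensional search options can be sketched in a few lines: the bracketing interval shrinks by the golden ratio each iteration, reusing one interior function evaluation. This is a textbook rendition of the technique, not code from ADS itself.

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Golden Section one-dimensional search: shrink the bracket [a, b]
    around a minimum of f, keeping interior points at the golden ratio
    so only one new evaluation is needed per iteration."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc < fd:            # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                  # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2
```

In a layered design like ADS's, a routine of this shape sits at the innermost level, called by the optimizer to find the step length along a search direction.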
Paradigms, Exemplars and Social Change
ERIC Educational Resources Information Center
Lawson, Hal A.
2009-01-01
Researchers' social-cultural organization influences the scope, quality, quantity, coherence, dissemination, utilization and impact of research-based, theoretically sound knowledge. Five concepts--paradigm, exemplar, segment, network and gatekeeper--are salient to research on researchers' organization. Autobiographical reflections signal these…
Robotic NDE inspection of advanced solid rocket motor casings
NASA Technical Reports Server (NTRS)
Mcneelege, Glenn E.; Sarantos, Chris
1994-01-01
The Advanced Solid Rocket Motor program determined the need to inspect ASRM forgings and segments for potentially catastrophic defects. To minimize costs, an automated eddy current inspection system was designed and manufactured for inspection of ASRM forgings in the initial phases of production. This system utilizes custom manipulators and motion control algorithms and integrated six-channel eddy current data acquisition and analysis hardware and software. Total system integration is through a personal computer based workcell controller. Segment inspection demands the use of a gantry robot for the EMAT/ET inspection system. The EMAT/ET system utilized similar mechanical compliancy and software logic to accommodate complex part geometries. EMAT provides volumetric inspection capability, while eddy current is limited to surface and near-surface inspection. Each aspect of the systems is applicable to other industries, such as inspection of pressure vessels, weld inspection, and traditional ultrasonic inspection applications.
Distributed Wind Market Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forsyth, T.; Baring-Gould, I.
2007-11-01
Distributed wind energy systems provide clean, renewable power for on-site use and help relieve pressure on the power grid while providing jobs and contributing to energy security for homes, farms, schools, factories, private and public facilities, distribution utilities, and remote locations. America pioneered small wind technology in the 1920s, and it is the only renewable energy industry segment that the United States still dominates in technology, manufacturing, and world market share. The series of analyses covered by this report were conducted to assess some of the most likely ways that advanced wind turbines could be utilized apart from large, central-station power systems. Each chapter represents a final report on a specific market segment written by leading experts in this field. As such, this document does not speak with one voice but is rather a compendium of different perspectives, documented from a variety of people in the U.S. distributed wind field.
Erosive Burning Study Utilizing Ultrasonic Measurement Techniques
NASA Technical Reports Server (NTRS)
Furfaro, James A.
2003-01-01
A 6-segment subscale motor was developed to generate a range of internal environments from which multiple propellants could be characterized for erosive burning. The motor test bed was designed to provide a high Mach number, high mass flux environment. Propellant regression rates were monitored for each segment utilizing ultrasonic measurement techniques. These data were obtained for three propellants: RSRM, ETM-03, and Castor® IVA, which span two propellant types, PBAN (polybutadiene acrylonitrile) and HTPB (hydroxyl-terminated polybutadiene). The characterization of these propellants indicates a remarkably similar erosive burning response to the induced flow environment. Propellant burn rates for each type had a conventional response with respect to pressure up to a bulk flow velocity threshold. Each propellant, however, had a unique threshold at which it would experience an increase in observed propellant burn rate. Above the observed threshold, each propellant again demonstrated a similar enhanced burn rate response corresponding to the local flow environment.
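The behavior described above — a conventional pressure-dependent burn rate (Saint-Robert's law, r = a·pⁿ) up to a flow-velocity threshold, then an enhanced rate — can be illustrated with a toy model. All coefficients below are hypothetical placeholders, not fitted values from this test series, and the linear augmentation term is one simple choice among many erosive-burning models.

```python
def burn_rate(p, v, a=0.03, n=0.35, v_thresh=200.0, k=0.002):
    """Illustrative burn-rate model: Saint-Robert's law r = a * p**n,
    with a simple linear erosive augmentation once the bulk flow
    velocity v exceeds a propellant-specific threshold v_thresh.
    Coefficients are hypothetical, for illustration only."""
    r = a * p ** n                      # conventional pressure response
    if v > v_thresh:
        r *= 1.0 + k * (v - v_thresh)  # enhanced rate above threshold
    return r
```

Below the threshold the rate depends on pressure alone, matching the "conventional response" in the abstract; above it, the local flow environment drives the enhancement.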
Customer and service profitability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballaban, M.; Kelly, K.; Wisniewski, L.
1996-03-01
The rapid pace of competitive change in the generation sector has pushed electric utilities to rethink the concept of being obligated to serve all customers, and with this change, the notion of measuring customer profitability is also being redefined. Traditionally, uniform services were provided to all customers. Rates were based on each customer class's contribution to average costs, and consequently return was equally allocated across all customer segments. Profitability was defined strictly on an aggregate basis. The increasing demand for choice by electric customers will require electricity providers to redefine, if not who they serve, then certainly how they provide differentiated services tailored to specific customer segments. Utilities are beginning to analyze the value, or profitability, of offering these services. Aggregate data no longer provides an accurate assessment of how resources should be allocated most efficiently. As services are unbundled, so too must costs be disaggregated to effectively measure the profitability of various options.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werner, N.E.; Van Matre, S.W.
1985-05-01
This manual describes the CRI Subroutine Library and Utility Package. The CRI library provides Cray multitasking functionality on the four-processor shared memory VAX 11/780-4. Additional functionality has been added for more flexibility. A discussion of the library, utilities, error messages, and example programs is provided.
ERIC Educational Resources Information Center
Chiang, Yu-Tzu; Yeh, Yu-Chen; Lin, Sunny S. J.; Hwang, Fang-Ming
2011-01-01
This study examined structure and predictive utility of the 2 x 2 achievement goal model among Taiwan pre-university school students (ages 10 to 16) who learned Chinese language arts. The confirmatory factor analyses of Achievement Goal Questionnaire-Chinese version provided good fitting between the factorial and dimensional structures with the…
ERIC Educational Resources Information Center
Cauffman, Elizabeth; Kimonis, Eva R.; Dmitrieva, Julia; Monahan, Kathryn C.
2009-01-01
The current study compares 3 distinct approaches for measuring juvenile psychopathy and their utility for predicting short- and long-term recidivism among a sample of 1,170 serious male juvenile offenders. The assessment approaches compared a clinical interview method (the Psychopathy Checklist: Youth Version [PCL:YV]; Forth, Kosson, & Hare,…
Distributed File System Utilities to Manage Large Datasets, Version 0.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-05-21
FileUtils provides a suite of tools to manage large datasets typically created by large parallel MPI applications. They are written in C and use standard POSIX I/O calls. The current suite consists of tools to copy, compare, remove, and list. The tools provide dramatic speedup over existing Linux tools, which often run as a single process.
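The core idea behind the speedup — dividing a file list among many workers instead of one process — can be sketched briefly. The real FileUtils tools are MPI-based C programs; this Python thread-pool version only mirrors the work-division strategy (threads suffice here because file copying is I/O-bound), and the function name is hypothetical.

```python
import os
import shutil
from concurrent.futures import ThreadPoolExecutor

def parallel_copy(files, dest_dir, workers=8):
    """Copy many files concurrently into dest_dir: the file list is
    divided among worker threads, unlike single-process tools such as
    cp. Returns the list of destination paths. Sketch only."""
    os.makedirs(dest_dir, exist_ok=True)

    def copy_one(path):
        dst = os.path.join(dest_dir, os.path.basename(path))
        shutil.copy2(path, dst)  # copy data and metadata
        return dst

    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(copy_one, files))
```

A production tool would additionally walk directory trees, preserve ownership, and chunk very large individual files across workers, which is where most of the parallel speedup comes from on parallel file systems.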
Spatially adapted augmentation of age-specific atlas-based segmentation using patch-based priors
NASA Astrophysics Data System (ADS)
Liu, Mengyuan; Seshamani, Sharmishtaa; Harrylock, Lisa; Kitsch, Averi; Miller, Steven; Chau, Van; Poskitt, Kenneth; Rousseau, Francois; Studholme, Colin
2014-03-01
One of the most common approaches to MRI brain tissue segmentation is to employ an atlas prior to initialize an Expectation-Maximization (EM) image labeling scheme using a statistical model of MRI intensities. This prior is commonly derived from a set of manually segmented training data from the population of interest. However, in cases where subject anatomy varies significantly from the prior anatomical average model (for example in the case where extreme developmental abnormalities or brain injuries occur), the prior tissue map does not provide adequate information about the observed MRI intensities to ensure the EM algorithm converges to an anatomically accurate labeling of the MRI. In this paper, we present a novel approach for automatic segmentation of such cases. This approach augments the atlas-based EM segmentation by exploring methods to build a hybrid tissue segmentation scheme that seeks to learn where an atlas prior fails (due to inadequate representation of anatomical variation in the statistical atlas) and utilize an alternative prior derived from a patch-driven search of the atlas data. We describe a framework for incorporating this patch-based augmentation of EM (PBAEM) into a 4D age-specific atlas-based segmentation of developing brain anatomy. The proposed approach was evaluated on a set of MRI brain scans of premature neonates with ages ranging from 27.29 to 46.43 gestational weeks (GWs). Results indicated superior performance compared to the conventional atlas-based segmentation method, providing improved segmentation accuracy for gray matter, white matter, ventricles and sulcal CSF regions.
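The atlas-prior EM scheme that PBAEM augments can be reduced to a small sketch: each voxel carries prior class probabilities from the atlas, the E-step multiplies them by a Gaussian intensity likelihood, and the M-step re-estimates class means and variances. This is a minimal textbook version under assumed 1-D intensity inputs, not the authors' implementation.

```python
import numpy as np

def em_label(intensities, prior, n_iter=20):
    """Toy atlas-prior EM labeling: intensities is shape (n,), prior is
    shape (n, k) with per-voxel class probabilities from an atlas.
    Returns the maximum-posterior class label per voxel. Sketch only."""
    n, k = prior.shape
    mu = np.linspace(intensities.min(), intensities.max(), k)
    var = np.full(k, intensities.var() / k + 1e-6)
    for _ in range(n_iter):
        # E-step: posterior proportional to atlas prior * Gaussian likelihood
        lik = (np.exp(-(intensities[:, None] - mu) ** 2 / (2 * var))
               / np.sqrt(2 * np.pi * var))
        post = prior * lik
        post /= post.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate class means and variances
        w = post.sum(axis=0) + 1e-12
        mu = (post * intensities[:, None]).sum(axis=0) / w
        var = (post * (intensities[:, None] - mu) ** 2).sum(axis=0) / w + 1e-6
    return post.argmax(axis=1)
```

The failure mode the paper addresses is visible here: when `prior` assigns near-zero probability to the true class at some voxel (anatomy far from the atlas average), no amount of intensity evidence can recover it, motivating the patch-based alternative prior.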
Lüddemann, Tobias; Egger, Jan
2016-01-01
Among all types of cancer, gynecological malignancies belong to the fourth most frequent type of cancer among women. In addition to chemotherapy and external beam radiation, brachytherapy is the standard procedure for the treatment of these malignancies. In the progress of treatment planning, localization of the tumor as the target volume and adjacent organs of risk by segmentation is crucial to accomplish an optimal radiation distribution to the tumor while simultaneously preserving healthy tissue. Segmentation is performed manually and represents a time-consuming task in clinical daily routine. This study focuses on the segmentation of the rectum/sigmoid colon as an organ-at-risk in gynecological brachytherapy. The proposed segmentation method uses an interactive, graph-based segmentation scheme with a user-defined template. The scheme creates a directed two-dimensional graph, followed by the minimal cost closed set computation on the graph, resulting in an outlining of the rectum. The graph's outline is dynamically adapted to the last calculated cut. Evaluation was performed by comparing manual segmentations of the rectum/sigmoid colon to results achieved with the proposed method. The comparison of the algorithmic to manual results yielded a Dice similarity coefficient of 83.85±4.08%, in comparison to 83.97±8.08% for the comparison of two manual segmentations by the same physician. Utilizing the proposed methodology resulted in a median time of 128 s/dataset, compared to 300 s needed for pure manual segmentation. PMID: 27403448
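The Dice similarity coefficient used in the evaluation above measures the overlap of two segmentations as twice the intersection divided by the sum of the two sizes. A minimal version over voxel-index sets, expressed as a percentage to match the figures quoted:

```python
def dice(a, b):
    """Dice similarity coefficient, in percent, between two
    segmentations given as sets of voxel indices:
    DSC = 2 * |a ∩ b| / (|a| + |b|)."""
    inter = len(a & b)
    return 200.0 * inter / (len(a) + len(b))
```

Identical segmentations score 100%; disjoint ones score 0%, so the reported ~84% indicates the algorithmic outline recovers most of the manually delineated rectum volume.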
MITK-based segmentation of co-registered MRI for subject-related regional anesthesia simulation
NASA Astrophysics Data System (ADS)
Teich, Christian; Liao, Wei; Ullrich, Sebastian; Kuhlen, Torsten; Ntouba, Alexandre; Rossaint, Rolf; Ullisch, Marcus; Deserno, Thomas M.
2008-03-01
With a steadily increasing indication, regional anesthesia (RA) is still trained directly on the patient. To develop a virtual reality (VR)-based simulation, a patient model is needed containing several tissues, which have to be extracted from individual magnetic resonance imaging (MRI) volume datasets. Due to the given modality and the different characteristics of the single tissues, an adequate segmentation can only be achieved by using a combination of segmentation algorithms. In this paper, we present a framework for creating an individual model from MRI scans of the patient. Our work splits into two parts. First, an easy-to-use and extensible tool for handling the segmentation task on arbitrary datasets is provided. The key idea is to let the user create a segmentation for the given subject by running different processing steps in a purposive order and store them in a segmentation script for reuse on new datasets. For data handling and visualization, we utilize the Medical Imaging Interaction Toolkit (MITK), which is based on the Visualization Toolkit (VTK) and the Insight Segmentation and Registration Toolkit (ITK). The second part is to find suitable segmentation algorithms and corresponding parameters for differentiating the tissues required by the RA simulation. For this purpose, a fuzzy c-means clustering algorithm combined with mathematical morphology operators and a geometric active contour-based approach is chosen. The segmentation process itself aims at operating with minimal user interaction, and the gained model fits the requirements of the simulation. First results are shown for both male and female MRI of the pelvis.
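The fuzzy c-means step chosen above alternates two updates: centroids are recomputed from fuzzified memberships, and memberships are reassigned from inverse distances to the centroids. The sketch below runs on 1-D intensities with fuzzifier m = 2; it covers only this first stage of the pipeline (the morphology and active-contour steps are omitted) and is a generic rendition, not the paper's code.

```python
import numpy as np

def fuzzy_cmeans(x, k, m=2.0, n_iter=50):
    """Minimal fuzzy c-means on 1-D intensities x: returns (centroids,
    memberships). Memberships u are soft assignments summing to 1 per
    sample; m > 1 controls the fuzziness. Sketch only."""
    rng = np.random.default_rng(0)
    u = rng.random((len(x), k))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        c = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)  # centroids
        d = np.abs(x[:, None] - c) + 1e-9                   # distances
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)            # memberships
    return c, u
```

For tissue segmentation, the soft memberships are the useful output: ambiguous voxels (e.g., at tissue boundaries) keep partial membership in several classes and can be resolved by the subsequent morphology and active-contour steps.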