Sample records for high efficiency segmented

  1. Study on a New Combination Method and High Efficiency Outer Rotor Type Permanent Magnet Motors

    NASA Astrophysics Data System (ADS)

    Enomoto, Yuji; Kitamura, Masashi; Motegi, Yasuaki; Andoh, Takashi; Ochiai, Makoto; Abukawa, Toshimi

The segment stator core, high space factor coil, and high efficiency magnet are indispensable technologies in the development of compact, high-efficiency motors. However, adoption of the segment stator core and high space factor coil has not progressed in the field of outer rotor type motors because the inner components cannot be laser welded together. We have therefore examined a segment stator core combination technology aimed at a large increase in efficiency and at miniaturization. We have also developed a characteristic estimation method that provides the most suitable performance for segment stator core motors.

  2. Trajectory Segmentation Map-Matching Approach for Large-Scale, High-Resolution GPS Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lei; Holden, Jacob R.; Gonder, Jeffrey D.

With the development of smartphones and portable GPS devices, large-scale, high-resolution GPS data can be collected. Map matching is a critical step in studying vehicle driving activity and recognizing network traffic conditions from such data. A new trajectory segmentation map-matching algorithm is proposed to deal accurately and efficiently with large-scale, high-resolution GPS trajectory data. The new algorithm separates the GPS trajectory into segments, finds the shortest path for each segment, and ultimately generates a best-matched path for the entire trajectory. The similarity of a trajectory segment and its matched path is described by a similarity score system based on the longest common subsequence. Numerical experiments indicated that the proposed map-matching algorithm is very promising in both accuracy and computational efficiency. Large-scale data set applications verified that the proposed method is robust and capable of dealing with real-world, large-scale GPS data in a computationally efficient and accurate manner.
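The longest-common-subsequence similarity score described above can be sketched as follows. This is an illustrative assumption, not the authors' exact scoring system: sequences of road-link IDs are compared, and the LCS length is normalized by the shorter sequence.

```python
def lcs_length(a, b):
    # Classic O(len(a)*len(b)) dynamic program for the
    # longest common subsequence of two sequences.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def lcs_similarity(traj_links, path_links):
    # Score in [0, 1]: fraction of the shorter link sequence that
    # appears, in order, in both the trajectory's nearest-link
    # sequence and the candidate matched path (hypothetical
    # normalization chosen for this sketch).
    if not traj_links or not path_links:
        return 0.0
    return lcs_length(traj_links, path_links) / min(len(traj_links), len(path_links))
```

For example, a trajectory sequence `["A", "B", "C", "D"]` scores 1.0 against a path `["A", "B", "X", "C", "D"]`, since the whole trajectory appears in order within the path.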

  3. Trajectory Segmentation Map-Matching Approach for Large-Scale, High-Resolution GPS Data

    DOE PAGES

    Zhu, Lei; Holden, Jacob R.; Gonder, Jeffrey D.

    2017-01-01

With the development of smartphones and portable GPS devices, large-scale, high-resolution GPS data can be collected. Map matching is a critical step in studying vehicle driving activity and recognizing network traffic conditions from such data. A new trajectory segmentation map-matching algorithm is proposed to deal accurately and efficiently with large-scale, high-resolution GPS trajectory data. The new algorithm separates the GPS trajectory into segments, finds the shortest path for each segment, and ultimately generates a best-matched path for the entire trajectory. The similarity of a trajectory segment and its matched path is described by a similarity score system based on the longest common subsequence. Numerical experiments indicated that the proposed map-matching algorithm is very promising in both accuracy and computational efficiency. Large-scale data set applications verified that the proposed method is robust and capable of dealing with real-world, large-scale GPS data in a computationally efficient and accurate manner.

  4. Hierarchical layered and semantic-based image segmentation using ergodicity map

    NASA Astrophysics Data System (ADS)

    Yadegar, Jacob; Liu, Xiaoqing

    2010-04-01

Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, and edges) by utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space-filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogeneous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for generating contextual topological object/region relationships. Experiments have been conducted in the maritime image environment, where the segmented layered semantic objects include basic level objects (i.e., sky/land/water) and deeper level objects on the sky/land/water surfaces. Experimental results demonstrate that the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects/regions with contextual topological relationships.

  5. Live minimal path for interactive segmentation of medical images

    NASA Astrophysics Data System (ADS)

    Chartrand, Gabriel; Tang, An; Chav, Ramnada; Cresson, Thierry; Chantrel, Steeve; De Guise, Jacques A.

    2015-03-01

Medical image segmentation is nowadays required for medical device development and in a growing number of clinical and research applications. Since dedicated automatic segmentation methods are not always available, generic and efficient interactive tools can alleviate the burden of manual segmentation. In this paper we propose an interactive segmentation tool based on image warping and minimal path segmentation that is efficient for a wide variety of segmentation tasks. While the user roughly delineates the desired organ's boundary, a narrow band along the cursor's path is straightened, providing an ideal subspace for feature-aligned filtering and a minimal path algorithm. Once the segmentation is performed on the narrow band, the path is warped back onto the original image, precisely delineating the desired structure. The tool was found to have a highly intuitive dynamic behavior. It is especially robust against misleading edges and requires only coarse interaction from the user to achieve good precision. The proposed segmentation method was tested on 10 difficult liver segmentations on CT and MRI images, and the resulting 2D overlap Dice coefficient was 99% on average.
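The 2D overlap Dice coefficient used to report the 99% result is a standard overlap measure. A minimal sketch, where each mask is represented as a set of foreground pixel coordinates (the set-based representation is an assumption of this illustration):

```python
def dice_coefficient(mask_a, mask_b):
    # Dice = 2*|A ∩ B| / (|A| + |B|) on foreground pixel sets;
    # 1.0 means perfect overlap, 0.0 means no overlap.
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # convention: two empty masks agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))
```

For example, a 4-pixel reference mask against a 2-pixel result that lies entirely inside it gives 2*2 / (4+2) ≈ 0.67.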

  6. Synthesis and thermoelectric properties of CoP(sub 3)

    NASA Technical Reports Server (NTRS)

    Shields, V. B.; Caillet, T.

    2002-01-01

In an effort to expand the range of operation of the highly efficient, segmented thermoelectric unicouples currently being developed at JPL, skutterudite phosphides are being investigated as potential high temperature segments to supplement the antimonide segments that limit the use of these unicouples to hot-side temperatures of about 873-973 K.

  7. New directions for hospital strategic management: the market for efficient care.

    PubMed

    Chilingerian, J A

    1992-01-01

An analysis of current trends in the health care industry points to buyers seeking high quality, yet efficient, care as an emerging market segment. To target this market segment, hospitals must be prepared to market their efficient physicians. In the coming years, hospitals that can identify and market their best practicing providers will achieve a competitive advantage.

  8. Efficient globally optimal segmentation of cells in fluorescence microscopy images using level sets and convex energy functionals.

    PubMed

    Bergeest, Jan-Philip; Rohr, Karl

    2012-10-01

    In high-throughput applications, accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression and the understanding of cell function. We propose an approach for segmenting cell nuclei which is based on active contours using level sets and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We consider three different well-known energy functionals for active contour-based segmentation and introduce convex formulations of these functionals. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images from different experiments comprising different cell types. We have also performed a quantitative comparison with previous segmentation approaches. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Bar piezoelectric ceramic transformers.

    PubMed

    Erhart, Jiří; Pulpan, Půlpán; Rusin, Luboš

    2013-07-01

Bar-shaped piezoelectric ceramic transformers (PTs) working in the longitudinal vibration mode (k31 mode) were studied. Two types of transformer were designed: one with the electrode divided into two segments of different length, and one with the electrodes divided into three symmetrical segments. Parameters of the studied transformers, such as efficiency, transformation ratio, and input and output impedances, were measured. An analytical model was developed for PT parameter calculation for both two- and three-segment PTs. Neither type of bar PT exhibited very high efficiency (a maximum of 72% for the three-segment PT design) at a relatively high transformation ratio (4 for the two-segment PT and 2 for the three-segment PT at the fundamental resonance mode). The optimum resistive loads were 20 and 10 kΩ for the two- and three-segment PT designs at the fundamental resonance, respectively, and about one order of magnitude smaller for the higher overtone (i.e., 2 kΩ and 500 Ω, respectively). The no-load transformation ratio was less than 27 (the maximum, for the two-segment electrode PT design). The optimum input electrode aspect ratios (0.48 for the three-segment PT and 0.63 for the two-segment PT) were calculated numerically under no-load conditions.

  10. Energy efficient engine pin fin and ceramic composite segmented liner combustor sector rig test report

    NASA Technical Reports Server (NTRS)

    Dubiel, D. J.; Lohmann, R. P.; Tanrikut, S.; Morris, P. M.

    1986-01-01

    Under the NASA-sponsored Energy Efficient Engine program, Pratt and Whitney has successfully completed a comprehensive test program using a 90-degree sector combustor rig that featured an advanced two-stage combustor with a succession of advanced segmented liners. Building on the successful characteristics of the first generation counter-parallel Finwall cooled segmented liner, design features of an improved performance metallic segmented liner were substantiated through representative high pressure and temperature testing in a combustor atmosphere. This second generation liner was substantially lighter and lower in cost than the predecessor configuration. The final test in this series provided an evaluation of ceramic composite liner segments in a representative combustor environment. It was demonstrated that the unique properties of ceramic composites, low density, high fracture toughness, and thermal fatigue resistance can be advantageously exploited in high temperature components. Overall, this Combustor Section Rig Test program has provided a firm basis for the design of advanced combustor liners.

  11. Fast globally optimal segmentation of cells in fluorescence microscopy images.

    PubMed

    Bergeest, Jan-Philip; Rohr, Karl

    2011-01-01

    Accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression in high-throughput screening applications. We propose a new approach for segmenting cell nuclei which is based on active contours and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images of different cell types. We have also performed a quantitative comparison with previous segmentation approaches.

  12. Computational efficient segmentation of cell nuclei in 2D and 3D fluorescent micrographs

    NASA Astrophysics Data System (ADS)

    De Vylder, Jonas; Philips, Wilfried

    2011-02-01

This paper proposes a new segmentation technique developed for the segmentation of cell nuclei in both 2D and 3D fluorescent micrographs. The proposed method can deal with both blurred edges and touching nuclei. Using a dual scan-line algorithm, it is both memory and computationally efficient, making it interesting for the analysis of images coming from high-throughput systems and for the analysis of 3D microscopic images. Experiments show good results, i.e., a recall of over 0.98.

  13. Space Network Ground Segment Sustainment (SGSS) Project: Developing a COTS-Intensive Ground System

    NASA Technical Reports Server (NTRS)

    Saylor, Richard; Esker, Linda; Herman, Frank; Jacobsohn, Jeremy; Saylor, Rick; Hoffman, Constance

    2013-01-01

The purpose of the Space Network Ground Segment Sustainment (SGSS) project is to implement a new, modern ground segment that will enable the NASA Space Network (SN) to deliver high quality services to the SN community in the future. The key SGSS goals are to: (1) re-engineer the SN ground segment and (2) enable cost efficiencies in the operability and maintainability of the broader SN.

  14. Nickel-Graphite Composite Compliant Interface and/or Hot Shoe Material

    NASA Technical Reports Server (NTRS)

    Firdosy, Samad A.; Chun-Yip Li, Billy; Ravi, Vilupanur A.; Fleurial, Jean-Pierre; Caillat, Thierry; Anjunyan, Harut

    2013-01-01

Next-generation high-temperature thermoelectric-power-generating devices will employ segmented architectures and will have to reliably withstand thermally induced mechanical stresses produced during component fabrication, device assembly, and operation. Thermoelectric materials typically have poor mechanical strength, exhibit brittle behavior, and possess a wide range of coefficient of thermal expansion (CTE) values. As a result, the direct bonding of these materials to each other at elevated temperatures to produce segmented leg components is difficult, and often results in localized microcracking at interfaces and mechanical failure due to the stresses that arise from the CTE mismatch between the various materials. Even in the absence of full mechanical failure, degraded interfaces can lead to increased electrical and thermal resistances, which adversely impact conversion efficiency and power output. The proposed solution is the insertion of a mechanically compliant layer, with high electrical and thermal conductivity, between the low- and high-temperature segments to relieve thermomechanical stresses during device fabrication and operation. This composite material can be used as a stress-relieving layer between the thermoelectric segments and/or between a thermoelectric segment and a hot- or cold-side interconnect material. The material can also be used as a compliant hot shoe. Nickel-coated graphite powders were hot-pressed to form a nickel-graphite composite material. A freestanding thermoelectric segmented leg was fabricated by brazing the compliant pad layer between the high-temperature p-Zintl and low-temperature p-SKD TE segments using Cu-Ag braze foils. The segmented leg stack was heated in vacuum under a compressive load to achieve bonding. The novelty of the innovation is the use of a composite material that reduces the thermomechanical stresses encountered in the construction of high-efficiency, high-temperature thermoelectric devices.
The compliant pad enables the bonding of dissimilar thermoelectric materials while maintaining the desired electrical and thermal properties essential for efficient device operation. The modulus, CTE, electrical, and thermal conductances of the composite can be controlled by varying the ratio of nickel to graphite.

  15. Managerial segmentation of service offerings in work commuting.

    DOT National Transportation Integrated Search

    2015-03-01

Methodology to efficiently segment markets for public transportation offerings has been introduced and exemplified in an application to an urban travel corridor in which high tech companies predominate. The principal objective has been to introduce...

  16. Segmented amplifier configurations for laser amplifier

    DOEpatents

    Hagen, Wilhelm F.

    1979-01-01

    An amplifier system for high power lasers, the system comprising a compact array of segments which (1) preserves high, large signal gain with improved pumping efficiency and (2) allows the total amplifier length to be shortened by as much as one order of magnitude. The system uses a three dimensional array of segments, with the plane of each segment being oriented at substantially the amplifier medium Brewster angle relative to the incident laser beam and with one or more linear arrays of flashlamps positioned between adjacent rows of amplifier segments, with the plane of the linear array of flashlamps being substantially parallel to the beam propagation direction.

  17. Efficient terrestrial laser scan segmentation exploiting data structure

    NASA Astrophysics Data System (ADS)

    Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa

    2016-09-01

New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increasing processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation that uses computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be quickly created using an inherent neighborhood structure that is established during the scanning process, which samples at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied to several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. These segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. The approach does not depend on pre-defined mathematical models or, consequently, on setting parameters for them. Unlike common geometrical point cloud segmentation methods, the proposed method employs the colorimetric and intensity data as an additional source of information. The proposed algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) quality of segmentation and thereby the feasibility of the proposed algorithm. The proposed method is also more efficient than Random Sample Consensus (RANSAC), a common approach for point cloud segmentation.
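The fixed-angular-increment structure that makes the panoramas cheap to build can be sketched as a simple spherical-to-grid binning. The 0.5° bin resolution and the row/column conventions below are assumptions of this sketch, not the authors' parameters:

```python
import math

def point_to_panorama_pixel(x, y, z, az_res_deg=0.5, el_res_deg=0.5):
    # Convert a Cartesian scan point to (row, col) in a panorama
    # whose columns index azimuth and whose rows index elevation,
    # returning also the range value stored at that pixel.
    rng = math.sqrt(x * x + y * y + z * z)
    azimuth = math.degrees(math.atan2(y, x)) % 360.0   # 0..360 degrees
    elevation = math.degrees(math.asin(z / rng))       # -90..90 degrees
    col = int(azimuth / az_res_deg)
    row = int((90.0 - elevation) / el_res_deg)         # top row = zenith
    return row, col, rng
```

Stacking the range values (or intensity, normals, color) of all points into such a grid yields the per-layer panorama images on which a 2D segmentation algorithm can run, after which each pixel's segment label maps straight back to its source point.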

  18. A spatiotemporal-based scheme for efficient registration-based segmentation of thoracic 4-D MRI.

    PubMed

    Yang, Y; Van Reeth, E; Poh, C L; Tan, C H; Tham, I W K

    2014-05-01

Dynamic three-dimensional (3-D) (four-dimensional, 4-D) magnetic resonance (MR) imaging is gaining importance in the study of pulmonary motion for respiratory diseases and of pulmonary tumor motion for radiotherapy. To perform quantitative analysis using 4-D MR images, segmentation of anatomical structures such as the lung and pulmonary tumor is required. Manual segmentation of entire thoracic 4-D MRI data, which typically contains many 3-D volumes acquired over several breathing cycles, is extremely tedious and time consuming, and suffers from high user variability. This motivates the development of new automated segmentation schemes for 4-D MRI data. Registration-based segmentation, which uses automatic registration methods for segmentation, has been shown to segment structures accurately in 4-D data series. However, directly applying registration-based segmentation to 4-D MRI series lacks efficiency. Here we propose an automated 4-D registration-based segmentation scheme, based on spatiotemporal information, for the segmentation of thoracic 4-D MR lung images. The proposed scheme saved up to 95% of the computation while achieving segmentations comparably accurate to those from directly applying registration-based segmentation to the 4-D dataset. The scheme facilitates rapid 3-D/4-D visualization of lung and tumor motion and, potentially, the tracking of the tumor during radiation delivery.

  19. Fast approximation for joint optimization of segmentation, shape, and location priors, and its application in gallbladder segmentation.

    PubMed

    Saito, Atsushi; Nawano, Shigeru; Shimizu, Akinobu

    2017-05-01

    This paper addresses joint optimization for segmentation and shape priors, including translation, to overcome inter-subject variability in the location of an organ. Because a simple extension of the previous exact optimization method is too computationally complex, we propose a fast approximation for optimization. The effectiveness of the proposed approximation is validated in the context of gallbladder segmentation from a non-contrast computed tomography (CT) volume. After spatial standardization and estimation of the posterior probability of the target organ, simultaneous optimization of the segmentation, shape, and location priors is performed using a branch-and-bound method. Fast approximation is achieved by combining sampling in the eigenshape space to reduce the number of shape priors and an efficient computational technique for evaluating the lower bound. Performance was evaluated using threefold cross-validation of 27 CT volumes. Optimization in terms of translation of the shape prior significantly improved segmentation performance. The proposed method achieved a result of 0.623 on the Jaccard index in gallbladder segmentation, which is comparable to that of state-of-the-art methods. The computational efficiency of the algorithm is confirmed to be good enough to allow execution on a personal computer. Joint optimization of the segmentation, shape, and location priors was proposed, and it proved to be effective in gallbladder segmentation with high computational efficiency.
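The Jaccard index used to report the 0.623 result is a standard set-overlap measure. A minimal sketch, where each segmentation is represented as a set of voxel coordinates (the set representation is an assumption of this illustration):

```python
def jaccard_index(voxels_a, voxels_b):
    # Jaccard = |A ∩ B| / |A ∪ B| on sets of segmented voxels;
    # stricter than Dice for the same pair of segmentations.
    a, b = set(voxels_a), set(voxels_b)
    if not a and not b:
        return 1.0  # convention: two empty segmentations agree
    return len(a & b) / len(a | b)
```

For example, segmentations {1, 2, 3} and {2, 3, 4} share 2 voxels out of 4 in their union, giving 0.5.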

  20. Open-source software platform for medical image segmentation applications

    NASA Astrophysics Data System (ADS)

    Namías, R.; D'Amato, J. P.; del Fresno, M.

    2017-11-01

Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volume of medical imaging scans requires more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework that allows the combination of existing explicit deformable models in an efficient and transparent way, handling different segmentation strategies simultaneously and interacting with a graphical user interface (GUI). We present the object-oriented design and the general architecture, which consists of two layers: the GUI at the top layer and the processing core filters at the bottom layer. We apply the framework to segmenting different real-case medical image scenarios on publicly available datasets, including bladder and prostate segmentation from 2D MRI and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast-prototyping open-source segmentation tool.

  1. Using simulated fluorescence cell micrographs for the evaluation of cell image segmentation algorithms.

    PubMed

    Wiesmann, Veit; Bergler, Matthias; Palmisano, Ralf; Prinzen, Martin; Franz, Daniela; Wittenberg, Thomas

    2017-03-18

Manual assessment and evaluation of fluorescent micrograph cell experiments is time-consuming and tedious. Automated segmentation pipelines can ensure efficient and reproducible evaluation and analysis, with constant high quality for all images of an experiment. Such cell segmentation approaches are usually validated and rated in comparison to manually annotated micrographs. Nevertheless, manual annotations are prone to errors and display inter- and intra-observer variability, which influences the validation results of automated cell segmentation pipelines. We present a new approach to simulating fluorescent cell micrographs that provides an objective ground truth for the validation of cell segmentation methods. The cell simulation was evaluated in two ways: (1) an expert observer study showed that the proposed approach generates realistic fluorescent cell micrograph simulations, and (2) an automated segmentation pipeline on the simulated fluorescent cell micrographs reproduced the segmentation performance of that pipeline on real fluorescent cell micrographs. The proposed simulation approach produces realistic fluorescent cell micrographs with corresponding ground truth. The simulated data are suited to evaluating image segmentation pipelines more efficiently and reproducibly than is possible on manually annotated real micrographs.

  2. Hyper-spectral image segmentation using spectral clustering with covariance descriptors

    NASA Astrophysics Data System (ADS)

    Kursun, Olcay; Karabiber, Fethullah; Koc, Cemalettin; Bal, Abdullah

    2009-02-01

Image segmentation is an important and difficult computer vision problem. Hyper-spectral images pose even more difficulty due to their high dimensionality. Spectral clustering (SC) is a recently popular clustering/segmentation algorithm. In general, SC lifts the data to a high dimensional space (also known as the kernel trick), then derives eigenvectors in this new space, and finally partitions the data into clusters using these new dimensions. We demonstrate that SC works efficiently when combined with covariance descriptors, which can be used to assess pixelwise similarities, rather than operating in the high-dimensional Euclidean space. We present the formulations and some preliminary results of the proposed hybrid image segmentation method for hyper-spectral images.

  3. Patterns of care for clinically distinct segments of high cost Medicare beneficiaries.

    PubMed

    Clough, Jeffrey D; Riley, Gerald F; Cohen, Melissa; Hanley, Sheila M; Sanghavi, Darshak; DeWalt, Darren A; Rajkumar, Rahul; Conway, Patrick H

    2016-09-01

    Efforts to improve the efficiency of care for the Medicare population commonly target high cost beneficiaries. We describe and evaluate a novel management approach, population segmentation, for identifying and managing high cost beneficiaries. A retrospective cross-sectional analysis of 6,919,439 Medicare fee-for-service beneficiaries in 2012. We defined and characterized eight distinct clinical population segments, and assessed heterogeneity in managing practitioners. The eight segments comprised 9.8% of the population and 47.6% of annual Medicare payments. The eight segments included 61% and 69% of the population in the top decile and top 5% of annual Medicare payments. The positive-predictive values within each segment for meeting thresholds of Medicare payments ranged from 72% to 100%, 30% to 83%, and 14% to 56% for the upper quartile, upper decile, and upper 5% of Medicare payments respectively. Sensitivity and positive-predictive values were substantially improved over predictive algorithms based on historical utilization patterns and comorbidities. The mean [95% confidence interval] number of unique practitioners and practices delivering E&M services ranged from 1.82 [1.79-1.84] to 6.94 [6.91-6.98] and 1.48 [1.46-1.50] to 4.98 [4.95-5.00] respectively. The percentage of cognitive services delivered by primary care practitioners ranged from 23.8% to 67.9% across segments, with significant variability among specialty types. Most high cost Medicare beneficiaries can be identified based on a single clinical reason and are managed by different practitioners. Population segmentation holds potential to improve efficiency in the Medicare population by identifying opportunities to improve care for specific populations and managing clinicians, and forecasting and evaluating the impact of specific interventions. Copyright © 2015 Elsevier Inc. All rights reserved.
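The positive-predictive values reported for each segment follow the usual definition: of the beneficiaries a segment flags, the fraction who actually meet the payment threshold. A minimal sketch, assuming beneficiaries are represented by hashable IDs (an assumption of this illustration):

```python
def positive_predictive_value(flagged, above_threshold):
    # PPV = |flagged ∩ above_threshold| / |flagged|:
    # of the beneficiaries assigned to a segment, the share
    # whose annual payments actually exceed the threshold.
    flagged, high = set(flagged), set(above_threshold)
    if not flagged:
        return 0.0
    return len(flagged & high) / len(flagged)
```

Sensitivity is the complementary question (of all high-cost beneficiaries, how many the segments capture), which is why the abstract reports both.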

  4. Influenza virus reassortment occurs with high frequency in the absence of segment mismatch.

    PubMed

    Marshall, Nicolle; Priyamvada, Lalita; Ende, Zachary; Steel, John; Lowen, Anice C

    2013-01-01

    Reassortment is fundamental to the evolution of influenza viruses and plays a key role in the generation of epidemiologically significant strains. Previous studies indicate that reassortment is restricted by segment mismatch, arising from functional incompatibilities among components of two viruses. Additional factors that dictate the efficiency of reassortment remain poorly characterized. Thus, it is unclear what conditions are favorable for reassortment and therefore under what circumstances novel influenza A viruses might arise in nature. Herein, we describe a system for studying reassortment in the absence of segment mismatch and exploit this system to determine the baseline efficiency of reassortment and the effects of infection dose and timing. Silent mutations were introduced into A/Panama/2007/99 virus such that high-resolution melt analysis could be used to differentiate all eight segments of the wild-type and the silently mutated variant virus. The use of phenotypically identical parent viruses ensured that all progeny were equally fit, allowing reassortment to be measured without selection bias. Using this system, we found that reassortment occurred efficiently (88.4%) following high multiplicity infection, suggesting the process is not appreciably limited by intracellular compartmentalization. That co-infection is the major determinant of reassortment efficiency in the absence of segment mismatch was confirmed with the observation that the proportion of viruses with reassortant genotypes increased exponentially with the proportion of cells co-infected. The number of reassortants shed from co-infected guinea pigs was likewise dependent on dose. With 10⁶ PFU inocula, 46%-86% of viruses isolated from guinea pigs were reassortants. The introduction of a delay between infections also had a strong impact on reassortment and allowed definition of time windows during which super-infection led to reassortment in culture and in vivo. 
Overall, our results indicate that reassortment between two like influenza viruses is efficient but also strongly dependent on dose and timing of the infections.
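    A quick back-of-the-envelope check of the 88.4% figure above (my sketch, not from the paper): if each of the eight genome segments of a progeny virion is drawn independently and with equal probability from either of two co-infecting parents, the expected reassortant fraction per co-infected cell is 1 − 2(1/2)⁸ ≈ 99.2%, so the observed high-multiplicity efficiency approaches, but does not reach, the free-reassortment ceiling.

```python
N_SEGMENTS = 8  # influenza A genome segments

def free_reassortment_fraction(n_segments: int = N_SEGMENTS) -> float:
    # A progeny virion is a reassortant unless all of its segments come
    # from the same parent: only 2 of the 2**n genotypes are parental.
    return 1.0 - 2.0 * (0.5 ** n_segments)

ceiling = free_reassortment_fraction()
print(f"free-reassortment ceiling: {ceiling:.1%}")  # → 99.2%
```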

  5. A Way to Select Electrical Sheets of the Segment Stator Core Motors.

    NASA Astrophysics Data System (ADS)

    Enomoto, Yuji; Kitamura, Masashi; Sakai, Toshihiko; Ohara, Kouichiro

    The segment stator core, high-density winding coil, and high-energy-product permanent magnet are indispensable technologies in the development of compact, highly efficient motors. The conventional design method for the segment stator core depended largely on experience in selecting a suitable electromagnetic material, and was far from an optimized design. Therefore, we have developed a novel design method for selecting a suitable electromagnetic material, based on evaluating the correlation between material characteristics and motor performance. It enables the selection of an electromagnetic material that will meet the motor specification.

  6. Atlas-based segmentation technique incorporating inter-observer delineation uncertainty for whole breast

    NASA Astrophysics Data System (ADS)

    Bell, L. R.; Dowling, J. A.; Pogson, E. M.; Metcalfe, P.; Holloway, L.

    2017-01-01

    Accurate, efficient auto-segmentation methods are essential for the clinical efficacy of adaptive radiotherapy delivered with highly conformal techniques. Current atlas-based auto-segmentation techniques are adequate in this respect; however, they fail to account for inter-observer variation. An atlas-based segmentation method that incorporates inter-observer variation is proposed. This method is validated on a whole breast radiotherapy cohort containing 28 CT datasets with CTVs delineated by eight observers. To optimise atlas accuracy, the cohort was divided into categories by mean body mass index and laterality, with atlases generated for each in a leave-one-out approach. Observer CTVs were merged and thresholded to generate an auto-segmentation model representing both inter-observer and inter-patient differences. For each category, the atlas was registered to the left-out dataset to enable propagation of the auto-segmentation from atlas space. Auto-segmentation time was recorded. The segmentation was compared to the gold-standard contour using the Dice similarity coefficient (DSC) and mean absolute surface distance (MASD). Comparison with the smallest and largest CTV was also made. This atlas-based auto-segmentation method incorporating inter-observer variation was shown to be efficient (<4 min) and accurate for whole breast radiotherapy, with good agreement (DSC > 0.7, MASD < 9.3 mm) between the auto-segmented contours and CTV volumes.
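    The two agreement metrics quoted above can be sketched for binary masks as follows (a minimal illustration with synthetic squares, not the study's code; the MASD here is a simplified symmetric average over both surfaces, in pixel units):

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    # Dice similarity coefficient: 2|A∩B| / (|A| + |B|)
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface(mask):
    # boundary pixels: in the mask but not in its erosion
    return mask & ~ndimage.binary_erosion(mask)

def masd(a, b):
    # symmetric mean absolute surface distance via distance transforms
    sa, sb = surface(a.astype(bool)), surface(b.astype(bool))
    dist_to_b = ndimage.distance_transform_edt(~sb)
    dist_to_a = ndimage.distance_transform_edt(~sa)
    return 0.5 * (dist_to_b[sa].mean() + dist_to_a[sb].mean())

# two overlapping square "contours" standing in for observer and auto CTVs
a = np.zeros((12, 12), dtype=bool); a[3:9, 3:9] = True
b = np.zeros((12, 12), dtype=bool); b[4:10, 4:10] = True
print(f"DSC = {dice(a, b):.3f}, MASD = {masd(a, b):.2f} px")
```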

  7. 3-D segmentation of articular cartilages by graph cuts using knee MR images from osteoarthritis initiative

    NASA Astrophysics Data System (ADS)

    Shim, Hackjoon; Lee, Soochan; Kim, Bohyeong; Tao, Cheng; Chang, Samuel; Yun, Il Dong; Lee, Sang Uk; Kwoh, Kent; Bae, Kyongtae

    2008-03-01

    Knee osteoarthritis is the most common debilitating health condition affecting the elderly population. MR imaging of the knee is highly sensitive for diagnosis and evaluation of the extent of knee osteoarthritis. Quantitative analysis of the progression of osteoarthritis is commonly based on segmentation and measurement of articular cartilage from knee MR images. Segmentation of the knee articular cartilage, however, is extremely laborious and technically demanding, because the cartilage has a complex geometry and is thin and small. To improve the precision and efficiency of cartilage segmentation, we have applied a semi-automated segmentation method based on an s/t graph cut algorithm. The cost function was defined by integrating regional and boundary cues. While regional cues can encode any intensity distributions of the two regions, "object" (cartilage) and "background" (the rest), boundary cues are based on the intensity differences between neighboring pixels. For three-dimensional (3-D) segmentation, hard constraints are also specified in a 3-D manner, facilitating user interaction. When our proposed semi-automated method was tested on clinical patients' MR images (160 slices, 0.7 mm slice thickness), a considerable amount of segmentation time was saved with improved efficiency, compared to a manual segmentation approach.
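    The s/t graph-cut formulation can be illustrated on a toy 1-D "image" (a sketch under my own choice of regional and boundary costs, using SciPy's max-flow solver rather than the authors' implementation): terminal links carry the regional costs, neighbor links carry the boundary costs, and the minimum cut yields the object/background labeling.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow

I = np.array([52, 48, 55, 198, 205, 201, 47, 50])  # pixel intensities
mu_obj, mu_bg = 200, 50                            # rough region models
n = len(I)
s, t = 0, n + 1                                    # source and sink node ids

cap = np.zeros((n + 2, n + 2), dtype=np.int32)
for p in range(n):
    cap[s, p + 1] = abs(int(I[p]) - mu_bg)    # paid if p is labeled background
    cap[p + 1, t] = abs(int(I[p]) - mu_obj)   # paid if p is labeled object
for p in range(n - 1):                        # contrast-sensitive boundary term
    w = 1 + 400 // (1 + (int(I[p]) - int(I[p + 1])) ** 2)
    cap[p + 1, p + 2] = cap[p + 2, p + 1] = w

res = maximum_flow(csr_matrix(cap), s, t)
residual = cap - res.flow.toarray()

# pixels still reachable from the source in the residual graph are "object"
reach, stack = {s}, [s]
while stack:
    u = stack.pop()
    for v in range(n + 2):
        if v not in reach and residual[u, v] > 0:
            reach.add(v)
            stack.append(v)
labels = [int(p + 1 in reach) for p in range(n)]
print(labels)  # → [0, 0, 0, 1, 1, 1, 0, 0]
```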

  8. Segmented media and medium damping in microwave assisted magnetic recording

    NASA Astrophysics Data System (ADS)

    Bai, Xiaoyu; Zhu, Jian-Gang

    2018-05-01

    In this paper, we present a methodology of segmented media stack design for microwave assisted magnetic recording. Through micro-magnetic modeling, it is demonstrated that an optimized media segmentation is able to yield high signal-to-noise ratio even with limited ac field power. With proper segmentation, the ac field power could be utilized more efficiently and this can alleviate the requirement for medium damping which has been previously considered a critical limitation. The micro-magnetic modeling also shows that with segmentation optimization, recording signal-to-noise ratio can have very little dependence on damping for different recording linear densities.

  9. Recombinant cells that highly express chromosomally-integrated heterologous genes

    DOEpatents

    Ingram, L.O.; Ohta, Kazuyoshi; Wood, B.E.

    1998-10-13

    Recombinant host cells are obtained that comprise (A) a heterologous, polypeptide-encoding polynucleotide segment, stably integrated into a chromosome, which is under transcriptional control of an endogenous promoter and (B) a mutation that effects increased expression of the heterologous segment, resulting in enhanced production by the host cells of each polypeptide encoded by that segment, relative to production of each polypeptide by the host cells in the absence of the mutation. The increased expression thus achieved is retained in the absence of conditions that select for cells displaying such increased expression. When the integrated segment comprises, for example, ethanol-production genes from an efficient ethanol producer like Zymomonas mobilis, recombinant Escherichia coli and other enteric bacterial cells within the present invention are capable of converting a wide range of biomass-derived sugars efficiently to ethanol. 13 figs.

  10. Recombinant cells that highly express chromosomally-integrated heterologous genes

    DOEpatents

    Ingram, Lonnie O.; Ohta, Kazuyoshi; Wood, Brent E.

    1998-01-01

    Recombinant host cells are obtained that comprise (A) a heterologous, polypeptide-encoding polynucleotide segment, stably integrated into a chromosome, which is under transcriptional control of an endogenous promoter and (B) a mutation that effects increased expression of the heterologous segment, resulting in enhanced production by the host cells of each polypeptide encoded by that segment, relative to production of each polypeptide by the host cells in the absence of the mutation. The increased expression thus achieved is retained in the absence of conditions that select for cells displaying such increased expression. When the integrated segment comprises, for example, ethanol-production genes from an efficient ethanol producer like Zymomonas mobilis, recombinant Escherichia coli and other enteric bacterial cells within the present invention are capable of converting a wide range of biomass-derived sugars efficiently to ethanol.

  11. Recombinant cells that highly express chromosomally-integrated heterologous gene

    DOEpatents

    Ingram, Lonnie O.; Ohta, Kazuyoshi; Wood, Brent E.

    2007-03-20

    Recombinant host cells are obtained that comprise (A) a heterologous, polypeptide-encoding polynucleotide segment, stably integrated into a chromosome, which is under transcriptional control of an endogenous promoter and (B) a mutation that effects increased expression of the heterologous segment, resulting in enhanced production by the host cells of each polypeptide encoded by that segment, relative to production of each polypeptide by the host cells in the absence of the mutation. The increased expression thus achieved is retained in the absence of conditions that select for cells displaying such increased expression. When the integrated segment comprises, for example, ethanol-production genes from an efficient ethanol producer like Zymomonas mobilis, recombinant Escherichia coli and other enteric bacterial cells within the present invention are capable of converting a wide range of biomass-derived sugars efficiently to ethanol.

  12. Recombinant cells that highly express chromosomally-integrated heterologous genes

    DOEpatents

    Ingram, Lonnie O.; Ohta, Kazuyoshi; Wood, Brent E.

    2000-08-22

    Recombinant host cells are obtained that comprise (A) a heterologous, polypeptide-encoding polynucleotide segment, stably integrated into a chromosome, which is under transcriptional control of an endogenous promoter and (B) a mutation that effects increased expression of the heterologous segment, resulting in enhanced production by the host cells of each polypeptide encoded by that segment, relative to production of each polypeptide by the host cells in the absence of the mutation. The increased expression thus achieved is retained in the absence of conditions that select for cells displaying such increased expression. When the integrated segment comprises, for example, ethanol-production genes from an efficient ethanol producer like Zymomonas mobilis, recombinant Escherichia coli and other enteric bacterial cells within the present invention are capable of converting a wide range of biomass-derived sugars efficiently to ethanol.

  13. Modelling of segmented high-performance thermoelectric generators with effects of thermal radiation, electrical and thermal contact resistances

    PubMed Central

    Ouyang, Zhongliang; Li, Dawen

    2016-01-01

    In this study, segmented thermoelectric generators (TEGs) have been simulated with various state-of-the-art TE materials spanning a wide temperature range, from 300 K up to 1000 K. The results reveal that by combining the current best p-type TE materials, BiSbTe, MgAgSb, K-doped PbTeS and SnSe, with the strongest n-type TE materials, Cu-doped BiTeSe, AgPbSbTe and SiGe, to build segmented legs, TE modules could achieve efficiencies of up to 17.0% and 20.9% at ΔT = 500 K and ΔT = 700 K, respectively, and high output power densities of over 2.1 W cm−2 at a temperature difference of 700 K. Moreover, we demonstrate that successful segmentation requires a smooth change of the compatibility factor s from one end of the TEG leg to the other, even if the s values of the two ends differ by more than a factor of 2. The influence of thermal radiation and of electrical and thermal contact resistances has also been studied. Although considered potentially detrimental to TEG performance, these effects, if well regulated, do not prevent segmentation of the current best TE materials from being a prospective way to construct high-performance TEGs with greatly enhanced efficiency and output power density. PMID:27052592

  14. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets.

    PubMed

    Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing

    2017-03-01

    Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and the average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential for clinical usage with high effectiveness, robustness and efficiency.

  15. An efficient global energy optimization approach for robust 3D plane segmentation of point clouds

    NASA Astrophysics Data System (ADS)

    Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian

    2018-03-01

    Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most of the existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization because it is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performances of the proposed method are evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performances both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET)
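    The initial region-growing step can be sketched in miniature (my simplification: units are merged purely on normal-vector similarity; the paper's hybrid algorithm uses richer features and a subsequent global energy refinement):

```python
import numpy as np

def grow_regions(normals, adjacency, angle_thresh_deg=10.0):
    # merge adjacent basic units while their plane normals stay within
    # an angular threshold (dot product of unit normals vs. cos(threshold))
    cos_t = np.cos(np.radians(angle_thresh_deg))
    labels = -np.ones(len(normals), dtype=int)
    region = 0
    for seed in range(len(normals)):
        if labels[seed] != -1:
            continue
        labels[seed] = region
        stack = [seed]
        while stack:
            u = stack.pop()
            for v in adjacency[u]:
                if labels[v] == -1 and normals[u] @ normals[v] >= cos_t:
                    labels[v] = region
                    stack.append(v)
        region += 1
    return labels

# six units along a strip: three on a floor plane, three on a wall plane
normals = np.array([[0, 0, 1], [0, 0, 1], [0, 0, 1],
                    [0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=float)
adjacency = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
print(grow_regions(normals, adjacency))  # → [0 0 0 1 1 1]
```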

  16. Nucleus detection using gradient orientation information and linear least squares regression

    NASA Astrophysics Data System (ADS)

    Kwak, Jin Tae; Hewitt, Stephen M.; Xu, Sheng; Pinto, Peter A.; Wood, Bradford J.

    2015-03-01

    Computerized histopathology image analysis enables an objective, efficient, and quantitative assessment of digitized histopathology images. Such analysis often requires accurate and efficient detection and segmentation of histological structures such as glands, cells and nuclei. The segmentation is used to characterize tissue specimens and to determine disease status or outcomes. The segmentation of nuclei, in particular, is challenging due to overlapping or clumped nuclei. Here, we propose a nuclei seed detection method for individual and overlapping nuclei that utilizes gradient orientation information. The initial nuclei segmentation is provided by a multiview boosting approach. The angle of the gradient orientation is computed and traced along the nuclear boundaries. Taking the first derivative of the angle of the gradient orientation, high-concavity points (junctions) are discovered. False junctions are found and removed by adopting a greedy search scheme with a goodness-of-fit statistic in the linear least squares sense. The junctions then determine boundary segments. Partial boundary segments belonging to the same nucleus are identified and combined by examining the overlapping area between them. Using the final set of boundary segments, we generate the list of seeds in tissue images. The method achieved an overall precision of 0.89 and a recall of 0.88 in comparison to the manual segmentation.
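    The junction-detection idea — flagging high-concavity points from the change of boundary direction — can be sketched on a toy closed contour (a simplified stand-in for the gradient-orientation tracing; the polygon and sign test are my illustrative choices):

```python
import numpy as np

def concave_junctions(contour):
    # signed turn at each vertex of a counter-clockwise closed contour;
    # a negative cross product marks a concave (reentrant) vertex, the
    # candidate junction between touching nuclei
    contour = np.asarray(contour, dtype=float)
    prev = np.roll(contour, 1, axis=0)
    nxt = np.roll(contour, -1, axis=0)
    v_in, v_out = contour - prev, nxt - contour
    cross = v_in[:, 0] * v_out[:, 1] - v_in[:, 1] * v_out[:, 0]
    return np.where(cross < 0)[0]

# L-shaped CCW contour: the single reentrant corner is the "junction"
contour = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
print(concave_junctions(contour))  # → [3]
```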

  17. Unipolar Barrier Dual-Band Infrared Detectors

    NASA Technical Reports Server (NTRS)

    Ting, David Z. (Inventor); Soibel, Alexander (Inventor); Khoshakhlagh, Arezou (Inventor); Gunapala, Sarath (Inventor)

    2017-01-01

    Dual-band barrier infrared detectors having structures configured to reduce spectral crosstalk between spectral bands and/or enhance quantum efficiency, and methods of their manufacture, are provided. In particular, dual-band device structures are provided for constructing high-performance barrier infrared detectors having reduced crosstalk and/or enhanced quantum efficiency using novel multi-segmented absorber regions. The novel absorber regions may comprise both p-type and n-type absorber sections. Utilizing such multi-segmented absorbers it is possible to construct any suitable barrier infrared detector having reduced crosstalk, including npBPN, nBPN, pBPN, npBN, npBP, pBN and nBP structures. The pBPN and pBN detector structures have high quantum efficiency and suppress dark current, but have a smaller etch depth than conventional detectors and do not require a thick bottom contact layer.

  18. An efficient and high fidelity method for amplification, cloning and sequencing of complete tospovirus genomic RNA segments

    USDA-ARS?s Scientific Manuscript database

    Amplification and sequencing of the complete M- and S-RNA segments of Tomato spotted wilt virus and Impatiens necrotic spot virus as a single fragment is useful for whole genome sequencing of tospoviruses co-infecting a single host plant. It avoids issues associated with overlapping amplicon-based ...

  19. SU-E-T-356: Efficient Segmentation of Flattening Filter Free Photon Beamsfor 3D-Conformal SBRT Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbiere, J; Beninati, G; Ndlovu, A

    2015-06-15

    Purpose: It has been argued that a 3D-conformal technique (3DCRT) is suitable for SBRT due to its simplicity for non-coplanar planning and delivery. It has also been hypothesized that a high dose delivered in a short time can enhance indirect cell death due to vascular damage, as well as limiting intrafraction motion. Flattening Filter Free (FFF) photon beams are ideal for high-dose-rate treatment, but their conical profiles are not ideal for 3DCRT. The purpose of our work is to present a method to efficiently segment an FFF beam for standard 3DCRT planning. Methods: A 10×10 cm Varian TrueBeam 6X FFF beam profile was analyzed using segmentation theory to determine the optimum segmentation intensity required to create an 8 cm uniform dose profile. Two segments were automatically created in sequence with a Varian Eclipse treatment planning system by converting isodoses corresponding to the calculated segmentation intensity to contours and applying the "fit and shield" tool. All segments were then added to the FFF beam to create a single merged field. Field blocking can be incorporated but was not used, for clarity. Results: Calculation of the segmentation intensity using an algorithm originally proposed by Xia and Verhey indicated that each segment should extend to the 92% isodose. The original FFF beam, with 100% at the isocenter at a depth of 10 cm, was reduced to 80% at 4 cm from the isocenter; the segmented beam had ±2.5% uniformity up to 4.4 cm from the isocenter. An additional benefit of our method is a 50% decrease in the 80%-20% penumbra: 0.6 cm compared with 1.2 cm in the original FFF beam. Conclusion: Creation of two optimum segments can flatten an FFF beam and also reduce its penumbra for clinical 3DCRT SBRT treatment.

  20. Reflection type metasurface designed for high efficiency vectorial field generation

    NASA Astrophysics Data System (ADS)

    Wang, Shiyi; Zhan, Qiwen

    2016-07-01

    We propose a reflection type metal-insulator-metal (MIM) metasurface composed of hybrid nano-antennas for comprehensive spatial engineering of the properties of optical fields. The capability of such a structure is illustrated in the design of a device that can be used to produce a radially polarized vectorial beam for optical needle field generation. This device consists of uniformly segmented sectors of high efficiency MIM metasurface. With each segment sector functioning as a local quarter-wave plate (QWP), the device is designed to convert circularly polarized incidence into local linear polarization to create an overall radial polarization with corresponding binary phases and extremely high dynamic-range amplitude modulation. The capability of such devices enables the generation of nearly arbitrarily complex optical fields that may find broad applications transcending disciplinary boundaries.

  1. Ultrasound-propelled nanoporous gold wire for efficient drug loading and release.

    PubMed

    Garcia-Gradilla, Victor; Sattayasamitsathit, Sirilak; Soto, Fernando; Kuralay, Filiz; Yardımcı, Ceren; Wiitala, Devan; Galarnyk, Michael; Wang, Joseph

    2014-10-29

    Ultrasound (US)-powered nanowire motors based on a nanoporous gold segment are developed to increase the drug loading capacity. The new highly porous nanomotors are characterized by a tunable pore size, high surface area, and high capacity for the drug payload. These nanowire motors are prepared by template membrane deposition of a silver-gold alloy segment followed by dealloying of the silver component. The drug doxorubicin (DOX) is loaded within the nanopores via electrostatic interactions with an anionic polymeric coating. The nanoporous gold structure also facilitates near-infrared (NIR) light-controlled release of the drug through photothermal effects. Ultrasound-driven transport of the loaded drug toward cancer cells followed by NIR-light-triggered release is illustrated. The incorporation of the nanoporous gold segment leads to a nearly 20-fold increase in active surface area compared to common gold nanowire motors. It is envisioned that such US-powered nanomotors could provide a new approach to rapidly and efficiently deliver large therapeutic payloads in a target-specific manner. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Direct volume estimation without segmentation

    NASA Astrophysics Data System (ADS)

    Zhen, X.; Wang, Z.; Islam, A.; Bhaduri, M.; Chan, I.; Li, S.

    2015-03-01

    Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes including the left ventricle (LV) and right ventricle (RV) are important clinical indicators of cardiac function. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac function and the diagnosis of heart disease. Conventional methods depend on an intermediate segmentation step which is obtained either manually or automatically. However, manual segmentation is extremely time-consuming, subjective and highly non-reproducible, while automatic segmentation is still challenging, computationally expensive, and completely unsolved for the RV. Towards accurate and efficient direct volume estimation, our group has been researching learning-based methods that avoid segmentation by leveraging state-of-the-art machine learning techniques. Our direct estimation methods remove the intermediate step of segmentation and can naturally deal with various volume estimation tasks. Moreover, they are extremely flexible and can be used for volume estimation of either joint bi-ventricles (LV and RV) or the individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation by comparing with segmentation-based methods. Experimental results show that direct estimation methods provide more accurate estimation of cardiac ventricular volumes than segmentation-based methods. This indicates that direct estimation methods not only provide a convenient and mature clinical tool for cardiac volume estimation but also enable diagnosis of cardiac diseases to be conducted in a more efficient and reliable way.
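    A minimal caricature of direct (segmentation-free) estimation, assuming a plain linear regressor and hand-picked global intensity features rather than the authors' learned models:

```python
import numpy as np

rng = np.random.default_rng(0)

def features(img):
    # crude global descriptors standing in for learned features
    return np.array([1.0, (img > 0.5).mean(), img.mean(), img.std()])

def make_image(radius, size=32):
    # synthetic "scan": a bright disc (the structure of interest) on mild noise
    y, x = np.mgrid[:size, :size] - size / 2
    img = (x**2 + y**2 <= radius**2).astype(float)
    return img + 0.05 * rng.standard_normal(img.shape)

radii = np.linspace(4, 12, 30)
X = np.array([features(make_image(r)) for r in radii])
vols = np.pi * radii**2                        # ground-truth "volumes"

w, *_ = np.linalg.lstsq(X, vols, rcond=None)   # train the direct estimator
pred = features(make_image(9.0)) @ w           # estimate without segmenting
print(f"true {np.pi * 81:.1f}, predicted {pred:.1f}")
```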

  3. An image segmentation method based on fuzzy C-means clustering and Cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Mingwei; Wan, Youchuan; Gao, Xianjun; Ye, Zhiwei; Chen, Maolin

    2018-04-01

    Image segmentation is a significant step in image analysis and machine vision. Many approaches have been presented on this topic; among them, fuzzy C-means (FCM) clustering is one of the most widely used methods for its high efficiency and its ability to handle the ambiguity of images. However, the success of FCM cannot be guaranteed because it is easily trapped in local optima. Cuckoo search (CS) is a novel evolutionary algorithm which has been tested on several optimization problems and proved to be highly efficient. Therefore, a new segmentation technique blending FCM with the CS algorithm is put forward in this paper. Further, the proposed method has been evaluated on several images and compared with other existing FCM techniques, such as genetic algorithm (GA) based FCM and particle swarm optimization (PSO) based FCM, in terms of fitness value. Experimental results indicate that the proposed method is robust and adaptive and exhibits better performance than the other methods involved in the paper.
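    For reference, the FCM iteration at the core of such methods can be sketched as follows (bare-bones implementation with fuzzifier m = 2 on 1-D intensities; the Cuckoo-search component itself is not reproduced here):

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                      # memberships: columns sum to 1
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)   # membership-weighted centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))    # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

# a dark population and a bright population, as in two-class segmentation
x = np.concatenate([np.full(50, 40.0), np.full(50, 200.0)])
centers, u = fcm(x)
print(np.sort(centers))
```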

  4. NASA's mobile satellite communications program; ground and space segment technologies

    NASA Technical Reports Server (NTRS)

    Naderi, F.; Weber, W. J.; Knouse, G. H.

    1984-01-01

    This paper describes the Mobile Satellite Communications Program of the United States National Aeronautics and Space Administration (NASA). The program's objectives are to facilitate the deployment of the first-generation commercial mobile satellite by the private sector, and to technologically enable future generations by developing advanced and high-risk ground and space segment technologies. These technologies are aimed at mitigating severe shortages of spectrum, orbital slots, and spacecraft EIRP which are expected to plague the high-capacity mobile satellite systems of the future. After a brief introduction of the concept of mobile satellite systems and their expected evolution, this paper outlines the critical ground and space segment technologies. Next, the Mobile Satellite Experiment (MSAT-X) is described. MSAT-X is the framework through which NASA will develop advanced ground segment technologies. An approach is outlined for the development of conformal vehicle antennas, spectrum- and power-efficient speech codecs, modulation techniques for use in nonlinear faded channels, and efficient multiple-access schemes. Finally, the paper concludes with a description of the current and planned NASA activities aimed at developing the complex large multibeam spacecraft antennas needed for future-generation mobile satellite systems.

  5. GPU accelerated fuzzy connected image segmentation by using CUDA.

    PubMed

    Zhuge, Ying; Cao, Yong; Miller, Robert W

    2009-01-01

    Image segmentation techniques using fuzzy connectedness principles have shown their effectiveness in segmenting a variety of objects in several large applications in recent years. However, one problem of these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides high parallel computing power. In this paper, we present a parallel fuzzy connected image segmentation algorithm on Nvidia's Compute Unified Device Architecture (CUDA) platform for segmenting large medical image data sets. Our experiments, based on three data sets of small, medium, and large size, demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 7.2x, 7.3x, and 14.4x, respectively, for the three data sets over the sequential implementation of the fuzzy connected image segmentation algorithm on a CPU.

  6. Follow-up of coronary artery bypass graft patency: diagnostic efficiency of high-pitch dual-source 256-slice MDCT findings.

    PubMed

    Yuceler, Zeyneb; Kantarci, Mecit; Yuce, Ihsan; Kizrak, Yesim; Bayraktutan, Ummugulsum; Ogul, Hayri; Kiris, Adem; Celik, Omer; Pirimoglu, Berhan; Genc, Berhan; Gundogdu, Fuat

    2014-01-01

    Our aim was to evaluate the diagnostic accuracy of 256-slice, high-pitch mode multidetector computed tomography (MDCT) for coronary artery bypass graft (CABG) patency. Eighty-eight patients underwent 256-slice MDCT angiography to evaluate their graft patency after CABG surgery, using prospective electrocardiogram synchronization in the high-pitch spiral acquisition mode. Effective radiation doses were calculated. We investigated the diagnostic accuracy of high-pitch, low-dose, prospectively electrocardiogram-triggered, dual-source MDCT for CABG patency compared with catheter coronary angiography findings. A total of 215 grafts and 645 vessel segments were analyzed. All graft segments had diagnostic image quality. The proximal and middle graft segments had significantly (P < 0.05) better mean image quality scores (1.18 ± 0.4) than the distal segments (1.31 ± 0.5). Using catheter coronary angiography as the reference standard, high-pitch MDCT had the following per-segment sensitivity, specificity, positive predictive value, and negative predictive value for detecting graft patency: 97.1%, 99.6%, 94.4%, and 99.8%, respectively. In conclusion, MDCT can be used noninvasively, with a lower radiation dose, for the assessment of restenosis in CABG patients.
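    The four accuracy figures reported above follow from a standard 2×2 confusion matrix; a sketch with illustrative counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),  # true positives among actual positives
        "specificity": tn / (tn + fp),  # true negatives among actual negatives
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# illustrative per-segment counts for a rare-positive screening setting
m = diagnostic_metrics(tp=33, fp=2, tn=608, fn=1)
for name, value in m.items():
    print(f"{name}: {value:.1%}")
```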

  7. Compatibility of segmented thermoelectric generators

    NASA Technical Reports Server (NTRS)

    Snyder, J.; Ursell, T.

    2002-01-01

    It is well known that power generation efficiency improves when materials with appropriate properties are combined either in a cascaded or segmented fashion across a temperature gradient. Past methods for selecting materials for segmentation were mainly concerned with materials that have the highest figure of merit in the temperature range. However, the example of SiGe segmented with Bi2Te3 and/or various skutterudites shows a marked decline in device efficiency even though SiGe has the highest figure of merit in its temperature range. The origin of the incompatibility of SiGe with other thermoelectric materials leads to a general definition of compatibility and intrinsic efficiency. The compatibility factor, derived as s = (√(1+zT) − 1)/(αT), is a function of only intrinsic material properties and temperature, and represents a ratio of current to conduction heat. For maximum efficiency the compatibility factor should not change with temperature, both within a single material and in the segmented leg as a whole. This leads to a measure of compatibility not only between segments, but also within a segment. General temperature trends show that materials are more self-compatible at higher temperatures, and that segmentation is more difficult across a larger ΔT. The compatibility factor can be used as a quantitative guide for deciding whether a material is better suited for segmentation or cascading. Analysis of compatibility factors and intrinsic efficiency for optimal segmentation is discussed, with intent to predict optimal material properties, temperature interfaces, and/or current-to-heat ratios.
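    A small numeric illustration of the compatibility criterion, using the standard definition s = (√(1+zT) − 1)/(αT). The material numbers below are rough, literature-style values chosen for illustration, not measurements: when the compatibility factors of two prospective segments differ by much more than a factor of 2, segmentation is ill-advised.

```python
import math

def compatibility_factor(z_per_K, alpha_V_per_K, T_K):
    # s = (sqrt(1 + zT) - 1) / (alpha * T), in units of 1/V
    zT = z_per_K * T_K
    return (math.sqrt(1.0 + zT) - 1.0) / (alpha_V_per_K * T_K)

# a Bi2Te3-like cold segment near 400 K vs. a SiGe-like hot segment near 900 K
s_cold = compatibility_factor(z_per_K=2.5e-3, alpha_V_per_K=220e-6, T_K=400.0)
s_hot = compatibility_factor(z_per_K=0.6e-3, alpha_V_per_K=150e-6, T_K=900.0)
print(f"s_cold = {s_cold:.2f} 1/V, s_hot = {s_hot:.2f} 1/V, "
      f"ratio = {s_cold / s_hot:.1f}")
# a ratio well above ~2 signals incompatibility: neither segment can run
# near its efficiency-maximizing current/heat ratio without hurting the other
```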

  8. Crew Survivable Helicopter Undercarriage.

    DTIC Science & Technology

    1984-01-01

    used to refine the high-rate test specimens and were compared to other literature data on a specific load per length and energy per inch of perimeter...marked improvement in energy efficiency was observed with no joint failures. Seven of the eight segments shipped to NASA for high-rate testing were...provides the weight and performance criteria used to evaluate the energy absorbing efficiency of the rotated sine wave concept. Next, the low-rate and high

  9. Selective suppression of high-order harmonics within phase-matched spectral regions.

    PubMed

    Lerner, Gavriel; Diskin, Tzvi; Neufeld, Ofer; Kfir, Ofer; Cohen, Oren

    2017-04-01

    Phase matching in high-harmonic generation leads to enhancement of multiple harmonics. It is sometimes desired to control the spectral structure within the phase-matched spectral region. We propose a scheme for selective suppression of high-order harmonics within the phase-matched spectral region while weakly influencing the other harmonics. The method is based on addition of phase-mismatched segments within a phase-matched medium. We demonstrate the method numerically in two examples. First, we show that one phase-mismatched segment can significantly suppress harmonic orders 9, 15, and 21. Second, we show that two phase-mismatched segments can efficiently suppress circularly polarized harmonics with one helicity over the other when driven by a bi-circular field. The new method may be useful for various applications, including the generation of highly helical bright attosecond pulses.

  10. Spatial domain entertainment audio decompression/compression

    NASA Astrophysics Data System (ADS)

    Chan, Y. K.; Tam, Ka Him K.

    2014-02-01

    The ARM7 NEON processor with a 128-bit SIMD hardware accelerator requires a peak performance of 13.99 mega cycles per second for MP3 stereo entertainment-quality decoding. For a similar compression bit rate, OGG and AAC are preferred over MP3. The Patent Cooperation Treaty application dated 28/August/2012 describes an audio decompression scheme producing a sequence of interleaving "min to Max" and "Max to min" rising and falling segments. The number of interior audio samples bound by a "min to Max" or "Max to min" segment can be {0|1|…|N} audio samples. The magnitudes of samples, including the bounding min and Max, are distributed as constants normalized between 0 and 1 relative to the bounding magnitudes. The decompressed audio is then a "sequence of static segments" on a frame-by-frame basis. Some of these frames need to be post-processed to elevate high frequency. The post-processing is compression-efficiency neutral, and the additional decoding complexity is only a small fraction of the overall decoding complexity, without the need for extra hardware. Compression efficiency can be speculated to be very high, as the source audio has been decimated and converted to a set of data with only "segment length and corresponding segment magnitude" attributes. The PCT describes how these two attributes are efficiently coded by the PCT's innovative coding scheme. The PCT decoding efficiency is very high and decoding latency is essentially zero. Both hardware requirement and run time are at least an order of magnitude better than MP3 variants. A side benefit is ultra-low power consumption on mobile devices. The acid test of whether such a simplistic waveform representation can indeed reproduce authentic decompressed quality is benchmarked versus OGG (aoTuv Beta 6.03) with three pairs of stereo audio frames and one broadcast-like voice audio frame, each frame consisting of 2,028 samples at a 44,100 Hz sampling frequency.
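The "min to Max" / "Max to min" decomposition described above amounts to splitting a waveform at its local extrema into alternating monotonic runs; a minimal illustration (the function name and the toy signal are assumptions for illustration, not the PCT's actual algorithm):

```python
def monotonic_segments(samples):
    """Split a sample sequence into alternating rising ("min to Max") and
    falling ("Max to min") runs, cutting at each local extremum.
    Each extremum ends one run and starts the next, so adjacent runs
    share their bounding sample."""
    if len(samples) < 2:
        return [list(samples)]
    segments, current = [], [samples[0]]
    rising = samples[1] >= samples[0]  # direction of the first run
    for prev, cur in zip(samples, samples[1:]):
        if (cur >= prev) == rising:
            current.append(cur)        # still moving in the same direction
        else:
            segments.append(current)   # extremum reached: close this run
            current = [prev, cur]      # extremum also opens the next run
            rising = not rising
    segments.append(current)
    return segments
```

Each run can then be stored as just its length and bounding magnitudes, which is the attribute set the abstract says the coder compresses.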

  11. A fast and efficient segmentation scheme for cell microscopic image.

    PubMed

    Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H

    2007-04-27

    Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes have good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial, since it requires most of the processing time necessary to segment an image. The main contribution of this work is focused on how to reduce the complexity of decision functions produced by support vector machines (SVM) while preserving recognition rate. Vector quantization is used to reduce the inherent redundancy present in huge pixel databases (i.e., images with expert pixel segmentation). Hybrid color space design is also used to improve the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between recognition rate and processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover, posterior class pixel probability estimation is easy to compute with Platt's method. A new segmentation scheme using probabilistic pixel classification has then been developed. This scheme has several free parameters whose automatic selection must be dealt with, but existing criteria for evaluating segmentation quality are not well adapted for cell segmentation, especially when comparison with expert pixel segmentation must be achieved. Another important contribution of this paper is the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimisation of the new cell segmentation quality criterion produces efficient cell segmentation.
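The core idea of shrinking a redundant pixel database by vector quantization before SVM training can be sketched as follows; this is a generic illustration of the technique (the toy two-class feature data, the codeword count, and the RBF kernel are assumptions, not the paper's actual configuration):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy "pixel feature" data: two well-separated classes in a 3-D color-like space.
class0 = rng.normal(loc=0.2, scale=0.05, size=(500, 3))
class1 = rng.normal(loc=0.8, scale=0.05, size=(500, 3))

def quantize(features, n_codewords):
    """Vector quantization: replace a redundant pixel set by k-means codewords."""
    km = KMeans(n_clusters=n_codewords, n_init=10, random_state=0).fit(features)
    return km.cluster_centers_

# Train the SVM on 32 codewords instead of 1000 pixels: fewer support vectors,
# hence a cheaper per-pixel decision function at segmentation time.
X = np.vstack([quantize(class0, 16), quantize(class1, 16)])
y = np.array([0] * 16 + [1] * 16)
svm = SVC(kernel="rbf", probability=True).fit(X, y)
```

The `probability=True` flag enables Platt-style posterior estimates, matching the probabilistic pixel classification the abstract mentions.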

  12. Remote sensing image segmentation using local sparse structure constrained latent low rank representation

    NASA Astrophysics Data System (ADS)

    Tian, Shu; Zhang, Ye; Yan, Yimin; Su, Nan; Zhang, Junping

    2016-09-01

    Latent low-rank representation (LatLRR) has attracted considerable attention in the field of remote sensing image segmentation, due to its effectiveness in exploring the multiple subspace structures of data. However, the increasingly heterogeneous texture information in high spatial resolution remote sensing images leads to more severe interference between pixels in a local neighborhood, and LatLRR fails to capture the local complex structure information. Therefore, we present a local sparse structure constrained latent low-rank representation (LSSLatLRR) segmentation method, which explicitly imposes the local sparse structure constraint on LatLRR to capture the intrinsic local structure in manifold structure feature subspaces. The whole segmentation framework can be viewed as two stages in cascade. In the first stage, we use the local histogram transform to extract texture local histogram features (LHOG) at each pixel, which can efficiently capture complex and micro-texture patterns. In the second stage, a local sparse structure (LSS) formulation is established on LHOG, which aims to preserve the local intrinsic structure and enhance the relationship between pixels having similar local characteristics. Meanwhile, by integrating the LSS and the LatLRR, we can efficiently capture the local sparse and low-rank structure in the mixture of feature subspaces, and we adopt the subspace segmentation method to improve the segmentation accuracy. Experimental results on remote sensing images with different spatial resolutions show that, compared with three state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.

  13. Countering Beam Divergence Effects with Focused Segmented Scintillators for High DQE Megavoltage Active Matrix Imagers

    PubMed Central

    Liu, Langechuan; Antonuk, Larry E; Zhao, Qihua; El-Mohri, Youcef; Jiang, Hao

    2012-01-01

    The imaging performance of active matrix flat-panel imagers designed for megavoltage imaging (MV AMFPIs) is severely constrained by relatively low x-ray detection efficiency, which leads to a detective quantum efficiency (DQE) of only ~1%. Previous theoretical and empirical studies by our group have demonstrated the potential for addressing this constraint through utilization of thick, two-dimensional, segmented scintillators with optically isolated crystals. However, this strategy is constrained by degradation of high-frequency DQE resulting from spatial resolution loss at locations away from the central beam axis due to oblique incidence of radiation. To address this challenge, segmented scintillators constructed so that the crystals are individually focused toward the radiation source are proposed and theoretically investigated. The study was performed using Monte Carlo simulations of radiation transport to examine the modulation transfer function and DQE of focused segmented scintillators with thicknesses ranging from 5 to 60 mm. The results demonstrate that, independent of scintillator thickness, the introduction of focusing largely restores spatial resolution and DQE performance otherwise lost in thick, unfocused segmented scintillators. For the case of a 60 mm thick BGO scintillator and at a location 20 cm off the central beam axis, use of focusing improves DQE by up to a factor of ~130 at non-zero spatial frequencies. The results also indicate relatively robust tolerance of such scintillators to positional displacements, of up to 10 cm in the source-to-detector direction and 2 cm in the lateral direction, from their optimal focusing position, which could potentially enhance practical clinical use of focused segmented scintillators in MV AMFPIs. PMID:22854009

  14. Small intestine histomorphometry of beef cattle with divergent feed efficiency

    PubMed Central

    2013-01-01

    Background The provision of feed is a major cost in beef production. Therefore, the improvement of feed efficiency is warranted. The direct assessment of feed efficiency has limitations and alternatives are needed. Small intestine micro-architecture is associated with function and may be related to feed efficiency. The objective was to verify the potential histomorphological differences in the small intestine of animals with divergent feed efficiency. Methods From a population of 45 feedlot steers, 12 were selected with low-RFI (superior feed efficiency) and 12 with high-RFI (inferior feed efficiency) at the end of the finishing period. The animals were processed at 13.79 ± 1.21 months of age. Within 1.5 h of slaughter the gastrointestinal tract was collected and segments from duodenum and ileum were harvested. Tissue fragments were processed, sectioned and stained with hematoxylin and eosin. Photomicroscopy images were taken under 1000x magnification. For each animal 100 intestinal crypts were imaged, in a cross section view, from each of the two intestinal segments. Images were analyzed using the software ImageJ®. The measurements taken were: crypt area, crypt perimeter, crypt lumen area, nuclei number and the cell size was indirectly calculated. Data were analyzed using general linear model and correlation procedures of SAS®. Results Efficient beef steers (low-RFI) have a greater cellularity (indicated by nuclei number) in the small intestinal crypts, both in duodenum and ileum, than less efficient beef steers (high-RFI) (P < 0.05). The mean values for the nuclei number of the low-RFI and high-RFI groups were 33.16 and 30.30 in the duodenum and 37.21 and 33.65 in the ileum, respectively. The average size of the cells did not differ between feed efficiency groups in both segments (P ≥ 0.10). A trend was observed (P ≤ 0.10) for greater crypt area and crypt perimeter in the ileum for cattle with improved feed efficiency. 
Conclusion Improved feed efficiency is associated with greater cellularity and no differences in average cell size in the crypts of the small intestine in the bovine. These observations likely imply an increase in the energy demand of the small intestine despite the more desirable feed efficiency. PMID:23379622

  15. Efficient coupling of starlight into single mode photonics using Adaptive Injection (AI)

    NASA Astrophysics Data System (ADS)

    Norris, Barnaby; Cvetojevic, Nick; Gross, Simon; Arriola, Alexander; Tuthill, Peter; Lawrence, Jon; Richards, Samuel; Goodwin, Michael; Zheng, Jessica

    2016-08-01

    Using single-mode fibres in astronomy enables revolutionary techniques including single-mode interferometry and spectroscopy. However, injection of seeing-limited starlight into single-mode photonics is extremely difficult. One solution is Adaptive Injection (AI). The telescope pupil is segmented into a number of smaller subapertures, each of size r0, such that seeing can be approximated as a single tip/tilt/piston term for each subaperture; the light is then injected into a separate fibre via a facet of a segmented MEMS deformable mirror. The injection problem is thus reduced to a set of individual tip/tilt loops, resulting in high overall coupling efficiency.

  16. Patch forest: a hybrid framework of random forest and patch-based segmentation

    NASA Astrophysics Data System (ADS)

    Xie, Zhongliu; Gillies, Duncan

    2016-03-01

    The development of an accurate, robust and fast segmentation algorithm has long been a research focus in medical computer vision. State-of-the-art practices often involve non-rigidly registering a target image with a set of training atlases for label propagation over the target space to perform segmentation, a.k.a. multi-atlas label propagation (MALP). In recent years, the patch-based segmentation (PBS) framework has gained wide attention due to its advantage of relaxing the strict voxel-to-voxel correspondence to a series of pair-wise patch comparisons for contextual pattern matching. Despite a high accuracy reported in many scenarios, computational efficiency has consistently been a major obstacle for both approaches. Inspired by recent work on random forest, in this paper we propose a patch forest approach, which by equipping the conventional PBS with a fast patch search engine, is able to boost segmentation speed significantly while retaining an equal level of accuracy. In addition, a fast forest training mechanism is also proposed, with the use of a dynamic grid framework to efficiently approximate data compactness computation and a 3D integral image technique for fast box feature retrieval.

  17. Prostate segmentation: an efficient convex optimization approach with axial symmetry using 3-D TRUS and MR images.

    PubMed

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2014-04-01

    We propose a novel global optimization-based approach to segmentation of 3-D prostate transrectal ultrasound (TRUS) and T2 weighted magnetic resonance (MR) images, enforcing inherent axial symmetry of prostate shapes to simultaneously adjust a series of 2-D slice-wise segmentations in a "global" 3-D sense. We show that the introduced challenging combinatorial optimization problem can be solved globally and exactly by means of convex relaxation. In this regard, we propose a novel coherent continuous max-flow model (CCMFM), which derives a new and efficient duality-based algorithm, leading to a GPU-based implementation to achieve high computational speeds. Experiments with 25 3-D TRUS images and 30 3-D T2w MR images from our dataset, and 50 3-D T2w MR images from a public dataset, demonstrate that the proposed approach can segment a 3-D prostate TRUS/MR image within 5-6 s including 4-5 s for initialization, yielding a mean Dice similarity coefficient of 93.2%±2.0% for 3-D TRUS images and 88.5%±3.5% for 3-D MR images. The proposed method also yields relatively low intra- and inter-observer variability introduced by user manual initialization, suggesting a high reproducibility, independent of observers.
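The Dice similarity coefficient reported above is a standard overlap measure between a computed segmentation A and a reference segmentation B, DSC = 2|A∩B|/(|A| + |B|); a minimal sketch on binary masks (the arrays are illustrative, not the study's data):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks.
    Returns 1.0 for identical masks, 0.0 for disjoint non-empty masks;
    two empty masks are treated as a perfect match by convention."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0

# Illustrative 1-D masks: one overlapping voxel out of two in each mask.
dsc = dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0])
```

The same formula applies unchanged to 3-D voxel masks such as the TRUS/MR volumes in the study, since NumPy reduces over all elements regardless of shape.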

  18. A study of reconstruction accuracy for a cardiac SPECT system with multi-segmental collimation

    NASA Astrophysics Data System (ADS)

    Yu, D.-C.; Chang, W.; Pan, T.-S.

    1997-06-01

    To improve the geometric efficiency of cardiac SPECT imaging, the authors previously proposed to use a multi-segmental collimation with a cylindrical geometry. The proposed collimator consists of multiple parallel-hole collimators with most of the segments directed toward a small central region, where the patient's heart should be positioned. This technique provides a significantly increased detection efficiency for the central region, but at the expense of reduced efficiency for the surrounding region. The authors have used computer simulations to evaluate the implication of this technique on the accuracy of the reconstructed cardiac images. Two imaging situations were simulated: 1) the heart well placed inside the central region, and 2) the heart shifted and partially outside the central region. A neighboring high-uptake liver was simulated for both imaging situations. The images were reconstructed and corrected for attenuation with ML-EM and OS-EM methods using a complete attenuation map. The results indicate that errors caused by projection truncation are not significant and are not strongly dependent on the activity of the liver when the heart is well positioned within the central region. When the heart is partially outside the central region, hybrid emission data (a combination of high-count projections from the central region and low-count projections from the background region) can be used to restore the activity of the truncated section of the myocardium. However, the variance of the image in the section of the myocardium outside the central region is increased by 2-3 times when 10% of the collimator segments are used to image the background region.

  19. Pulmonary parenchyma segmentation in thin CT image sequences with spectral clustering and geodesic active contour model based on similarity

    NASA Astrophysics Data System (ADS)

    He, Nana; Zhang, Xiaolong; Zhao, Juanjuan; Zhao, Huilan; Qiang, Yan

    2017-07-01

    While the popular thin-layer scanning technology of spiral CT has helped to improve diagnoses of lung diseases, the large volumes of scanning images produced by the technology also dramatically increase the load on physicians in lesion detection. Computer-aided diagnosis techniques such as lesion segmentation in thin CT sequences have been developed to address this issue, but it remains a challenge to achieve high segmentation efficiency and accuracy without much human manual intervention. In this paper, we present our research on automated segmentation of lung parenchyma with an improved geodesic active contour model based on similarity (GACBS). Combining a spectral clustering algorithm based on Nystrom (SCN) with GACBS, this algorithm first extracts key image slices, then uses these slices to generate an initial contour of the pulmonary parenchyma of un-segmented slices with an interpolation algorithm, and finally segments the lung parenchyma of the un-segmented slices. Experimental results show that the segmentation results generated by our method are close to what manual segmentation can produce, with an average volume overlap ratio of 91.48%.

  20. Advanced Radioisotope Power Systems Segmented Thermoelectric Research

    NASA Technical Reports Server (NTRS)

    Caillat, Thierry

    2004-01-01

    Flight times are long, requiring power systems with more than 15 years of life. Mass is at an absolute premium, requiring power systems with high specific power and scalability. Solar irradiance drops by three orders of magnitude from Earth to Pluto, so nuclear power sources are preferable. The overall objective is to develop a low-mass, high-efficiency, low-cost Advanced Radioisotope Power System with double the specific power and efficiency of state-of-the-art Radioisotope Thermoelectric Generators (RTGs).

  1. A Broadband High Dynamic Range Digital Receiving System for Electromagnetic Signals

    DTIC Science & Technology

    2010-08-26

    dB. [0014] In Steinbrecher (United States Patent No. 7,250,920), an air interface metasurface is described that efficiently captures incident...broadband electromagnetic energy and provides a method for segmenting the total metasurface capture area into a plurality of smaller capture areas...such that the sum of the capture areas is equal to the total capture area of the metasurface. The segmentation of the electromagnetic capture area is

  2. The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Cook, Diane J.; Ketkar, Nikhil; Aksoy, Selim

    2008-01-01

    Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process: image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.

  3. Dual-stage deep learning framework for pigment epithelium detachment segmentation in polypoidal choroidal vasculopathy

    PubMed Central

    Xu, Yupeng; Yan, Ke; Kim, Jinman; Wang, Xiuying; Li, Changyang; Su, Li; Yu, Suqin; Xu, Xun; Feng, Dagan David

    2017-01-01

    Worldwide, polypoidal choroidal vasculopathy (PCV) is a common vision-threatening exudative maculopathy, and pigment epithelium detachment (PED) is an important clinical characteristic. Thus, precise and efficient PED segmentation is necessary for PCV clinical diagnosis and treatment. We propose a dual-stage learning framework via deep neural networks (DNN) for automated PED segmentation in PCV patients to avoid issues associated with manual PED segmentation (subjectivity, manual segmentation errors, and high time consumption). The optical coherence tomography scans of fifty patients were quantitatively evaluated with different algorithms and clinicians. Dual-stage DNN outperformed existing PED segmentation methods for all segmentation accuracy parameters, including true positive volume fraction (85.74 ± 8.69%), dice similarity coefficient (85.69 ± 8.08%), positive predictive value (86.02 ± 8.99%) and false positive volume fraction (0.38 ± 0.18%). Dual-stage DNN achieves accurate PED quantitative information, works with multiple types of PEDs and agrees well with manual delineation, suggesting that it is a potential automated assistant for PCV management. PMID:28966847

  4. Dual-stage deep learning framework for pigment epithelium detachment segmentation in polypoidal choroidal vasculopathy.

    PubMed

    Xu, Yupeng; Yan, Ke; Kim, Jinman; Wang, Xiuying; Li, Changyang; Su, Li; Yu, Suqin; Xu, Xun; Feng, Dagan David

    2017-09-01

    Worldwide, polypoidal choroidal vasculopathy (PCV) is a common vision-threatening exudative maculopathy, and pigment epithelium detachment (PED) is an important clinical characteristic. Thus, precise and efficient PED segmentation is necessary for PCV clinical diagnosis and treatment. We propose a dual-stage learning framework via deep neural networks (DNN) for automated PED segmentation in PCV patients to avoid issues associated with manual PED segmentation (subjectivity, manual segmentation errors, and high time consumption). The optical coherence tomography scans of fifty patients were quantitatively evaluated with different algorithms and clinicians. Dual-stage DNN outperformed existing PED segmentation methods for all segmentation accuracy parameters, including true positive volume fraction (85.74 ± 8.69%), dice similarity coefficient (85.69 ± 8.08%), positive predictive value (86.02 ± 8.99%) and false positive volume fraction (0.38 ± 0.18%). Dual-stage DNN achieves accurate PED quantitative information, works with multiple types of PEDs and agrees well with manual delineation, suggesting that it is a potential automated assistant for PCV management.

  5. Object segmentation using graph cuts and active contours in a pyramidal framework

    NASA Astrophysics Data System (ADS)

    Subudhi, Priyambada; Mukhopadhyay, Susanta

    2018-03-01

    Graph cuts and active contours are two very popular interactive object segmentation techniques in the field of computer vision and image processing. However, both approaches have well-known limitations. Graph cut methods perform efficiently, giving a globally optimal segmentation result for smaller images. For larger images, however, huge graphs need to be constructed, which not only takes an unacceptable amount of memory but also greatly increases the time required for segmentation. In the case of active contours, on the other hand, initial contour selection plays an important role in the accuracy of the segmentation, so a proper selection of the initial contour may improve both the complexity and the accuracy of the result. In this paper, we combine these two approaches to overcome their above-mentioned drawbacks and develop a fast technique for object segmentation. We use a pyramidal framework and apply the mincut/maxflow algorithm on the lowest-resolution image with the fewest seed points possible, which is very fast due to the smaller size of the image. The obtained segmentation contour is then super-sampled and used as the initial contour for the next higher-resolution image. As this initial contour is very close to the actual contour, fewer iterations are required for the contour to converge. The process is repeated for all the higher-resolution images, and experimental results show that our approach is faster as well as more memory efficient compared to either graph-cut or active-contour segmentation alone.

  6. Optimized multisectioned acoustic liners

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.

    1979-01-01

    New calculations show that segmenting is most efficient at high frequencies with relatively long duct lengths where the attenuation is low for both uniform and segmented liners. Statistical considerations indicate little advantage in using optimized liners with more than two segments while the bandwidth of an optimized two-segment liner is shown to be nearly equal to that of a uniform liner. Multielement liner calculations show a large degradation in performance due to changes in assumed input modal structure. Computer programs are used to generate theoretical attenuations for a number of liner configurations for liners in a rectangular duct with no mean flow. Overall, the use of optimized multisectioned liners fails to offer sufficient advantage over a uniform liner to warrant their use except in low frequency single mode application.

  7. Automatic temporal segment detection via bilateral long short-term memory recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun; Li, Liandong

    2017-03-01

    Constrained by the physiology, the temporal factors associated with human behavior, irrespective of facial movement or body gesture, are described by four phases: neutral, onset, apex, and offset. Although they may benefit related recognition tasks, it is not easy to accurately detect such temporal segments. An automatic temporal segment detection framework using bilateral long short-term memory recurrent neural networks (BLSTM-RNN) to learn high-level temporal-spatial features, which synthesizes the local and global temporal-spatial information more efficiently, is presented. The framework is evaluated in detail over the face and body database (FABO). The comparison shows that the proposed framework outperforms state-of-the-art methods for solving the problem of temporal segment detection.

  8. Timing Embryo Segmentation: Dynamics and Regulatory Mechanisms of the Vertebrate Segmentation Clock

    PubMed Central

    Resende, Tatiana P.; Andrade, Raquel P.; Palmeirim, Isabel

    2014-01-01

    All vertebrate species present a segmented body, easily observed in the vertebral column and its associated components, which provides a high degree of motility to the adult body and efficient protection of the internal organs. The sequential formation of the segmented precursors of the vertebral column during embryonic development, the somites, is governed by an oscillating genetic network, the somitogenesis molecular clock. Herein, we provide an overview of the molecular clock operating during somite formation and its underlying molecular regulatory mechanisms. Human congenital vertebral malformations have been associated with perturbations in these oscillatory mechanisms. Thus, a better comprehension of the molecular mechanisms regulating somite formation is required in order to fully understand the origin of human skeletal malformations. PMID:24895605

  9. Solar harvesting by a heterostructured cell with built-in variable width quantum wells

    NASA Astrophysics Data System (ADS)

    Brooks, W.; Wang, H.; Mil'shtein, S.

    2018-02-01

    We propose cascaded heterostructured p-i-n solar cells, where inside the i-region is a set of Quantum Wells (QWs) with variable thicknesses to enhance absorption of different photonic energies and provide quick relaxation for high-energy carriers. Our p-i-n heterostructure carries top p-type and bottom n-type 11.3 Å thick AlAs layers, which are doped with acceptor and donor densities up to 10^19/cm^3. The intrinsic region is divided into 10 segments where each segment carries ten QWs of the same width, and the width of the QWs in each subsequent segment gradually increases. The top segment consists of 10 QWs with widths of 56.5 Å, followed by a segment with 10 wider QWs with widths of 84.75 Å, followed by increasing QW widths until the last segment has 10 QWs with widths of 565 Å, bringing the total number of QWs to 100. The QW wall height is controlled by alternating AlAs and GaAs layers, where the AlAs layers are all 11.3 Å thick, throughout the entire intrinsic region. The configuration of variable-width QWs prescribes sets of energy levels which are suitable for absorption of a wide range of photon energies and will dissipate high electron-hole energies rapidly, reducing the heat load on the solar cell. We expect that the heating of the solar cell will be reduced by 8-11%, enhancing efficiency. The efficiency of the designed solar cell is 43.71%, the Fill Factor is 0.86, the density of short-circuit current (ISC) will not exceed 338 A/m^2, and the open-circuit voltage (VOC) is 1.51 V.
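The reason narrower wells absorb higher photon energies can be seen from the textbook infinite-square-well scaling E_n = n²h²/(8 m* L²); a hedged sketch (the infinite-barrier approximation and the GaAs-like effective mass are simplifications for illustration, not the paper's actual band-structure model):

```python
import math

H = 6.62607015e-34      # Planck constant (J*s)
M_E = 9.1093837015e-31  # electron rest mass (kg)
EV = 1.602176634e-19    # joules per electronvolt

def infinite_well_level(n, width_m, m_eff=0.067 * M_E):
    """Energy level E_n = n^2 h^2 / (8 m* L^2) of an infinite square well, in eV.
    m_eff defaults to a GaAs-like electron effective mass (an assumption)."""
    return (n**2 * H**2) / (8.0 * m_eff * width_m**2) / EV

# Ground states for the narrowest (56.5 Angstrom) and widest (565 Angstrom) wells:
e_narrow = infinite_well_level(1, 56.5e-10)
e_wide = infinite_well_level(1, 565e-10)
```

Since the confinement energy scales as 1/L², the tenfold width range across the ten segments spreads the ground-state levels over two orders of magnitude, which is the mechanism behind absorbing a wide range of photon energies.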

  10. Measured and simulated performance of Compton-suppressed TIGRESS HPGe clover detectors

    NASA Astrophysics Data System (ADS)

    Schumaker, M. A.; Hackman, G.; Pearson, C. J.; Svensson, C. E.; Andreoiu, C.; Andreyev, A.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D.; Boston, A. J.; Chakrawarthy, R. S.; Churchman, R.; Drake, T. E.; Finlay, P.; Garrett, P. E.; Grinyer, G. F.; Hyland, B.; Jones, B.; Maharaj, R.; Morton, A. C.; Phillips, A. A.; Sarazin, F.; Scraggs, H. C.; Smith, M. B.; Valiente-Dobón, J. J.; Waddington, J. C.; Watters, L. M.

    2007-01-01

    Tests of the performance of a 32-fold segmented HPGe clover detector coupled to a 20-fold segmented Compton-suppression shield, which form a prototype element of the TRIUMF-ISAC Gamma-Ray Escape-Suppressed Spectrometer (TIGRESS), have been made. Peak-to-total ratios and relative efficiencies have been measured for a variety of γ-ray energies. These measurements were used to validate a GEANT4 simulation of the TIGRESS detectors, which was then used to create a simulation of the full 12-detector array. Predictions of the expected performance of TIGRESS are presented. These predictions indicate that TIGRESS will be capable, for single 1 MeV γ rays, of absolute detection efficiencies of 17% and 9.4%, and peak-to-total ratios of 54% and 61% for the "high-efficiency" and "optimized peak-to-total" configurations of the array, respectively.

  11. TIGRESS highly-segmented high-purity germanium clover detector

    NASA Astrophysics Data System (ADS)

    Scraggs, H. C.; Pearson, C. J.; Hackman, G.; Smith, M. B.; Austin, R. A. E.; Ball, G. C.; Boston, A. J.; Bricault, P.; Chakrawarthy, R. S.; Churchman, R.; Cowan, N.; Cronkhite, G.; Cunningham, E. S.; Drake, T. E.; Finlay, P.; Garrett, P. E.; Grinyer, G. F.; Hyland, B.; Jones, B.; Leslie, J. R.; Martin, J.-P.; Morris, D.; Morton, A. C.; Phillips, A. A.; Sarazin, F.; Schumaker, M. A.; Svensson, C. E.; Valiente-Dobón, J. J.; Waddington, J. C.; Watters, L. M.; Zimmerman, L.

    2005-05-01

    The TRIUMF-ISAC Gamma-Ray Escape-Suppressed Spectrometer (TIGRESS) will consist of twelve units of four high-purity germanium (HPGe) crystals in a common cryostat. The outer contacts of each crystal will be divided into four quadrants and two lateral segments for a total of eight outer contacts. The performance of a prototype HPGe four-crystal unit has been investigated. Integrated noise spectra for all contacts were measured. Energy resolutions, relative efficiencies for both individual crystals and for the entire unit, and peak-to-total ratios were measured with point-like sources. Position-dependent performance was measured by moving a collimated source across the face of the detector.

  12. Efficient Algorithms for Segmentation of Item-Set Time Series

    NASA Astrophysics Data System (ADS)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
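The dynamic-programming scheme outlined above can be sketched as follows. This is an illustrative O(k·n²) version using a hypothetical "union" measure function (the segment's item set is the union of its time points' item sets, and the segment difference counts items of the union missing at each point); the paper's actual measure functions and its more efficient segment-difference algorithms may differ.

```python
def segment_diff(sets, i, j):
    """Segment difference for sets[i:j] under a (hypothetical) union
    measure: the segment's item set is the union of its time points,
    and the difference counts items of the union absent at each point."""
    union = set().union(*sets[i:j])
    return sum(len(union - s) for s in sets[i:j])

def optimal_segmentation(sets, k):
    """Dynamic program: minimal total segment difference using exactly
    k segments; returns (cost, sorted list of segment end indices)."""
    n = len(sets)
    INF = float("inf")
    # cost[m][j] = best cost covering sets[:j] with m segments
    cost = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    cost[0][0] = 0
    for m in range(1, k + 1):
        for j in range(m, n + 1):
            for i in range(m - 1, j):
                c = cost[m - 1][i] + segment_diff(sets, i, j)
                if c < cost[m][j]:
                    cost[m][j], back[m][j] = c, i
    # recover segment boundaries by walking the back-pointers
    bounds, j = [], n
    for m in range(k, 0, -1):
        bounds.append(j)
        j = back[m][j]
    return cost[k][n], sorted(bounds)

# toy item-set time series: two regimes, {a,b}-like then {c,d}-like
data = [{"a"}, {"a"}, {"a", "b"}, {"c"}, {"c", "d"}, {"c"}]
cost, bounds = optimal_segmentation(data, 2)
print(cost, bounds)  # splits the series between the two regimes
```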

  13. Efficient segmentation of 3D fluoroscopic datasets from mobile C-arm

    NASA Astrophysics Data System (ADS)

    Styner, Martin A.; Talib, Haydar; Singh, Digvijay; Nolte, Lutz-Peter

    2004-05-01

    The emerging mobile fluoroscopic 3D technology linked with a navigation system combines the advantages of CT-based and C-arm-based navigation. The intra-operative, automatic segmentation of 3D fluoroscopy datasets enables the combined visualization of surgical instruments and anatomical structures for enhanced planning, surgical eye-navigation and landmark digitization. We performed a thorough evaluation of several segmentation algorithms using a large set of data from different anatomical regions and man-made phantom objects. The analyzed segmentation methods include automatic thresholding, morphological operations, an adapted region growing method and an implicit 3D geodesic snake method. In regard to computational efficiency, all methods performed within acceptable limits on a standard Desktop PC (30sec-5min). In general, the best results were obtained with datasets from long bones, followed by extremities. The segmentations of spine, pelvis and shoulder datasets were generally of poorer quality. As expected, the threshold-based methods produced the worst results. The combined thresholding and morphological operations methods were considered appropriate for a smaller set of clean images. The region growing method performed generally much better in regard to computational efficiency and segmentation correctness, especially for datasets of joints, and lumbar and cervical spine regions. The less efficient implicit snake method was able to additionally remove wrongly segmented skin tissue regions. This study presents a step towards efficient intra-operative segmentation of 3D fluoroscopy datasets, but there is room for improvement. Next, we plan to study model-based approaches for datasets from the knee and hip joint region, which would be thenceforth applied to all anatomical regions in our continuing development of an ideal segmentation procedure for 3D fluoroscopic images.

  14. Semi-automatic segmentation of brain tumors using population and individual information.

    PubMed

    Wu, Yao; Yang, Wei; Jiang, Jun; Li, Shuanqian; Feng, Qianjin; Chen, Wufan

    2013-08-01

    Efficient segmentation of tumors in medical images is of great practical importance in early diagnosis and radiation planning. This paper proposes a novel semi-automatic segmentation method based on population and individual statistical information to segment brain tumors in magnetic resonance (MR) images. First, high-dimensional image features are extracted. Neighborhood components analysis is used to learn two optimal distance metrics, which contain population and patient-specific information, respectively. The probability of each pixel belonging to the foreground (tumor) or the background is estimated by a k-nearest neighbor classifier under the learned optimal distance metrics. A cost function for segmentation is constructed from these probabilities and optimized using graph cuts. Finally, some morphological operations are performed to improve the achieved segmentation results. Our dataset consists of 137 brain MR images, including 68 for training and 69 for testing. The proposed method overcomes segmentation difficulties caused by the uneven gray-level distribution of the tumors and can achieve satisfactory results even when the tumors have fuzzy edges. Experimental results demonstrate that the proposed method is robust for brain tumor segmentation.
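The k-nearest-neighbor probability step can be sketched as follows. The metric M stands in for a metric learned by neighborhood components analysis; the function name and the synthetic features are illustrative, and the graph-cut optimization and morphological post-processing are omitted.

```python
import numpy as np

def knn_foreground_prob(train_feats, train_labels, test_feats, k=5, M=None):
    """Estimate P(foreground) for each test feature as the fraction of
    its k nearest training samples labeled 1, under a Mahalanobis-style
    metric M (identity = plain Euclidean). In the paper such metrics
    are learned with neighborhood components analysis."""
    if M is None:
        M = np.eye(train_feats.shape[1])
    diff = test_feats[:, None, :] - train_feats[None, :, :]
    d2 = np.einsum("tnd,de,tne->tn", diff, M, diff)  # squared distances
    nn = np.argsort(d2, axis=1)[:, :k]               # k nearest neighbors
    return train_labels[nn].mean(axis=1)

rng = np.random.default_rng(0)
fg = rng.normal(3.0, 0.5, (20, 2))   # synthetic "tumor" features
bg = rng.normal(0.0, 0.5, (20, 2))   # synthetic background features
X = np.vstack([fg, bg])
y = np.array([1] * 20 + [0] * 20)
probs = knn_foreground_prob(X, y, np.array([[3.0, 3.0], [0.0, 0.0]]))
print(probs)  # high probability for the first query, low for the second
```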

  15. TH-CD-207B-06: Swank Factor of Segmented Scintillators in Multi-Slice CT Detectors: Pulse Height Spectra and Light Escape

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howansky, A; Peng, B; Lubinsky, A

    Purpose: Pulse height spectra (PHS) have been used to determine the Swank factor of a scintillator by measuring fluctuations in its light output per x-ray interaction. The Swank factor and x-ray quantum efficiency of a scintillator define the upper limit to its imaging performance, i.e. DQE(0). The Swank factor below the K-edge is dominated by optical properties, i.e. variations in light escape efficiency from different depths of interaction, denoted e(z). These variations can be optimized to improve tradeoffs in x-ray absorption, light yield, and spatial resolution. This work develops a quantitative model for interpreting measured PHS and estimating e(z) on an absolute scale. The method is used to investigate segmented ceramic GOS scintillators used in multi-slice CT detectors. Methods: PHS of a ceramic GOS plate (1 mm thickness) and segmented GOS array (1.4 mm thick) were measured at 46 keV. Signal and noise propagation through x-ray conversion gain, light escape, detection by a photomultiplier tube and dynode amplification were modeled using a cascade of stochastic gain stages. PHS were calculated with these expressions and compared to measurements. Light escape parameters were varied until modeled PHS agreed with measurements. The resulting estimates of e(z) were used to calculate PHS without measurement noise to determine the inherent Swank factor. Results: The variation in e(z) was 67.2–89.7% in the plate and 40.2–70.8% in the segmented sample, corresponding to conversion gains of 28.6–38.1 keV⁻¹ and 17.1–30.1 keV⁻¹, respectively. The inherent Swank factors of the plate and segmented sample were 0.99 and 0.95, respectively. Conclusion: The high light escape efficiency in the ceramic GOS samples yields high Swank factors and DQE(0) in CT applications.
The PHS model allows the intrinsic optical properties of scintillators to be deduced from PHS measurements, thus it provides new insights for evaluating the imaging performance of segmented ceramic GOS scintillators.

  16. Classification with an edge: Improving semantic image segmentation with boundary detection

    NASA Astrophysics Data System (ADS)

    Marmanis, D.; Schindler, K.; Wegner, J. D.; Galliani, S.; Datcu, M.; Stilla, U.

    2018-01-01

    We present an end-to-end trainable deep convolutional neural network (DCNN) for semantic segmentation with built-in awareness of semantically meaningful boundaries. Semantic segmentation is a fundamental remote sensing task, and most state-of-the-art methods rely on DCNNs as their workhorse. A major reason for their success is that deep networks learn to accumulate contextual information over very large receptive fields. However, this success comes at a cost, since the associated loss of effective spatial resolution washes out high-frequency details and leads to blurry object boundaries. Here, we propose to counter this effect by combining semantic segmentation with semantically informed edge detection, thus making class boundaries explicit in the model. First, we construct a comparatively simple, memory-efficient model by adding boundary detection to the SEGNET encoder-decoder architecture. Second, we also include boundary detection in FCN-type models and set up a high-end classifier ensemble. We show that boundary detection significantly improves semantic segmentation with CNNs in an end-to-end training scheme. Our best model achieves >90% overall accuracy on the ISPRS Vaihingen benchmark.

  17. Simultaneous 3D segmentation of three bone compartments on high resolution knee MR images from osteoarthritis initiative (OAI) using graph cuts

    NASA Astrophysics Data System (ADS)

    Shim, Hackjoon; Kwoh, C. Kent; Yun, Il Dong; Lee, Sang Uk; Bae, Kyongtae

    2009-02-01

    Osteoarthritis (OA) is associated with degradation of cartilage and related changes in the underlying bone. Quantitative measurement of those changes from MR images is an important biomarker to study the progression of OA, and it requires a reliable segmentation of knee bone and cartilage. As the most popular method, manual segmentation of knee joint structures by boundary delineation is highly laborious and subject to user variation. To overcome these difficulties, we have developed a semi-automated method for segmentation of knee bones, which consisted of two steps: placement of seeds and computation of segmentation. In the first step, seeds were placed by the user on a number of slices and then were propagated automatically to neighboring images. The seed placement could be performed on any of the sagittal, coronal, and axial planes. The second step, computation of segmentation, was based on a graph-cuts algorithm where the optimal segmentation is the one that minimizes a cost function, which integrated the seeds specified by the user and both the regional and boundary properties of the regions to be segmented. The algorithm also allows simultaneous segmentation of three compartments of the knee bone (femur, tibia, patella). Our method was tested on the knee MR images of six subjects from the osteoarthritis initiative (OAI). The segmentation processing time (mean ± SD) was 22 ± 4 min, which is much shorter than that of the manual boundary delineation method (typically several hours). With this improved efficiency, our segmentation method will facilitate the quantitative morphologic analysis of changes in knee bones associated with osteoarthritis.

  18. A Fast Method for the Segmentation of Synaptic Junctions and Mitochondria in Serial Electron Microscopic Images of the Brain.

    PubMed

    Márquez Neila, Pablo; Baumela, Luis; González-Soriano, Juncal; Rodríguez, Jose-Rodrigo; DeFelipe, Javier; Merchán-Pérez, Ángel

    2016-04-01

    Recent electron microscopy (EM) imaging techniques permit the automatic acquisition of a large number of serial sections from brain samples. Manual segmentation of these images is tedious, time-consuming and requires a high degree of user expertise. Therefore, there is considerable interest in developing automatic segmentation methods. However, currently available methods are computationally demanding in terms of computer time and memory usage, and to work properly many of them require image stacks to be isotropic, that is, voxels must have the same size in the X, Y and Z axes. We present a method that works with anisotropic voxels and that is computationally efficient allowing the segmentation of large image stacks. Our approach involves anisotropy-aware regularization via conditional random field inference and surface smoothing techniques to improve the segmentation and visualization. We have focused on the segmentation of mitochondria and synaptic junctions in EM stacks from the cerebral cortex, and have compared the results to those obtained by other methods. Our method is faster than other methods with similar segmentation results. Our image regularization procedure introduces high-level knowledge about the structure of labels. We have also reduced memory requirements with the introduction of energy optimization in overlapping partitions, which permits the regularization of very large image stacks. Finally, the surface smoothing step improves the appearance of three-dimensional renderings of the segmented volumes.

  19. High-efficiency and low-background multi-segmented proportional gas counter for β-decay spectroscopy

    NASA Astrophysics Data System (ADS)

    Mukai, M.; Hirayama, Y.; Watanabe, Y. X.; Schury, P.; Jung, H. S.; Ahmed, M.; Haba, H.; Ishiyama, H.; Jeong, S. C.; Kakiguchi, Y.; Kimura, S.; Moon, J. Y.; Oyaizu, M.; Ozawa, A.; Park, J. H.; Ueno, H.; Wada, M.; Miyatake, H.

    2018-03-01

    A multi-segmented proportional gas counter (MSPGC) with high detection efficiency and low background event rate has been developed for β-decay spectroscopy. The MSPGC consists of two cylindrically aligned layers of 16 counters (32 counters in total). Each counter has a long active length and a small trapezoidal cross-section, and the total solid angle of the 32 counters is 80% of 4π. β-rays are distinguished from background events, including cosmic rays, by analyzing the hit patterns of the independent counters. The deduced intrinsic detection efficiency of each counter was almost 100%. The measured background event rate was 0.11 counts per second using a combination of veto counters for cosmic rays and lead block shields for background γ-rays. The MSPGC was applied to measure the β-decay half-lives of 198Ir and 199mPt. The evaluated half-lives of T1/2 = 9.8(7) s and 12.4(7) s for 198Ir and 199mPt, respectively, were in agreement with previously reported values. The estimated absolute detection efficiency of the MSPGC from GEANT4 simulations was consistent with the efficiency evaluated from the β-γ spectroscopy of 199Pt, saturating at approximately 60% for Qβ > 4 MeV.

  20. Compatibility of Segments of Thermoelectric Generators

    NASA Technical Reports Server (NTRS)

    Snyder, G. Jeffrey; Ursell, Tristan

    2009-01-01

    A method of calculating (usually for the purpose of maximizing) the power-conversion efficiency of a segmented thermoelectric generator is based on equations derived from the fundamental equations of thermoelectricity. Because it is directly traceable to first principles, the method provides physical explanations in addition to predictions of the phenomena involved in segmentation. In comparison with the finite-element method used heretofore to predict (without being able to explain) the behavior of a segmented thermoelectric generator, this method is much simpler to implement in practice: in particular, the efficiency of a segmented thermoelectric generator can be estimated by evaluating equations using only a hand-held calculator. In addition, the method provides for the determination of cascading ratios. The concept of cascading is illustrated in the figure, and the cascading ratio is defined in the figure caption. An important aspect of the method is its approach to the issue of compatibility among segments, in combination with the introduction of the concept of compatibility within a segment. Prior approaches involved the use of only averaged material properties. Two materials in direct contact could be examined for compatibility with each other, but there was no general framework for the analysis of compatibility. The present method establishes such a framework. The mathematical derivation of the method begins with the definition of the reduced efficiency of a thermoelectric generator as the ratio between (1) its thermal-to-electric power-conversion efficiency and (2) its Carnot efficiency (the maximum efficiency theoretically attainable, given its hot- and cold-side temperatures). The derivation involves calculation of the reduced efficiency of a model thermoelectric generator for which the hot-side temperature is only infinitesimally greater than the cold-side temperature.
The derivation includes consideration of the ratio (u) between the electric current and heat-conduction power and leads to the concept of compatibility factor (s) for a given thermoelectric material, defined as the value of u that maximizes the reduced efficiency of the aforementioned model thermoelectric generator.
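For reference, a sketch of the central quantities in this framework, written from the standard thermoelectric relations (the paper's exact notation and derivation may differ): the relative current density u, the compatibility factor s of a material with figure of merit zT and Seebeck coefficient α, and the maximum local reduced efficiency attained at u = s.

```latex
% relative current density: ratio of electric current density
% to conductive heat flux
u = \frac{J}{\kappa \nabla T}

% compatibility factor: the value of u that maximizes the local
% reduced efficiency of a material with figure of merit zT
s = \frac{\sqrt{1+zT}-1}{\alpha T}

% maximum local reduced efficiency, attained at u = s
\eta_r^{\max} = \frac{\sqrt{1+zT}-1}{\sqrt{1+zT}+1}
```

Segments (or materials within a segment) whose compatibility factors differ by more than roughly a factor of two cannot both operate near their maximum reduced efficiency at a common u, which is the quantitative content of the compatibility criterion.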

  1. A segmentation editing framework based on shape change statistics

    NASA Astrophysics Data System (ADS)

    Mostapha, Mahmoud; Vicory, Jared; Styner, Martin; Pizer, Stephen

    2017-02-01

    Segmentation is a key task in medical image analysis because its accuracy significantly affects successive steps. Automatic segmentation methods often produce inadequate segmentations, which require the user to manually edit the produced segmentation slice by slice. Because editing is time-consuming, an editing tool that enables the user to produce accurate segmentations by drawing only a sparse set of contours is needed. This paper describes such a framework as applied to a single object. Constrained by the additional information provided by the manually segmented contours, the proposed framework utilizes object shape statistics to transform the failed automatic segmentation into a more accurate version. Instead of modeling the object shape, the proposed framework utilizes shape change statistics generated to capture the object deformation from the failed automatic segmentation to its corresponding correct segmentation. An optimization procedure was used to minimize an energy function that consists of two terms: an external contour match term and an internal shape change regularity term. The high accuracy of the proposed segmentation editing approach was confirmed by testing it on a simulated data set based on 10 in-vivo infant magnetic resonance brain data sets using four similarity metrics. Segmentation results indicated that our method can provide efficient and adequately accurate segmentations (a Dice segmentation accuracy increase of 10%) with very sparse contours (only 10%), which is promising for greatly decreasing the work expected from the user.

  2. Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.

    PubMed

    Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence

    2012-12-01

    A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.

  3. Accelerated Gaussian mixture model and its application on image segmentation

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhui; Zhang, Yuanyuan; Ding, Yihua; Long, Chengjiang; Yuan, Zhiyong; Zhang, Dengyi

    2013-03-01

    The Gaussian mixture model (GMM) has been widely used for image segmentation in recent years due to its superior adaptability and simplicity of implementation. However, the traditional GMM has the disadvantage of high computational complexity. In this paper an accelerated GMM is designed, adopting the following approaches: establishing a lookup table for the Gaussian probability matrix to avoid repetitive probability calculations on all pixels; employing a blocking detection method on each block of pixels to further decrease the complexity; and changing the structure of the lookup table from 3D to 1D, with a simpler data type, to reduce the space requirement. The accelerated GMM is applied to image segmentation with the help of the Otsu method to decide the threshold value automatically. Our algorithm has been tested by segmenting flames and faces from a set of real pictures, and the experimental results prove its efficiency in segmentation precision and computational cost.
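The lookup-table idea and the Otsu thresholding step can be sketched as follows. This is a minimal illustration: the blocking detection and the 3D-to-1D table restructuring described in the abstract are omitted, and all names and parameter values are illustrative.

```python
import numpy as np

def gaussian_lut(means, variances, weights, levels=256):
    """Precompute the GMM density for every possible gray level, so
    segmentation needs only one table lookup per pixel instead of
    evaluating the mixture density at every pixel."""
    x = np.arange(levels, dtype=float)
    lut = np.zeros(levels)
    for w, m, v in zip(weights, means, variances):
        lut += w * np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)
    return lut

def otsu_threshold(image, levels=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist = np.bincount(image.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(levels))      # cumulative mean
    mu_t = mu[-1]                              # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))

# a synthetic two-population image stands in for a flame/background scene
img = np.concatenate([np.full(500, 40), np.full(500, 200)]).astype(np.uint8)
t = otsu_threshold(img)                  # lands between the two populations
lut = gaussian_lut([40.0, 200.0], [100.0, 100.0], [0.5, 0.5])
pixel_probs = lut[img]                   # one table lookup per pixel
```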

  4. Concepts and design of the CMS high granularity calorimeter Level-1 trigger

    NASA Astrophysics Data System (ADS)

    Sauvan, Jean-Baptiste; CMS Collaboration

    2017-11-01

    The CMS experiment has chosen a novel high granularity calorimeter for the forward region as part of its planned upgrade for the high luminosity LHC. The calorimeter will have a fine segmentation in both the transverse and longitudinal directions and will be the first such calorimeter specifically optimised for particle flow reconstruction to operate at a colliding beam experiment. The high granularity results in around six million readout channels in total and so presents a significant challenge in terms of data manipulation and processing for the trigger; the trigger data volumes will be an order of magnitude above those currently handled at CMS. In addition, the high luminosity will result in an average of 140 to 200 interactions per bunch crossing, giving a huge background rate in the forward region that needs to be efficiently reduced by the trigger algorithms. Efficient data reduction and reconstruction algorithms making use of the fine segmentation of the detector have been simulated and evaluated. Their trigger rates grow with luminosity significantly more slowly than would be expected with the current trigger system.

  5. A comparison study of atlas-based 3D cardiac MRI segmentation: global versus global and local transformations

    NASA Astrophysics Data System (ADS)

    Daryanani, Aditya; Dangi, Shusil; Ben-Zikri, Yehuda Kfir; Linte, Cristian A.

    2016-03-01

    Magnetic Resonance Imaging (MRI) is a standard-of-care imaging modality for cardiac function assessment and guidance of cardiac interventions thanks to its high image quality and lack of exposure to ionizing radiation. Cardiac health parameters such as left ventricular volume, ejection fraction, myocardial mass, thickness, and strain can be assessed by segmenting the heart from cardiac MRI images. Furthermore, the segmented pre-operative anatomical heart models can be used to precisely identify regions of interest to be treated during minimally invasive therapy. Hence, the use of accurate and computationally efficient segmentation techniques is critical, especially for intra-procedural guidance applications that rely on the peri-operative segmentation of subject-specific datasets without delaying the procedure workflow. Atlas-based segmentation incorporates prior knowledge of the anatomy of interest from expertly annotated image datasets. Typically, the ground truth atlas label is propagated to a test image using a combination of global and local registration. The high computational cost of non-rigid registration motivated us to obtain an initial segmentation using global transformations based on an atlas of the left ventricle from a population of patient MRI images and refine it using well developed technique based on graph cuts. Here we quantitatively compare the segmentations obtained from the global and global plus local atlases and refined using graph cut-based techniques with the expert segmentations according to several similarity metrics, including Dice correlation coefficient, Jaccard coefficient, Hausdorff distance, and Mean absolute distance error.

  6. The Dipole Segment Model for Axisymmetrical Elongated Asteroids

    NASA Astrophysics Data System (ADS)

    Zeng, Xiangyuan; Zhang, Yonglong; Yu, Yang; Liu, Xiangdong

    2018-02-01

    Various simplified models have been investigated as a way to understand the complex dynamical environment near irregular asteroids. A dipole segment model is explored in this paper, one that is composed of a massive straight segment and two point masses at the extremities of the segment. Given an explicitly simple form of the potential function that is associated with the dipole segment model, five topological cases are identified with different sets of system parameters. Locations, stabilities, and variation trends of the system equilibrium points are investigated in a parametric way. The exterior potential distribution of nearly axisymmetrical elongated asteroids is approximated by minimizing the acceleration error in a test zone. The acceleration error minimization process determines the parameters of the dipole segment. The near-Earth asteroid (8567) 1996 HW1 is chosen as an example to evaluate the effectiveness of the approximation method for the exterior potential distribution. The advantages of the dipole segment model over the classical dipole and the traditional segment are also discussed. Percent error of acceleration and the degree of approximation are illustrated by using the dipole segment model to approximate four more asteroids. The high efficiency of the simplified model over the polyhedron is clearly demonstrated by comparing the CPU time.
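A sketch of the model's potential, written from the classical closed-form potential of a homogeneous straight segment together with the two tip masses (the paper's exact parametrization and normalization may differ):

```latex
% potential of the dipole segment model at a field point P, where
% r_1, r_2 are the distances from P to the two endpoints of the
% segment of length l, lambda = M_s / l is the segment's linear
% density, and m_1, m_2 are the tip point masses
U(P) = G\lambda \ln\!\frac{r_1 + r_2 + l}{r_1 + r_2 - l}
       + \frac{G m_1}{r_1} + \frac{G m_2}{r_2}
```

Because U depends only on the two endpoint distances and three mass parameters, evaluating it is orders of magnitude cheaper than summing the polyhedron potential over thousands of facets, which is the efficiency advantage the abstract refers to.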

  7. Parallel and series fed microstrip array with high efficiency and low cross polarization

    NASA Technical Reports Server (NTRS)

    Huang, John (Inventor)

    1995-01-01

    A microstrip array antenna for a vertically polarized fan beam (approximately 2 deg x 50 deg) for C-band SAR applications, with a physical area of 1.7 m by 0.17 m, comprises two rows of patch elements and employs a parallel feed to the left- and right-half sections of the rows. Each section is divided into two segments that are fed in parallel, with the elements in each segment fed in series through matched transmission lines for high efficiency. The inboard section has half the number of patch elements of the outboard section, and the outboard sections, which have a tapered distribution with identical transmission line sections, are terminated with half-wavelength-long open-circuit stubs so that the remaining energy is reflected and radiated in phase. The elements of the two inboard segments of the two left- and right-half sections are provided with tapered transmission lines from element to element for uniform power distribution over the central third of the entire array antenna. The two rows of array elements are excited at opposite patch feed locations with opposite (180 deg difference) phases for reduced cross-polarization.

  8. Computed tomography landmark-based semi-automated mesh morphing and mapping techniques: generation of patient specific models of the human pelvis without segmentation.

    PubMed

    Salo, Zoryana; Beek, Maarten; Wright, David; Whyne, Cari Marisa

    2015-04-13

    Current methods for the development of pelvic finite element (FE) models generally are based upon specimen-specific computed tomography (CT) data. This approach has traditionally required segmentation of CT data sets, which is time consuming and necessitates high levels of user intervention due to the complex pelvic anatomy. The purpose of this research was to develop and assess CT landmark-based semi-automated mesh morphing and mapping techniques to aid the generation and mechanical analysis of specimen-specific FE models of the pelvis without the need for segmentation. A specimen-specific pelvic FE model (source) was created using traditional segmentation methods and morphed onto a CT scan of a different (target) pelvis using a landmark-based method. The morphed model was then refined through mesh mapping by moving the nodes to the bone boundary. A second target model was created using traditional segmentation techniques. CT intensity-based material properties were assigned to the morphed/mapped model and to the traditionally segmented target models. Models were analyzed to evaluate their geometric concurrency and strain patterns. Strains generated in a double-leg stance configuration were compared to experimental strain gauge data generated from the same target cadaver pelvis. CT landmark-based morphing and mapping techniques were efficiently applied to create a geometrically multifaceted specimen-specific pelvic FE model, which was similar to the traditionally segmented target model and better replicated the experimental strain results (R² = 0.873). This study has shown that mesh morphing and mapping represents an efficient, validated approach for pelvic FE model generation without the need for segmentation.

  9. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    NASA Astrophysics Data System (ADS)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods for the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data for which no ground truth exists, it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed at assessing the performance of an automatic CP segmentation algorithm are presented. The first is a univariate non-parametric method using a box-whisker plot. We first categorize the automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either successes or failures. We then design three groups of features from the image data of the nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method, supervised classification, was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using leave-one-out cross-validation. Results show that LR performs worst among the four classifiers while the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
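    The univariate box-whisker check amounts to flagging feature values outside the interquartile-range fences. A minimal sketch (the feature values below are made up, not from the 48-subject DTI dataset):

```python
import numpy as np

def iqr_outliers(values, k=1.5):
    """Flag values outside the box-whisker fences [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return (values < lo) | (values > hi)

# Toy per-subject feature (e.g. a segmented-volume statistic); 200.0 is a planted failure.
feature = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 200.0, 10.0])
flags = iqr_outliers(feature)
```

Subjects whose flags are set would be reviewed and, if confirmed, excluded from analysis.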

  10. Stability of local secondary structure determines selectivity of viral RNA chaperones.

    PubMed

    Bravo, Jack P K; Borodavka, Alexander; Barth, Anders; Calabrese, Antonio N; Mojzes, Peter; Cockburn, Joseph J B; Lamb, Don C; Tuma, Roman

    2018-05-18

    To maintain genome integrity, segmented double-stranded RNA viruses of the Reoviridae family must accurately select and package a complete set of up to a dozen distinct genomic RNAs. It is thought that the high-fidelity segmented genome assembly involves multiple sequence-specific RNA-RNA interactions between single-stranded RNA segment precursors. These are mediated by virus-encoded non-structural proteins with RNA chaperone-like activities, such as rotavirus (RV) NSP2 and avian reovirus σNS. Here, we compared the abilities of NSP2 and σNS to mediate sequence-specific interactions between RV genomic segment precursors. Despite their similar activities, NSP2 successfully promotes inter-segment association, while σNS fails to do so. To understand the mechanisms underlying such selectivity in promoting inter-molecular duplex formation, we compared the RNA-binding and helix-unwinding activities of both proteins. We demonstrate that octameric NSP2 binds structured RNAs with high affinity, resulting in efficient intramolecular RNA helix disruption. Hexameric σNS oligomerizes into an octamer that binds two RNAs, yet it exhibits only limited RNA-unwinding activity compared to NSP2. Thus, the formation of inter-segment RNA-RNA interactions is governed by both the helix-unwinding capacity of the chaperones and the stability of the RNA structure. We propose that this protein-mediated RNA selection mechanism may underpin the high-fidelity assembly of multi-segmented RNA genomes in Reoviridae.

  11. PRESEE: An MDL/MML Algorithm to Time-Series Stream Segmenting

    PubMed Central

    Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series streams are among the most common data types in the data mining field, prevalent in areas such as the stock market, ecology, and medical care. Segmentation is a key step in accelerating the processing of time-series stream mining. Previous segmentation algorithms focused mainly on improving precision while paying little attention to efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmentation. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which allow it to segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with a state-of-the-art algorithm. The empirical results show that PRESEE is very efficient on real-time stream datasets, improving segmentation speed nearly tenfold. The novelty of the algorithm is further demonstrated by applying PRESEE to segment real-time stream datasets from the ChinaFLUX sensor network. PMID:23956693

  12. PRESEE: an MDL/MML algorithm to time-series stream segmenting.

    PubMed

    Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series streams are among the most common data types in the data mining field, prevalent in areas such as the stock market, ecology, and medical care. Segmentation is a key step in accelerating the processing of time-series stream mining. Previous segmentation algorithms focused mainly on improving precision while paying little attention to efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmentation. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which allow it to segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with a state-of-the-art algorithm. The empirical results show that PRESEE is very efficient on real-time stream datasets, improving segmentation speed nearly tenfold. The novelty of the algorithm is further demonstrated by applying PRESEE to segment real-time stream datasets from the ChinaFLUX sensor network.
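    A toy sketch of the MDL-flavoured idea (not PRESEE itself): score each segment with a two-part cost, a fixed model cost plus a residual term, and merge adjacent segments bottom-up while the total description cost drops. The cost constants and data below are purely illustrative.

```python
import numpy as np

def mdl_cost(seg, bits_per_model=8.0):
    """Two-part cost: fixed model bits plus a residual (squared-error) term."""
    return bits_per_model + np.sum((seg - seg.mean()) ** 2)

def segment_stream(x, bits_per_model=8.0):
    """Bottom-up merging: start from single-point segments and greedily merge
    the adjacent pair with the largest cost saving until no merge helps."""
    bounds = list(range(len(x) + 1))               # segment boundary indices
    def cost(i, j):                                # cost of x[bounds[i]:bounds[j]]
        return mdl_cost(x[bounds[i]:bounds[j]], bits_per_model)
    improved = True
    while improved and len(bounds) > 2:
        improved = False
        best, best_gain = None, 0.0
        for k in range(1, len(bounds) - 1):
            gain = cost(k - 1, k) + cost(k, k + 1) - cost(k - 1, k + 1)
            if gain > best_gain:
                best, best_gain = k, gain
        if best is not None:
            del bounds[best]                       # merge the two segments
            improved = True
    return bounds

# Toy stream with two flat regimes; one interior boundary should survive.
x = np.array([1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0])
b = segment_stream(x)
```

This quadratic-scan version is deliberately naive; a streaming algorithm like PRESEE would maintain the segmentation incrementally as points arrive.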

  13. Energy efficient engine sector combustor rig test program

    NASA Technical Reports Server (NTRS)

    Dubiel, D. J.; Greene, W.; Sundt, C. V.; Tanrikut, S.; Zeisser, M. H.

    1981-01-01

    Under the NASA-sponsored Energy Efficient Engine program, Pratt & Whitney Aircraft has successfully completed a comprehensive combustor rig test using a 90-degree sector of an advanced two-stage combustor with a segmented liner. Initial testing utilized a combustor with a conventional louvered liner and demonstrated that the Energy Efficient Engine two-stage combustor configuration is a viable system for controlling exhaust emissions, with the capability to meet all aerothermal performance goals. Goals for both carbon monoxide and unburned hydrocarbons were surpassed and the goal for oxides of nitrogen was closely approached. In another series of tests, an advanced segmented liner configuration with a unique counter-parallel FINWALL cooling system was evaluated at engine sea level takeoff pressure and temperature levels. These tests verified the structural integrity of this liner design. Overall, the results from the program have provided a high level of confidence to proceed with the scheduled Combustor Component Rig Test Program.

  14. Highly Efficient Vector-Inversion Pulse Generators

    NASA Technical Reports Server (NTRS)

    Rose, Franklin

    2004-01-01

    Improved transmission-line pulse generators of the vector-inversion type are being developed as lightweight sources of pulsed high voltage for diverse applications, including spacecraft thrusters, portable x-ray imaging systems, impulse radar systems, and corona-discharge systems for sterilizing gases. In this development, more than the customary attention is paid to principles of operation and details of construction so as to maximize the efficiency of the pulse-generation process while minimizing the sizes of components. An important element of this approach is segmenting a pulse generator in such a manner that the electric field in each segment is always below the threshold for electrical breakdown. One design of particular interest, a complete description of which was not available at the time of writing this article, involves two parallel-plate transmission lines that are wound on a mandrel, share a common conductor, and are switched in such a manner that the pulse generator is divided into a "fast" and a "slow" section. A major innovation in this design is the addition of ferrite to the "slow" section to reduce the size of the mandrel needed for a given efficiency.

  15. Automated brain tumor segmentation in magnetic resonance imaging based on sliding-window technique and symmetry analysis.

    PubMed

    Lian, Yanyun; Song, Zhijian

    2014-01-01

    Brain tumor segmentation from magnetic resonance imaging (MRI) is an important step toward surgical planning, treatment planning, and monitoring of therapy. However, the manual tumor segmentation commonly used in the clinic is time-consuming and challenging, and none of the existing automated methods is sufficiently robust, reliable, and efficient for clinical application. An accurate and automated tumor segmentation method has been developed that provides reproducible and objective results close to manual segmentation. Based on the symmetry of the human brain, the method employs a sliding-window technique and the correlation coefficient to locate the tumor. First, the image to be segmented is normalized, rotated, denoised, and bisected. Next, vertical and then horizontal sliding windows are applied: two windows, one in each half of the brain image, move simultaneously pixel by pixel while the correlation coefficient between them is computed. The window pair with the minimal correlation coefficient is selected; the window with the higher average gray value marks the tumor location, and its brightest pixel serves as the tumor locating point. Finally, the segmentation threshold is set to the average gray value of the pixels in a square of side length 10 pixels centered at the locating point, and threshold segmentation and morphological operations are used to obtain the final tumor region. The method was evaluated on 3D FSPGR brain MR images of 10 patients. The average ratio of correct location was 93.4% for 575 slices containing tumor, the average Dice similarity coefficient was 0.77 per scan, and the average time spent on one scan was 40 seconds. A fully automated, simple, and efficient segmentation method for brain tumors is proposed and is promising for future clinical use. The correlation coefficient is a new and effective feature for tumor location.
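    The symmetry idea — comparing mirrored windows in the two brain halves and flagging the least-correlated pair — can be sketched as follows. The window size, toy image, and asymmetric patch below are illustrative, not the paper's data or exact procedure.

```python
import numpy as np

def min_symmetry_corr(image, win=4):
    """Slide mirrored windows over the left and right halves of a slice and
    return the top-left corner (row, col) of the least-correlated pair."""
    h, w = image.shape
    left = image[:, : w // 2]
    right = image[:, w // 2 :][:, ::-1]            # mirror the right half
    best, best_pos = 2.0, None
    for r in range(h - win + 1):
        for c in range(left.shape[1] - win + 1):
            a = left[r : r + win, c : c + win].ravel()
            b = right[r : r + win, c : c + win].ravel()
            if a.std() == 0 or b.std() == 0:
                continue                           # avoid degenerate windows
            corr = np.corrcoef(a, b)[0, 1]
            if corr < best:
                best, best_pos = corr, (r, c)
    return best_pos, best

# Toy slice: a symmetric noisy background with a bright patch ("tumor") on one side.
rng = np.random.default_rng(0)
half = rng.normal(0.0, 1.0, (12, 8))
img = np.hstack([half, half[:, ::-1]])             # perfectly symmetric image
img[4:8, 2:6] += 10.0                              # asymmetric patch in the left half
pos, corr = min_symmetry_corr(img, win=4)
```

The least-correlated window lands on the asymmetric patch; in the paper, the brighter of the two windows then seeds the thresholding step.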

  16. Application of an enhanced fuzzy algorithm for MR brain tumor image segmentation

    NASA Astrophysics Data System (ADS)

    Hemanth, D. Jude; Vijila, C. Kezi Selva; Anitha, J.

    2010-02-01

    Image segmentation is one of the significant digital image processing techniques commonly used in the medical field. One specific application is tumor detection in abnormal Magnetic Resonance (MR) brain images. Fuzzy approaches are widely preferred for tumor segmentation, as they generally yield superior accuracy. But most fuzzy algorithms suffer from a slow convergence rate, which makes them practically infeasible. In this work, the application of a modified Fuzzy C-means (FCM) algorithm to tackle the convergence problem is explored in the context of brain image segmentation. This modified FCM algorithm employs the concept of quantization to improve the convergence rate while yielding excellent segmentation efficiency. The algorithm is tested on real abnormal MR brain images collected from radiologists. A comprehensive feature vector is extracted from these images and used for the segmentation technique. An extensive feature selection process is performed, which reduces the convergence time and improves the segmentation efficiency. After segmentation, the tumor portion is extracted from the segmented image. A comparative analysis in terms of segmentation efficiency and convergence rate is performed between the conventional FCM and the modified FCM. Experimental results show superior performance for the modified FCM algorithm. Thus, this work highlights the application of the modified algorithm for brain tumor detection in abnormal MR brain images.
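    For reference, a bare-bones conventional FCM update loop is shown below; the paper's quantization-based speedup is not reproduced here, and the 1-D intensity data are toy values.

```python
import numpy as np

def fcm(X, k=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means on 1-D data: alternate membership/centroid updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), k))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        C = (W.T @ X) / W.sum(axis=0)              # fuzzily weighted centroids
        D = np.abs(X[:, None] - C[None, :]) + 1e-12
        U = 1.0 / D ** (2 / (m - 1))               # standard FCM membership rule
        U /= U.sum(axis=1, keepdims=True)
    return C, U

# Toy intensities: two clear groups (e.g. background vs lesion voxels).
X = np.array([0.1, 0.2, 0.15, 0.9, 1.0, 0.95])
centers, U = fcm(X)
```

The quantization trick in the abstract reduces the number of distinct intensity values fed to this loop, which is what accelerates convergence.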

  17. Novel red phosphorescent polymers bearing both ambipolar and functionalized Ir(III) phosphorescent moieties for highly efficient organic light-emitting diodes.

    PubMed

    Zhao, Jiang; Lian, Meng; Yu, Yue; Yan, Xiaogang; Xu, Xianbin; Yang, Xiaolong; Zhou, Guijiang; Wu, Zhaoxin

    2015-01-01

    A series of novel red phosphorescent polymers is successfully developed through Suzuki cross-coupling among ambipolar units, functionalized Ir(III) phosphorescent blocks, and fluorene-based silane moieties. The photophysical and electrochemical investigations indicate not only highly efficient energy transfer from the organic segments to the phosphorescent units in the polymer backbone but also the ambipolar character of the copolymers. Benefiting from all these merits, the phosphorescent polymers can furnish organic light-emitting diodes (OLEDs) with exceptionally high electroluminescent (EL) efficiencies: a current efficiency (ηL) of 8.31 cd A⁻¹, an external quantum efficiency (ηext) of 16.07%, and a power efficiency (ηP) of 2.95 lm W⁻¹, representing the best electroluminescent performance yet achieved by red phosphorescent polymers. This work might represent a new pathway for the design and synthesis of highly efficient phosphorescent polymers. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Efficiency equations of the railgun

    NASA Astrophysics Data System (ADS)

    Sadedin, D. R.

    1984-03-01

    The feasibility of employing railguns in large-scale applications, such as space launching, will ultimately be determined by efficiency considerations. The present investigation is concerned with calculating the efficiencies of constant-current railguns. Elementary considerations are discussed, taking into account a simple condition for high efficiency, the magnetic field of the rails, and the acceleration force on the projectile. The loss in a portion of the rails is considered, along with rail-loss comparisons, applications to the segmented gun, rail losses related to constant resistance per unit length, efficiency expressions, and arc (muzzle) voltage energy.

  19. Estimation procedure of the efficiency of the heat network segment

    NASA Astrophysics Data System (ADS)

    Polivoda, F. A.; Sokolovskii, R. I.; Vladimirov, M. A.; Shcherbakov, V. P.; Shatrov, L. A.

    2017-07-01

    An extensive city heat network contains many segments, each of which transfers heat energy with a different efficiency. This work proposes an original technical approach: the energy efficiency function of a heat network segment is evaluated by interpreting two hyperbolic functions in the form of a transcendental equation. In essence, the problem studied is how the efficiency of the heat network changes with ambient temperature. Using methods of functional analysis, criterial dependences were derived for evaluating the efficiency of a given network segment and for finding the parameters that provide optimal control of heat supply to remote users. In general, the efficiency function of a heat network segment is represented by a multidimensional surface, which allows it to be illustrated graphically. It was shown that the inverse problem can also be solved: from a specified segment efficiency and ambient temperature, the required flow rate and temperature of the heating agent can be found, and requirements on heat insulation and pipe diameters can be formulated. The calculation results were obtained in a strictly analytical form, which allows the derived functional dependences to be examined for extrema (maxima) under given external parameters. It was concluded that this calculation procedure is useful in two practically important cases: for an existing (built) network, where only the flow rate and temperature of the heating agent in the pipes can be changed, and for a network under design, where the material parameters of the network can still be modified. The procedure helps refine pipe diameters and lengths, insulation types, etc. Pipe length may be treated as an independent parameter whose optimization follows other, economic, criteria specific to the project.

  20. Embedded Implementation of VHR Satellite Image Segmentation

    PubMed Central

    Li, Chao; Balla-Arabé, Souleymane; Ginhac, Dominique; Yang, Fan

    2016-01-01

    Processing and analysis of Very High Resolution (VHR) satellite images provide a mass of crucial information, which can be used for urban planning, security issues, or environmental monitoring. However, they are computationally expensive and, thus, time-consuming, while some of the applications, such as natural disaster monitoring and prevention, require high performance. Fortunately, parallel computing techniques and embedded systems have made great progress in recent years, and a series of massively parallel image processing devices, such as digital signal processors or Field Programmable Gate Arrays (FPGAs), have become available to engineers at a very convenient price, demonstrating significant advantages in terms of running cost, embeddability, power consumption, flexibility, etc. In this work, we designed a texture region segmentation method for very high resolution satellite images by using the level set algorithm and multi-kernel theory in a high-abstraction C environment and realized its register-transfer-level implementation with the help of a newly proposed high-level synthesis-based design flow. The evaluation experiments demonstrate that the proposed design can produce high quality image segmentation with a significant running-cost advantage. PMID:27240370

  1. Fast vessel segmentation in retinal images using multi-scale enhancement and second-order local entropy

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Zamora, G.; Bauman, W.; Soliz, P.

    2012-03-01

    Retinal vasculature is one of the most important anatomical structures in digital retinal photographs. Accurate segmentation of retinal blood vessels is an essential task in the automated analysis of retinopathy. This paper presents a new and effective vessel segmentation algorithm that features computational simplicity and fast implementation. The method uses morphological pre-processing to decrease the disturbance of bright structures and lesions before vessel extraction. Next, a vessel probability map is generated by computing the eigenvalues of the second derivatives of the Gaussian-filtered image at multiple scales. Then, second-order local entropy thresholding is applied to segment the vessel map. Lastly, a rule-based decision step, which measures the geometric shape difference between vessels and lesions, is applied to reduce false positives. The algorithm is evaluated on the low-resolution DRIVE and STARE databases and on the publicly available high-resolution image database from Friedrich-Alexander University Erlangen-Nuremberg, Germany. The proposed method achieved performance comparable to state-of-the-art unsupervised vessel segmentation methods at a faster speed on the DRIVE and STARE databases. For the high-resolution fundus image database, the proposed algorithm outperforms an existing approach in both performance and speed. Its efficiency and robustness make the blood vessel segmentation method described here suitable for broad application in the automated analysis of retinal images.
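    The core of the multi-scale vessel probability map can be sketched with a scale-normalised Hessian-eigenvalue filter bank; this is a simplified, Frangi-flavoured stand-in for the paper's step, and the image and scales below are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vessel_probability(img, sigmas=(1.0, 2.0, 3.0)):
    """Dark tubular structures on a bright background give one large positive
    Hessian eigenvalue; take the maximum response over scales."""
    out = np.zeros_like(img, dtype=float)
    for s in sigmas:
        # Scale-normalised second derivatives of the Gaussian-smoothed image.
        hxx = gaussian_filter(img, s, order=(0, 2)) * s**2   # d2/dx2 (columns)
        hyy = gaussian_filter(img, s, order=(2, 0)) * s**2   # d2/dy2 (rows)
        hxy = gaussian_filter(img, s, order=(1, 1)) * s**2
        # Larger eigenvalue of the 2x2 Hessian [[hxx, hxy], [hxy, hyy]].
        tmp = np.sqrt(((hxx - hyy) / 2) ** 2 + hxy**2)
        lam_max = (hxx + hyy) / 2 + tmp
        out = np.maximum(out, lam_max)
    return out

# Toy image: a dark vertical "vessel" about 2 px wide on a bright background.
img = np.ones((32, 32))
img[:, 15:17] = 0.0
vmap = vessel_probability(img)
```

In the paper this map is then binarised by second-order local entropy thresholding rather than a fixed cutoff.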

  2. 3D conformal planning using low segment multi-criteria IMRT optimization

    PubMed Central

    Khan, Fazal; Craft, David

    2014-01-01

    Purpose: To evaluate automated multicriteria optimization (MCO) – designed for intensity modulated radiation therapy (IMRT), but invoked with limited segmentation – to efficiently produce high quality 3D conformal radiation therapy (3D-CRT) plans. Methods: Ten patients previously planned with 3D-CRT to various disease sites (brain, breast, lung, abdomen, pelvis) were replanned with a low-segment inverse multicriteria optimized technique. The MCO-3D plans used the same beam geometry as the original 3D plans, but were limited to an energy of 6 MV. The MCO-3D plans were optimized using fluence-based MCO IMRT and then, after MCO navigation, segmented with a low number of segments. The 3D and MCO-3D plans were compared by evaluating mean dose for all structures, D95 (dose that 95% of the structure receives) and homogeneity indexes for targets, D1 and clinically appropriate dose-volume objectives for individual organs at risk (OARs), monitor units (MUs), and physician preference. Results: The MCO-3D plans reduced the OAR mean doses (41 out of a total of 45 OARs had a mean dose reduction, p<<0.01) and monitor units (seven out of ten plans have reduced MUs; the average reduction is 17%, p=0.08) while maintaining clinical standards on coverage and homogeneity of target volumes. All MCO-3D plans were preferred by physicians over their corresponding 3D plans. Conclusion: High quality 3D plans can be produced using MCO-IMRT optimization, resulting in automated field-in-field-type plans with good monitor unit efficiency. Adopting this technology in a clinic could improve plan quality and streamline treatment plan production by utilizing a single system applicable to both IMRT and 3D planning. PMID:25413405

  3. Multi-atlas propagation based left atrium segmentation coupled with super-voxel based pulmonary veins delineation in late gadolinium-enhanced cardiac MRI

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Zhuang, Xiahai; Khan, Habib; Haldar, Shouvik; Nyktari, Eva; Li, Lei; Ye, Xujiong; Slabaugh, Greg; Wong, Tom; Mohiaddin, Raad; Keegan, Jennifer; Firmin, David

    2017-02-01

    Late Gadolinium-Enhanced Cardiac MRI (LGE CMRI) is a non-invasive technique, which has shown promise in detecting native and post-ablation atrial scarring. To visualize the scarring, a precise segmentation of the left atrium (LA) and pulmonary veins (PVs) anatomy is performed as a first step—usually from an ECG gated CMRI roadmap acquisition—and the enhanced scar regions from the LGE CMRI images are superimposed. The anatomy of the LA and PVs in particular is highly variable and manual segmentation is labor intensive and highly subjective. In this paper, we developed a multi-atlas propagation based whole heart segmentation (WHS) to delineate the LA and PVs from ECG gated CMRI roadmap scans. While this captures the anatomy of the atrium well, the PVs anatomy is less easily visualized. The process is therefore augmented by semi-automated manual strokes for PVs identification in the registered LGE CMRI data. This allows us to extract more accurate anatomy than the fully automated WHS. Both qualitative visualization and quantitative assessment with respect to manual segmented ground truth showed that our method is efficient and effective with an overall mean Dice score of 0.91.

  4. Optimization of segmented thermoelectric generator using Taguchi and ANOVA techniques.

    PubMed

    Kishore, Ravi Anant; Sanghadasa, Mohan; Priya, Shashank

    2017-12-01

    Recent studies have demonstrated that segmented thermoelectric generators (TEGs) can operate over large thermal gradients and thus provide better performance (reported efficiency up to 11%) than traditional TEGs comprising a single thermoelectric (TE) material. However, segmented TEGs are still in the early stages of development due to the inherent complexity of their design optimization and manufacturability. In this study, we demonstrate physics-based numerical techniques along with analysis of variance (ANOVA) and the Taguchi optimization method for optimizing the performance of segmented TEGs. We considered a comprehensive set of design parameters, such as the geometrical dimensions of the p-n legs, the height of segmentation, the hot-side temperature, and the load resistance, in order to optimize the output power and efficiency of segmented TEGs. Using state-of-the-art TE material properties and appropriate statistical tools, we provide a near-optimum TEG configuration with only 25 experiments, compared to the 3125 experiments needed by conventional optimization methods. The effect of environmental factors on the optimization of segmented TEGs is also studied. The Taguchi results are validated against results obtained using the traditional full-factorial optimization technique, and a TEG configuration for simultaneous optimization of power and efficiency is obtained.
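    The flavour of the Taguchi approach — covering factor levels with a small orthogonal array and reading off main effects — can be sketched with a tiny example. Below is an L4 array for three two-level factors with made-up responses, far simpler than the paper's 25-run design over five factors.

```python
import numpy as np

# L4 orthogonal array: 3 two-level factors covered in only 4 runs
# (a full factorial would need 2**3 = 8 runs); every pair of columns
# contains each level combination equally often.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
# Hypothetical measured responses (e.g. TEG output power) for the 4 runs.
y = np.array([3.0, 5.0, 6.0, 4.5])

def main_effects(array, y):
    """Mean response at each level of each factor; return the maximising level."""
    best = []
    for f in range(array.shape[1]):
        means = [y[array[:, f] == lvl].mean() for lvl in (0, 1)]
        best.append(int(np.argmax(means)))
    return best

best_levels = main_effects(L4, y)
```

ANOVA would then apportion the response variance among the factors to judge which main effects are significant.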

  5. Nephron segment-specific gene expression using AAV vectors.

    PubMed

    Asico, Laureano D; Cuevas, Santiago; Ma, Xiaobo; Jose, Pedro A; Armando, Ines; Konkalmatt, Prasad R

    2018-02-26

    AAV9 vector provides efficient gene transfer in all segments of the renal nephron, with minimum expression in non-renal cells, when administered retrogradely via the ureter. It is important to restrict the transgene expression to the desired cell type within the kidney, so that the physiological endpoints represent the function of the transgene expressed in that specific cell type within kidney. We hypothesized that segment-specific gene expression within the kidney can be accomplished using the highly efficient AAV9 vectors carrying the promoters of genes that are expressed exclusively in the desired segment of the nephron in combination with administration by retrograde infusion into the kidney via the ureter. We constructed AAV vectors carrying eGFP under the control of: kidney-specific cadherin (KSPC) gene promoter for expression in the entire nephron; Na+/glucose co-transporter (SGLT2) gene promoter for expression in the S1 and S2 segments of the proximal tubule; sodium-potassium-2-chloride co-transporter (NKCC2) gene promoter for expression in the thick ascending limb of Henle's loop (TALH); E-cadherin (ECAD) gene promoter for expression in the collecting duct (CD); and cytomegalovirus (CMV) early promoter that provides expression in most of the mammalian cells, as control. We tested the specificity of the promoter constructs in vitro for cell type-specific expression in mouse kidney cells in primary culture, followed by retrograde infusion of the AAV vectors via the ureter in the mouse. Our data show that AAV9 vector, in combination with the segment-specific promoters administered by retrograde infusion via the ureter, provides renal nephron segment-specific gene expression. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Multi-segment detector array for hybrid reflection-mode ultrasound and optoacoustic tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Merčep, Elena; Burton, Neal C.; Deán-Ben, Xosé Luís.; Razansky, Daniel

    2017-02-01

    The complementary contrast of the optoacoustic (OA) and pulse-echo ultrasound (US) modalities makes the combined use of these imaging technologies highly advantageous. Due to their different physical contrast mechanisms, the development of a detector array optimally suited for both modalities is one of the challenges in efficiently implementing a single OA-US imaging device. We demonstrate the imaging performance of the first hybrid detector array whose novel design, incorporating array segments of linear and concave geometry, optimally supports image acquisition in both reflection-mode ultrasonography and optoacoustic tomography. The hybrid detector array has 256 elements in three segments of different geometry and variable pitch: a central 128-element linear segment with a pitch of 0.25 mm, ideally suited for pulse-echo US imaging, and two outer 64-element segments with concave geometry and a 0.6 mm pitch optimized for OA image acquisition. Interleaved OA and US image acquisition at up to 25 fps is facilitated through a custom-made multiplexer unit. The spatial resolution of the transducer was characterized in numerical simulations and validated in phantom experiments; it is 230 and 300 μm in the OA and US imaging modes, respectively. Imaging performance of the multi-segment detector array was demonstrated experimentally in a series of imaging sessions with healthy volunteers. Employing mixed array geometries achieves both excellent OA contrast with a large field of view and US contrast for complementary structural features with reduced side lobes and improved resolution. The newly designed hybrid detector array, comprising segments of linear and concave geometries, optimally fulfills the requirements for efficient US and OA imaging and may expand the applicability of the developed hybrid OPUS imaging technology and accelerate its clinical translation.

  7. Antibody engineering reveals the important role of J segments in the production efficiency of llama single-domain antibodies in Saccharomyces cerevisiae.

    PubMed

    Gorlani, A; Hulsik, D Lutje; Adams, H; Vriend, G; Hermans, P; Verrips, T

    2012-01-01

    Variable domains of llama heavy-chain antibodies (VHH) are becoming a potent tool for a wide range of biotechnological and medical applications. Because of structural features typical of their single-domain nature, they are relatively easy to produce in lower eukaryotes, but it is not uncommon for some molecules to have poor secretion efficiency. We therefore set out to study the production yield of VHH. We computationally identified five key residues that are crucial for folding and secretion, and we validated their importance with systematic site-directed mutations. The observation that all key residues were localised in the V segment, in proximity to the J segment of VHH, led us to study the importance of the J segment in secretion efficiency. Intriguingly, we found that the use of specific J segments in VHH could strongly influence the production yield. Sequence analysis and expression experiments strongly suggested that interactions with chaperones, especially with the J segment, are a crucial aspect of the production yield of VHH.

  8. Breast tumor segmentation in DCE-MRI using fully convolutional networks with an application in radiogenomics

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Saha, Ashirbani; Zhu, Zhe; Mazurowski, Maciej A.

    2018-02-01

    Breast tumor segmentation based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) remains an active and challenging problem. Previous studies often rely on manual annotation of tumor regions, which is not only time-consuming but also error-prone. Recent studies have shown high promise for deep learning-based methods in various segmentation problems. However, these methods usually face the challenge of a limited number (e.g., tens or hundreds) of medical images for training, leading to sub-optimal segmentation performance. Also, previous methods cannot efficiently deal with the prevalent class-imbalance problem in tumor segmentation, where the number of voxels in tumor regions is much lower than that in the background area. To address these issues, in this study we propose a mask-guided hierarchical learning (MHL) framework for breast tumor segmentation via fully convolutional networks (FCNs). Our strategy is to first decompose the original difficult problem into several simpler sub-problems and then solve them in a hierarchical manner. To precisely identify the locations of tumors that underwent a biopsy, we further propose an FCN model to detect two landmarks defined on the nipples. Finally, based on both the segmentation probability maps and the identified landmarks, we select biopsied tumors from all detected tumors via a tumor selection strategy using the pathology location. We validate our MHL method using data from 272 patients and achieve a mean Dice similarity coefficient (DSC) of 0.72 in breast tumor segmentation. Finally, in a radiogenomic analysis, we show that previously developed image features achieve comparable performance for identifying the luminal A subtype when applied to the automatic segmentation and to a semi-manual segmentation, demonstrating high promise for fully automated radiogenomic analysis in breast cancer.
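    For reference, the Dice similarity coefficient used to score segmentations above (and in several other records in this listing) is simply twice the overlap divided by the total mask sizes; the masks below are toy examples.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy masks: the predicted tumour overlaps 3 of 4 ground-truth voxels.
gt   = np.array([[1, 1], [1, 1]])
pred = np.array([[1, 1], [1, 0]])
score = dice(gt, pred)   # 2*3 / (4+3)
```

A DSC of 1.0 means perfect agreement, 0.0 no overlap; the 0.72 reported above sits in the range typical for automatic tumour segmentation.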

  9. Tandem catalysis for the preparation of cylindrical polypeptide brushes.

    PubMed

    Rhodes, Allison J; Deming, Timothy J

    2012-11-28

    Here, we report a method for synthesis of cylindrical copolypeptide brushes via N-carboxyanhydride (NCA) polymerization utilizing a new tandem catalysis approach that allows preparation of brushes with controlled segment lengths in a straightforward, one-pot procedure requiring no intermediate isolation or purification steps. To obtain high-density brush copolypeptides, we used a "grafting from" approach where alloc-α-aminoamide groups were installed onto the side chains of NCAs to serve as masked initiators. These groups were inert during cobalt-initiated NCA polymerization and gave allyloxycarbonyl-α-aminoamide-substituted polypeptide main chains. The alloc-α-aminoamide groups were then activated in situ using nickel to generate initiators for growth of side-chain brush segments. This use of stepwise tandem cobalt and nickel catalysis was found to be an efficient method for preparation of high-chain-density, cylindrical copolypeptide brushes, where both the main chains and side chains can be prepared with controlled segment lengths.

  10. Mass balance in the monitoring of pollutants in tidal rivers of the Guanabara Bay, Rio de Janeiro, Brazil.

    PubMed

    da Silveira, Raquel Pinhão; Rodrigues, Ana Paula de Castro; Santelli, Ricardo Erthal; Cordeiro, Renato Campello; Bidone, Edison Dausacker

    2011-10-01

    This study addressed the identification and monitoring of pollution sources of terrestrial origin in rivers (domestic sewage and industrial effluents) and of critical fluvial segments in highly polluted environments under tidal influence (mixing marine and continental sources) in the Guanabara Bay Basin, Rio de Janeiro, Brazil. The mass balance of contaminants was determined under conditions of continuous flow (low tide) during the dry season (lower dilution capability). The results allowed the evaluation of the potential for contaminant mass generation by the different river segments and the estimation of their natural and anthropogenic components. The water quality of the Iguaçú and Sarapuí Rivers was evaluated for metals and biochemical oxygen demand. The method gave an excellent response, including the possibility of identifying sources and ranking contaminated river segments. The approach also offers fast execution and data interpretation, making it highly efficient.

  11. Marginal Space Deep Learning: Efficient Architecture for Volumetric Image Parsing.

    PubMed

    Ghesu, Florin C; Krubasik, Edward; Georgescu, Bogdan; Singh, Vivek; Zheng, Yefeng; Hornegger, Joachim; Comaniciu, Dorin

    2016-05-01

    Robust and fast solutions for anatomical object detection and segmentation support the entire clinical workflow, from diagnosis and patient stratification to therapy planning, intervention, and follow-up. Current state-of-the-art techniques for parsing volumetric medical image data are typically based on machine learning methods that exploit large annotated image databases. Two main challenges need to be addressed: efficiency in scanning high-dimensional parametric spaces, and the need for representative image features, which otherwise require significant manual engineering. We propose a pipeline for object detection and segmentation in the context of volumetric image parsing, solving a two-step learning problem: anatomical pose estimation and boundary delineation. For this task we introduce Marginal Space Deep Learning (MSDL), a novel framework exploiting both the strengths of efficient object parametrization in hierarchical marginal spaces and the automated feature design of Deep Learning (DL) network architectures. In the 3D context, the application of deep learning systems is limited by the very high complexity of the parametrization. More specifically, nine parameters are necessary to describe a restricted affine transformation in 3D, resulting in a prohibitive number of scanning hypotheses (on the order of billions). The mechanism of marginal space learning provides excellent run-time performance by learning classifiers in clustered, high-probability regions in spaces of gradually increasing dimensionality. To further increase computational efficiency and robustness, our system learns sparse adaptive data sampling patterns that automatically capture the structure of the input. Given the object localization, we propose a DL-based active shape model to estimate the non-rigid object boundary.
    Experimental results are presented for the aortic valve in ultrasound, using an extensive dataset of 2891 volumes from 869 patients, and show significant improvements of up to 45.2% over the state of the art. To our knowledge, this is the first successful demonstration of the potential of DL for detection and segmentation in full 3D data with parametrized representations.
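
    The marginal-space idea of scanning progressively larger parameter spaces can be illustrated as: score candidates in a low-dimensional space first, keep only the high-probability survivors, then extend them with further parameters. A toy sketch (the scoring function below is a stand-in for the learned classifier, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def score(candidates, target):
    """Stand-in for a learned classifier: higher score = closer to target."""
    return -np.linalg.norm(candidates - target[: candidates.shape[1]], axis=1)

target = np.array([0.3, 0.7, 0.5])               # true (x, y, scale)

# Stage 1: scan the position marginal space only, keep top-50 candidates
positions = rng.uniform(0, 1, size=(1000, 2))
top = positions[np.argsort(score(positions, target))[-50:]]

# Stage 2: extend the survivors with a scale parameter and rescore
scales = np.linspace(0, 1, 21)
extended = np.array([[x, y, s] for x, y in top for s in scales])
best = extended[np.argmax(score(extended, target))]
print(best)   # close to (0.3, 0.7, 0.5)
```

    Only 1000 + 1050 hypotheses are evaluated instead of the full 3-D grid, which is the run-time advantage the abstract attributes to marginal space learning.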

  12. Electro Optical Properties of Copolymer Blends: Lasing, Electroluminescence and Photophysics

    DTIC Science & Technology

    2006-04-15

    The systems studied focused on novel conjugated main-chain structures with high photoluminescent and electroluminescent quantum yields. The structures incorporated fluorene-containing moieties as the quantum-efficient group. The properties of segmented copolymers that incorporate fluorenes were compared to the homo-PPV-type systems.

  13. Power generation from nanostructured PbTe-based thermoelectrics: comprehensive development from materials to modules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Xiaokai; Jood, Priyanka; Ohta, Michihiro

    2016-01-01

    In this work, we demonstrate the use of high-performance nanostructured PbTe-based materials in high-conversion-efficiency thermoelectric modules. We fabricated samples of PbTe-2% MgTe doped with 4% Na and PbTe doped with 0.2% PbI2, both with a high thermoelectric figure of merit (ZT), and sintered them with Co-Fe diffusion barriers for use as p- and n-type thermoelectric legs, respectively. Transmission electron microscopy of the PbTe legs reveals two shapes of nanostructures, disk-like and spherical. The reduction in lattice thermal conductivity through nanostructuring gives a ZT of ~1.8 at 810 K for p-type PbTe and ~1.4 at 750 K for n-type PbTe. A nanostructured PbTe-based module and a segmented-leg module using Bi2Te3 and nanostructured PbTe were fabricated and tested with hot-side temperatures up to 873 K in a vacuum. Maximum conversion efficiencies of ~8.8% for a temperature difference (Delta T) of 570 K and ~11% for a Delta T of 590 K were demonstrated in the nanostructured PbTe-based module and the segmented Bi2Te3/nanostructured PbTe module, respectively. Three-dimensional finite-element simulations predict that the maximum conversion efficiency of the nanostructured PbTe-based module and the segmented Bi2Te3/nanostructured PbTe module reaches 12.2% for a Delta T of 570 K and 15.6% for a Delta T of 590 K, respectively, which could be achieved if the electrical and thermal contact between the nanostructured PbTe legs and the Cu interconnecting electrodes is further improved.
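
    For context, the ideal maximum efficiency of a thermoelectric generator is commonly estimated from the Carnot factor and the device figure of merit ZT evaluated at the mean temperature. A quick numerical check under idealized assumptions (no contact or thermal losses; the function name is illustrative):

```python
from math import sqrt

def teg_max_efficiency(t_hot, t_cold, zt_mean):
    """Ideal maximum TEG efficiency: Carnot factor times a
    ZT-dependent reduction factor."""
    carnot = (t_hot - t_cold) / t_hot
    m = sqrt(1.0 + zt_mean)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

# Rough check against the module conditions in the abstract:
# hot side 873 K, Delta T = 570 K -> cold side 303 K, assume ZT ~ 1
print(round(teg_max_efficiency(873, 303, 1.0), 3))  # 0.154
```

    The ideal value (~15%) sits above the measured ~8.8%, consistent with the abstract's point that contact resistances between legs and electrodes limit the realized efficiency.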

  14. Multiple Active Contours Guided by Differential Evolution for Medical Image Segmentation

    PubMed Central

    Cruz-Aceves, I.; Avina-Cervantes, J. G.; Lopez-Hernandez, J. M.; Rostro-Gonzalez, H.; Garcia-Capulin, C. H.; Torres-Cisneros, M.; Guzman-Cabrera, R.

    2013-01-01

    This paper presents a new image segmentation method based on multiple active contours guided by differential evolution, called MACDE. The segmentation method uses differential evolution over a polar coordinate system to increase the exploration and exploitation capabilities relative to the classical active contour model. To evaluate the performance of the proposed method, a set of synthetic images with complex objects, Gaussian noise, and deep concavities is introduced. Subsequently, MACDE is applied to datasets of sequential computed tomography and magnetic resonance images, which contain the human heart and the human left ventricle, respectively. Finally, to obtain a quantitative and qualitative evaluation of the medical image segmentations compared to regions outlined by experts, a set of distance and similarity metrics has been adopted. According to the experimental results, MACDE outperforms the classical active contour model and the interactive Tseng method in terms of efficiency and robustness in obtaining the optimal control points, and attains high segmentation accuracy. PMID:23983809
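
    The optimizer behind approaches like MACDE is, at its core, the classic DE/rand/1/bin differential-evolution update. A minimal sketch on a generic objective (here minimizing the sphere function, not the paper's contour energy; all names are illustrative):

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, f_weight=0.8, cr=0.9,
                           generations=100, seed=0):
    """Minimal DE/rand/1/bin minimizer."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # mutate: a + F * (b - c) with three distinct other members
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + f_weight * (b - c), lo, hi)
            # binomial crossover, guaranteeing at least one mutated gene
            cross = rng.random(dim) < cr
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft < fitness[i]:       # greedy selection
                pop[i], fitness[i] = trial, ft
    best = np.argmin(fitness)
    return pop[best], fitness[best]

x, fx = differential_evolution(lambda v: np.sum(v ** 2), [(-5, 5)] * 3)
print(x, fx)   # near the origin, fx close to 0
```

    In MACDE the decision variables would be the polar-coordinate control points of each contour and the objective the contour energy; the update rule itself is unchanged.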

  15. Lung segmentation from HRCT using united geometric active contours

    NASA Astrophysics Data System (ADS)

    Liu, Junwei; Li, Chuanfu; Xiong, Jin; Feng, Huanqing

    2007-12-01

    Accurate lung segmentation from high-resolution CT images is a challenging task due to intricate tracheal structures, missing boundary segments, and complex lung anatomy. One popular method is based on gray-level thresholding; however, its results are usually rough. A united geometric active contours model based on level sets is proposed for lung segmentation in this paper. In particular, this method combines local boundary information with a region statistics-based model: 1) the boundary term ensures the integrity of the lung tissue; 2) the region term makes the level set function evolve according to global characteristics, independent of the initial settings. A penalizing energy term is introduced into the model, which allows the level set function to evolve without re-initialization. The method is found to be much more efficient for lung segmentation than methods based only on boundary or region information. Results are shown via 3D lung surface reconstruction, indicating that the method can play an important role in the design of computer-aided diagnostic (CAD) systems.
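
    The re-initialization-free penalty described here matches the standard distance-regularization term of the level-set literature (e.g., Li et al.), which keeps the level set function close to a signed distance function (assuming that common form; the paper's exact weights may differ):

```latex
P(\phi) = \int_{\Omega} \tfrac{1}{2}\left(\lvert\nabla\phi\rvert - 1\right)^{2}\,dx
```

    Adding \(\mu P(\phi)\) to the boundary and region energies penalizes deviation of \(\lvert\nabla\phi\rvert\) from 1, so the evolving \(\phi\) stays well-conditioned without the periodic re-initialization steps classical level-set methods require.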

  16. SEGMA: An Automatic SEGMentation Approach for Human Brain MRI Using Sliding Window and Random Forests

    PubMed Central

    Serag, Ahmed; Wilkinson, Alastair G.; Telford, Emma J.; Pataky, Rozalia; Sparrow, Sarah A.; Anblagan, Devasuda; Macnaught, Gillian; Semple, Scott I.; Boardman, James P.

    2017-01-01

    Quantitative volumes from brain magnetic resonance imaging (MRI) acquired across the life course may be useful for investigating long term effects of risk and resilience factors for brain development and healthy aging, and for understanding early life determinants of adult brain structure. Therefore, there is an increasing need for automated segmentation tools that can be applied to images acquired at different life stages. We developed an automatic segmentation method for human brain MRI, where a sliding window approach and a multi-class random forest classifier were applied to high-dimensional feature vectors for accurate segmentation. The method performed well on brain MRI data acquired from 179 individuals, analyzed in three age groups: newborns (38–42 weeks gestational age), children and adolescents (4–17 years) and adults (35–71 years). As the method can learn from partially labeled datasets, it can be used to segment large-scale datasets efficiently. It could also be applied to different populations and imaging modalities across the life course. PMID:28163680
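
    The sliding-window step — turning each voxel's neighborhood into a high-dimensional feature vector for the multi-class random-forest classifier — can be sketched in numpy (2-D and brute-force here for brevity; the classifier itself and the paper's actual features are outside this snippet):

```python
import numpy as np

def sliding_window_features(image, half=1):
    """Return one flattened (2*half+1)^2 patch per valid window position."""
    win = 2 * half + 1
    h, w = image.shape
    feats = np.empty(((h - win + 1) * (w - win + 1), win * win))
    idx = 0
    for i in range(h - win + 1):
        for j in range(w - win + 1):
            feats[idx] = image[i:i + win, j:j + win].ravel()
            idx += 1
    return feats

img = np.arange(16.0).reshape(4, 4)
X = sliding_window_features(img)
print(X.shape)   # (4, 9): four 3x3 windows fit in a 4x4 image
```

    Each row of `X` would then be one training or test sample for the random forest, with labels taken from the (possibly partial) manual annotations.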

  17. Segmentation of optic disc and optic cup in retinal fundus images using shape regression.

    PubMed

    Sedai, Suman; Roy, Pallab K; Mahapatra, Dwarikanath; Garnavi, Rahil

    2016-08-01

    Glaucoma is one of the leading causes of blindness. Manual examination of the optic cup and disc is a standard procedure used for detecting glaucoma. This paper presents a fully automatic regression-based method which accurately segments the optic cup and disc in retinal colour fundus images. First, we roughly segment the optic disc using a circular Hough transform. The approximated optic disc is then used to compute the initial optic disc and cup shapes. We propose a robust and efficient cascaded shape regression method which iteratively learns the final shape of the optic cup and disc from a given initial shape. Gradient-boosted regression trees are employed to learn each regressor in the cascade. A novel data augmentation approach is proposed to improve the regressors' performance by generating synthetic training data. The proposed optic cup and disc segmentation method is applied to an image set of 50 patients and demonstrates high segmentation accuracy for the optic cup and disc, with Dice metrics of 0.95 and 0.85 respectively. A comparative study shows that our proposed method outperforms state-of-the-art optic cup and disc segmentation methods.
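
    The rough disc localization via circular Hough transform amounts to letting every edge pixel vote for the circle centres it could lie on. A minimal fixed-radius sketch in numpy (real pipelines scan a radius range and use a proper edge detector; names are illustrative):

```python
import numpy as np

def hough_circle_center(edge_points, radius, shape, n_angles=90):
    """Vote for circle centres at a known radius; return the accumulator peak."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for y, x in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(np.argmax(acc), shape)

# Synthetic circle of radius 10 centred at (30, 40) in a 64x64 image
t = np.linspace(0, 2 * np.pi, 120, endpoint=False)
pts = np.stack([30 + 10 * np.sin(t), 40 + 10 * np.cos(t)], axis=1)
print(hough_circle_center(pts, 10, (64, 64)))  # (30, 40)
```

    The accumulator peak gives the disc centre; in the paper this rough circle only initializes the cascaded shape regression that refines the boundary.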

  18. A superpixel-based framework for automatic tumor segmentation on breast DCE-MRI

    NASA Astrophysics Data System (ADS)

    Yu, Ning; Wu, Jia; Weinstein, Susan P.; Gaonkar, Bilwaj; Keller, Brad M.; Ashraf, Ahmed B.; Jiang, YunQing; Davatzikos, Christos; Conant, Emily F.; Kontos, Despina

    2015-03-01

    Accurate and efficient automated tumor segmentation in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is highly desirable for computer-aided tumor diagnosis. We propose a novel automatic segmentation framework which incorporates mean-shift smoothing, superpixel-wise classification, pixel-wise graph-cuts partitioning, and morphological refinement. A set of 15 breast DCE-MR images, obtained from the American College of Radiology Imaging Network (ACRIN) 6657 I-SPY trial, were manually segmented to generate tumor masks (as ground truth) and breast masks (as regions of interest). Four state-of-the-art segmentation approaches based on diverse models were also utilized for comparison. Based on five standard evaluation metrics for segmentation, the proposed framework consistently outperformed all other approaches. The performance of the proposed framework was: 1) 0.83 for Dice similarity coefficient, 2) 0.96 for pixel-wise accuracy, 3) 0.72 for VOC score, 4) 0.79 mm for mean absolute difference, and 5) 11.71 mm for maximum Hausdorff distance, which surpassed the second best method (i.e., adaptive geodesic transformation), a semi-automatic algorithm depending on precise initialization. Our results suggest promising potential applications of our segmentation framework in assisting analysis of breast carcinomas.
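
    Among the metrics above, the maximum Hausdorff distance measures the worst-case disagreement between two boundaries: the largest distance from any point of one set to the nearest point of the other. A direct brute-force numpy sketch (fine for contour-sized point sets):

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between point sets of shape (N,2), (M,2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise dists
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [1.0, 1.0]])
print(hausdorff_distance(a, b))  # 1.0
```

    Unlike Dice, which averages over the whole region, Hausdorff is driven by the single worst boundary point, which is why both are usually reported together.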

  19. RNase non-sensitive and endocytosis independent siRNA delivery system: delivery of siRNA into tumor cells and high efficiency induction of apoptosis

    NASA Astrophysics Data System (ADS)

    Jiang, Xinglu; Wang, Guobao; Liu, Ru; Wang, Yaling; Wang, Yongkui; Qiu, Xiaozhong; Gao, Xueyun

    2013-07-01

    To date, RNase degradation and endosome/lysosome trapping remain serious problems for siRNA-based molecular therapy, although many kinds of delivery formulations have been tried. In this report, a cell-penetrating peptide (CPP, comprising a positively charged segment, a linear segment, and a hydrophobic segment) and a single-wall carbon nanotube (SWCNT) are combined by a simple method to act as an siRNA delivery system. The siRNAs first form a complex with the positively charged segment of the CPP via electrostatic forces, and the siRNA-CPP then coats the surface of the SWCNT via hydrophobic interactions. This siRNA delivery system is non-sensitive to RNase and can avoid endosome/lysosome trapping in vitro. When this siRNA delivery system was studied in HeLa cells, siRNA uptake was observed in 98% of the cells, and over 70% of the mRNA of mammalian target of rapamycin (mTOR) was knocked down, triggering cell apoptosis on a significant scale. Our siRNA delivery system is easy to handle and benign to cultured cells, providing a very efficient approach for delivering siRNA into the cell cytosol and cleaving the target mRNA therein.

  20. Antennas for mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Huang, John

    1991-01-01

    A NASA sponsored program, called the Mobile Satellite (MSAT) system, has prompted the development of several innovative antennas at L-band frequencies. In the space segment of the MSAT system, an efficient, light weight, circularly polarized microstrip array that uses linearly polarized elements was developed as a multiple beam reflector feed system. In the ground segment, a low-cost, low-profile, and very efficient microstrip Yagi array was developed as a medium-gain mechanically steered vehicle antenna. Circularly shaped microstrip patches excited at higher-order modes were also developed as low-gain vehicle antennas. A more recent effort called for the development of a 20/30 GHz mobile terminal antenna for future-generation mobile satellite communications. To combat the high insertion loss encountered at 20/30 GHz, series-fed Monolithic Microwave Integrated Circuit (MMIC) microstrip array antennas are currently being developed. These MMIC arrays may lead to the development of several small but high-gain Ka-band antennas for the Personal Access Satellite Service planned for the 2000s.

  1. Seed robustness of oriented relative fuzzy connectedness: core computation and its applications

    NASA Astrophysics Data System (ADS)

    Tavares, Anderson C. M.; Bejar, Hans H. C.; Miranda, Paulo A. V.

    2017-02-01

    In this work, we present a formal definition and an efficient algorithm to compute the cores of Oriented Relative Fuzzy Connectedness (ORFC), a recent seed-based segmentation technique. The core is a region within which a seed can be moved without altering the segmentation, an important aspect for robust techniques and for reducing user effort. We show how ORFC cores can be used to build a powerful hybrid image segmentation approach. We also provide some new theoretical relations between ORFC and the Oriented Image Foresting Transform (OIFT), as well as between their cores. Experimental results across several methods show that the hybrid approach preserves high accuracy, avoids the shrinking problem, and, owing to the properties of the cores, is robust to seed placement inside the desired object.

  2. News video story segmentation method using fusion of audio-visual features

    NASA Astrophysics Data System (ADS)

    Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang

    2007-11-01

    News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Unlike prior works, which are based on visual feature transforms, the proposed technique uses audio features as a baseline and fuses visual features with them to refine the results. First, it selects silence clips as audio feature candidate points, and selects shot boundaries and anchor shots as two kinds of visual feature candidate points. Then, taking the audio feature candidates as cues, it develops different fusion methods, which effectively use the diverse types of visual candidates to refine the audio candidates and obtain story boundaries. Experimental results show that this method has high efficiency and adaptability to different kinds of news video.

  3. Semantic Segmentation of Forest Stands of Pure Species as a Global Optimization Problem

    NASA Astrophysics Data System (ADS)

    Dechesne, C.; Mallet, C.; Le Bris, A.; Gouet-Brunet, V.

    2017-05-01

    Forest stand delineation is a fundamental task for forest management purposes that is still mainly performed manually, through visual inspection of geospatial (very) high spatial resolution images. Stand detection has barely been addressed in the literature, which has mainly focused, in forested environments, on individual tree extraction and tree species classification. From a methodological point of view, stand detection can be considered a semantic segmentation problem. This offers two advantages. First, one can retrieve the dominant tree species per segment. Secondly, one can benefit from existing low-level tree species label maps from the literature as a basis for high-level object extraction. Thus, the semantic segmentation issue becomes a regularization issue in a weakly structured environment and can be formulated in an energy-based framework. This paper aims at investigating which regularization strategies from the literature are best adapted to delineating and classifying forest stands of pure species. Both airborne lidar point clouds and multispectral very high spatial resolution images are integrated for that purpose. The local methods (such as filtering and probabilistic relaxation) are not suited to this problem, since they increase the classification accuracy by less than 5%. The global methods, based on an energy model, tend to be more efficient, with an accuracy gain of up to 15%. The segmentation results using such models have an accuracy ranging from 96% to 99%.

  4. A scalable method to improve gray matter segmentation at ultra high field MRI.

    PubMed

    Gulban, Omer Faruk; Schneider, Marian; Marquardt, Ingo; Haast, Roy A M; De Martino, Federico

    2018-01-01

    High-resolution (functional) magnetic resonance imaging (MRI) at ultra high magnetic fields (7 Tesla and above) enables researchers to study how anatomical and functional properties change within the cortical ribbon, along surfaces and across cortical depths. These studies require an accurate delineation of the gray matter ribbon, which often suffers from inclusion of blood vessels, dura mater and other non-brain tissue. Residual segmentation errors are commonly corrected by browsing the data slice-by-slice and manually changing labels. This task becomes increasingly laborious and prone to error at higher resolutions since both work and error scale with the number of voxels. Here we show that many mislabeled, non-brain voxels can be corrected more efficiently and semi-automatically by representing three-dimensional anatomical images using two-dimensional histograms. We propose both a uni-modal (based on first spatial derivative) and multi-modal (based on compositional data analysis) approach to this representation and quantify the benefits in 7 Tesla MRI data of nine volunteers. We present an openly accessible Python implementation of these approaches and demonstrate that editing cortical segmentations using two-dimensional histogram representations as an additional post-processing step aids existing algorithms and yields improved gray matter borders. By making our data and corresponding expert (ground truth) segmentations openly available, we facilitate future efforts to develop and test segmentation algorithms on this challenging type of data.
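
    The uni-modal two-dimensional histogram described here pairs each voxel's intensity with its first spatial derivative (gradient magnitude), so tissue classes form clusters that can be selected at once instead of slice-by-slice. A small numpy sketch of building such a representation (not the paper's Python implementation; names are illustrative):

```python
import numpy as np

def intensity_gradient_histogram(volume, bins=64):
    """2-D histogram of (intensity, gradient magnitude) over all voxels."""
    grads = np.gradient(volume.astype(float))          # one array per axis
    gmag = np.sqrt(sum(g ** 2 for g in grads))         # gradient magnitude
    hist, x_edges, y_edges = np.histogram2d(
        volume.ravel(), gmag.ravel(), bins=bins)
    return hist, x_edges, y_edges

vol = np.random.default_rng(0).normal(size=(16, 16, 16))
hist, _, _ = intensity_gradient_histogram(vol)
print(hist.shape, int(hist.sum()))  # (64, 64) 4096
```

    Editing then proceeds in this 2-D space: selecting a histogram region selects every voxel that falls into those (intensity, gradient) bins, which is why the manual effort no longer scales with the number of slices.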

  5. A scalable method to improve gray matter segmentation at ultra high field MRI

    PubMed Central

    De Martino, Federico

    2018-01-01

    High-resolution (functional) magnetic resonance imaging (MRI) at ultra high magnetic fields (7 Tesla and above) enables researchers to study how anatomical and functional properties change within the cortical ribbon, along surfaces and across cortical depths. These studies require an accurate delineation of the gray matter ribbon, which often suffers from inclusion of blood vessels, dura mater and other non-brain tissue. Residual segmentation errors are commonly corrected by browsing the data slice-by-slice and manually changing labels. This task becomes increasingly laborious and prone to error at higher resolutions since both work and error scale with the number of voxels. Here we show that many mislabeled, non-brain voxels can be corrected more efficiently and semi-automatically by representing three-dimensional anatomical images using two-dimensional histograms. We propose both a uni-modal (based on first spatial derivative) and multi-modal (based on compositional data analysis) approach to this representation and quantify the benefits in 7 Tesla MRI data of nine volunteers. We present an openly accessible Python implementation of these approaches and demonstrate that editing cortical segmentations using two-dimensional histogram representations as an additional post-processing step aids existing algorithms and yields improved gray matter borders. By making our data and corresponding expert (ground truth) segmentations openly available, we facilitate future efforts to develop and test segmentation algorithms on this challenging type of data. PMID:29874295

  6. Segmentation of histological images and fibrosis identification with a convolutional neural network.

    PubMed

    Fu, Xiaohang; Liu, Tong; Xiong, Zhaohan; Smaill, Bruce H; Stiles, Martin K; Zhao, Jichao

    2018-07-01

    Segmentation of histological images is one of the most crucial tasks for many biomedical analyses involving quantification of certain tissue types, such as fibrosis via Masson's trichrome staining. However, challenges are posed by the high variability and complexity of structural features in such images, in addition to imaging artifacts. Further, the conventional approach of manual thresholding is labor-intensive, and highly sensitive to inter- and intra-image intensity variations. An accurate and robust automated segmentation method is of high interest. We propose and evaluate an elegant convolutional neural network (CNN) designed for segmentation of histological images, particularly those with Masson's trichrome stain. The network comprises 11 successive convolutional - rectified linear unit - batch normalization layers. It outperformed state-of-the-art CNNs on a dataset of cardiac histological images (labeling fibrosis, myocytes, and background) with a Dice similarity coefficient of 0.947. With 100 times fewer (only 300,000) trainable parameters than the state-of-the-art, our CNN is less susceptible to overfitting, and is efficient. Additionally, it retains image resolution from input to output, captures fine-grained details, and can be trained end-to-end smoothly. To the best of our knowledge, this is the first deep CNN tailored to the problem of concern, and may potentially be extended to solve similar segmentation tasks to facilitate investigations into pathology and clinical treatment. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Cell segmentation in time-lapse fluorescence microscopy with temporally varying sub-cellular fusion protein patterns.

    PubMed

    Bunyak, Filiz; Palaniappan, Kannappan; Chagin, Vadim; Cardoso, M

    2009-01-01

    Fluorescently tagged proteins such as GFP-PCNA produce rich, dynamically varying textural patterns of foci distributed in the nucleus. This enables the behavioral study of sub-cellular structures during different phases of the cell cycle. However, the varying punctate patterns of fluorescence, drastic changes in SNR, shape and position during mitosis, and the abundance of touching cells require more sophisticated algorithms for reliable automatic cell segmentation and lineage analysis. Since the cell nuclei are non-uniform in appearance, a distribution-based modeling of foreground classes is essential. The recently proposed graph partitioning active contours (GPAC) algorithm supports region descriptors and flexible distance metrics. We extend GPAC for fluorescence-based cell segmentation using regional density functions and dramatically improve its efficiency for segmentation from O(N^4) to O(N^2) for an image with N^2 pixels, making it practical and scalable for high-throughput microscopy imaging studies.

  8. Concentrated Solar Thermoelectric Power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Gang; Ren, Zhifeng

    2015-07-09

    The goal of this project is to demonstrate in the lab that solar thermoelectric generators (STEGs) can exceed 10% solar-to-electricity efficiency, and that STEGs can be integrated with phase-change materials (PCM) for thermal storage, providing operation beyond daylight hours. This project achieved significant progress in many tasks necessary to achieving the overall project goals. An accurate Thermoelectric Generator (TEG) model was developed, which included realistic treatment of contact materials, contact resistances, and radiative losses. In terms of fabricating physical TEGs, high-performance contact materials for skutterudite TE segments were developed, along with brazing and soldering methods to assemble segmented TEGs. Accurate measurement systems for determining device performance (in addition to just TE material performance) were built for this project and used to characterize our TEGs. On the optical components' side, a spectrally selective cermet surface was developed with high solar absorptance, low thermal emittance, and thermal stability at high temperature. A measurement technique was also developed to determine absorptance and total hemispherical emittance at high temperature, and was used to characterize the fabricated spectrally selective surfaces. In addition, a novel reflective cavity was designed to reduce radiative absorber losses and achieve high receiver efficiency at low concentration ratios. A prototype cavity demonstrated that large reductions in radiative losses are possible through this technique. For the overall concentrating STEG system, a number of devices were fabricated and tested in a custom-built test platform to characterize their efficiency.
    Additionally, testing was performed with integrated PCM thermal storage, and the storage time of the lab-scale system was evaluated. Our latest testing results showed a STEG efficiency of 9.6%, indicating promising potential for high-performance concentrated STEGs.

  9. Destination-directed, packet-switched architecture for a geostationary communications satellite network

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Shalkhauser, Mary Jo; Bobinsky, Eric A.; Soni, Nitin J.; Quintana, Jorge A.; Kim, Heechul; Wager, Paul; Vanderaar, Mark

    1993-01-01

    A major goal of the Digital Systems Technology Branch at the NASA Lewis Research Center is to identify and develop critical digital components and technologies that either enable new commercial missions or significantly enhance the performance, cost efficiency, and/or reliability of existing and planned space communications systems. NASA envisions a need for low-data-rate, interactive, direct-to-the-user communications services for data, voice, facsimile, and video conferencing. The network would provide enhanced very-small-aperture terminal (VSAT) communications services and be capable of handling data rates of 64 kbps through 2.048 Mbps in 64-kbps increments. Efforts have concentrated heavily on the space segment; however, the ground segment has been considered concurrently to ensure cost efficiency and realistic operational constraints. The focus of current space segment developments is a flexible, high-throughput, fault-tolerant onboard information-switching processor (ISP) for a geostationary satellite communications network. The Digital Systems Technology Branch is investigating both circuit and packet architectures for the ISP. Destination-directed, packet-switched architectures for geostationary communications satellites are addressed.

  10. New second order Mumford-Shah model based on Γ-convergence approximation for image processing

    NASA Astrophysics Data System (ADS)

    Duan, Jinming; Lu, Wenqi; Pan, Zhenkuan; Bai, Li

    2016-05-01

    In this paper, a second-order variational model named the Mumford-Shah total generalized variation (MSTGV) is proposed for simultaneous image denoising and segmentation, combining the original Γ-convergence-approximated Mumford-Shah model with the second-order total generalized variation (TGV). For image denoising, the proposed MSTGV can eliminate both the staircase artefact associated with the first-order total variation and the edge-blurring effect associated with quadratic H1 regularization or second-order bounded Hessian regularization. For image segmentation, the MSTGV can obtain clear and continuous object boundaries in the image. To improve computational efficiency, the implementation of the MSTGV does not directly solve its high-order nonlinear partial differential equations and instead exploits the efficient split Bregman algorithm. The algorithm benefits from the fast Fourier transform, an analytical generalized soft-thresholding equation, and Gauss-Seidel iteration. Extensive experiments are conducted to demonstrate the effectiveness and efficiency of the proposed model.
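
    For context, the Γ-convergence (Ambrosio-Tortorelli) approximation of the Mumford-Shah functional that this family of models builds on replaces the sharp edge set with a smooth edge-indicator function \(v\) (the exact weights, and the substitution of the first-order gradient penalty by TGV in the MSTGV paper, may differ):

```latex
E_\varepsilon(u, v) = \alpha \int_\Omega v^{2}\,\lvert\nabla u\rvert^{2}\,dx
  + \beta \int_\Omega \left( \varepsilon\,\lvert\nabla v\rvert^{2}
  + \frac{(1 - v)^{2}}{4\varepsilon} \right) dx
  + \frac{\lambda}{2} \int_\Omega (u - f)^{2}\,dx
```

    As \(\varepsilon \to 0\) this energy Γ-converges to the Mumford-Shah functional, with \(v \approx 0\) marking edges; the MSTGV model keeps this edge mechanism but regularizes \(u\) with second-order TGV to avoid both staircasing and edge blurring.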

  11. A fuzzy feature fusion method for auto-segmentation of gliomas with multi-modality diffusion and perfusion magnetic resonance images in radiotherapy.

    PubMed

    Guo, Lu; Wang, Ping; Sun, Ranran; Yang, Chengwen; Zhang, Ning; Guo, Yu; Feng, Yuanming

    2018-02-19

Diffusion and perfusion magnetic resonance (MR) images can provide functional information about a tumour and enable more sensitive detection of its extent. We aimed to develop a fuzzy feature fusion method for auto-segmentation of gliomas in radiotherapy planning using multi-parametric functional MR images including apparent diffusion coefficient (ADC), fractional anisotropy (FA) and relative cerebral blood volume (rCBV). For each functional modality, one histogram-based fuzzy model was created to transform the image volume into a fuzzy feature space. Based on the fuzzy fusion result of the three fuzzy feature spaces, regions with a high possibility of belonging to tumour were generated automatically. Auto-segmentations of the tumour in structural MR images were added to the final auto-segmented gross tumour volume (GTV). For evaluation, one radiation oncologist delineated GTVs for nine patients with all modalities. Comparisons between manually delineated and auto-segmented GTVs showed that the mean volume difference was 8.69% (±5.62%); the mean Dice's similarity coefficient (DSC) was 0.88 (±0.02); and the mean sensitivity and specificity of auto-segmentation were 0.87 (±0.04) and 0.98 (±0.01), respectively. High accuracy and efficiency can be achieved with the new method, which shows the potential of utilizing functional multi-parametric MR images for target definition in precision radiation treatment planning for patients with gliomas.
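The evaluation metrics quoted above (DSC, sensitivity, specificity) are straightforward to compute from binary masks. A minimal sketch, assuming the masks have been flattened to 0/1 sequences of equal length (the function names are my own):

```python
def dice_coefficient(a, b):
    """Dice similarity coefficient of two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    size_a = sum(1 for x in a if x)
    size_b = sum(1 for y in b if y)
    if size_a + size_b == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * inter / (size_a + size_b)

def sensitivity_specificity(pred, truth):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    return tp / (tp + fn), tn / (tn + fp)
```

For example, masks `[1,1,0,0]` and `[1,0,0,0]` overlap in one voxel, giving a Dice coefficient of 2/3.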

  12. Automated Segmentation of Nuclei in Breast Cancer Histopathology Images.

    PubMed

    Paramanandam, Maqlin; O'Byrne, Michael; Ghosh, Bidisha; Mammen, Joy John; Manipadam, Marie Therese; Thamburaj, Robinson; Pakrashi, Vikram

    2016-01-01

Nuclei detection in high-grade breast cancer images is challenging for image processing techniques due to heterogeneous characteristics of cancer nuclei such as enlarged and irregularly shaped nuclei, highly coarse chromatin marginalized to the nuclei periphery, and visible nucleoli. Recent reviews state that existing techniques show appreciable segmentation accuracy on breast histopathology images whose nuclei are dispersed and regular in texture and shape; however, typical cancer nuclei are often clustered and have irregular texture and shape properties. This paper proposes a novel segmentation algorithm for detecting individual nuclei from Hematoxylin and Eosin (H&E) stained breast histopathology images. This detection framework estimates a nuclei saliency map using tensor voting, followed by boundary extraction of the nuclei on the saliency map using a Loopy Belief Propagation (LBP) algorithm on a Markov Random Field (MRF). The method was tested on both whole-slide images and frames of breast cancer histopathology images. Experimental results demonstrate high segmentation performance, with strong precision, recall and Dice-coefficient rates, upon testing high-grade breast cancer images containing several thousand nuclei. In addition to the optimal performance on the highly complex images presented in this paper, this method also gave appreciable results in comparison with two recently published methods, Wienert et al. (2012) and Veta et al. (2013), which were tested using their own datasets.

  13. Automated Segmentation of Nuclei in Breast Cancer Histopathology Images

    PubMed Central

    Paramanandam, Maqlin; O’Byrne, Michael; Ghosh, Bidisha; Mammen, Joy John; Manipadam, Marie Therese; Thamburaj, Robinson; Pakrashi, Vikram

    2016-01-01

Nuclei detection in high-grade breast cancer images is challenging for image processing techniques due to heterogeneous characteristics of cancer nuclei such as enlarged and irregularly shaped nuclei, highly coarse chromatin marginalized to the nuclei periphery, and visible nucleoli. Recent reviews state that existing techniques show appreciable segmentation accuracy on breast histopathology images whose nuclei are dispersed and regular in texture and shape; however, typical cancer nuclei are often clustered and have irregular texture and shape properties. This paper proposes a novel segmentation algorithm for detecting individual nuclei from Hematoxylin and Eosin (H&E) stained breast histopathology images. This detection framework estimates a nuclei saliency map using tensor voting, followed by boundary extraction of the nuclei on the saliency map using a Loopy Belief Propagation (LBP) algorithm on a Markov Random Field (MRF). The method was tested on both whole-slide images and frames of breast cancer histopathology images. Experimental results demonstrate high segmentation performance, with strong precision, recall and Dice-coefficient rates, upon testing high-grade breast cancer images containing several thousand nuclei. In addition to the optimal performance on the highly complex images presented in this paper, this method also gave appreciable results in comparison with two recently published methods, Wienert et al. (2012) and Veta et al. (2013), which were tested using their own datasets. PMID:27649496

  14. Synthetic biology approach for plant protection using dsRNA.

    PubMed

    Niehl, Annette; Soininen, Marjukka; Poranen, Minna M; Heinlein, Manfred

    2018-02-26

Pathogens cause severe damage to cultivated plants and represent a serious threat to global food security. Emerging strategies for crop protection involve the external treatment of plants with double-stranded (ds)RNA to trigger RNA interference. However, applying this technology in greenhouses and fields depends on dsRNA quality, stability and efficient large-scale production. Using components of the bacteriophage phi6, we engineered a stable and accurate in vivo dsRNA production system in Pseudomonas syringae bacteria. Unlike other in vitro or in vivo dsRNA production systems that rely on DNA transcription and postsynthetic alignment of single-stranded RNA molecules, the phi6 system is based on the replication of dsRNA by an RNA-dependent RNA polymerase, thus allowing production of high-quality, long dsRNA molecules. The phi6 replication complex was reprogrammed to multiply dsRNA sequences homologous to tobacco mosaic virus (TMV) by replacing the coding regions within two of the three phi6 genome segments with TMV sequences and introducing these constructs into P. syringae together with the third phi6 segment, which encodes the components of the phi6 replication complex. Stable production of TMV dsRNA was achieved by combining all three phi6 genome segments and by maintaining the natural dsRNA sizes and sequence elements required for efficient replication and packaging of the segments. The produced TMV-derived dsRNAs inhibited TMV propagation when applied to infected Nicotiana benthamiana plants. The established dsRNA production system enables the broad application of dsRNA molecules as an efficient, highly flexible, nontransgenic and environmentally friendly approach for protecting crops against viruses and other pathogens. © 2018 The Authors. Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd.

  15. Metrology requirements for the serial production of ELT primary mirror segments

    NASA Astrophysics Data System (ADS)

    Rees, Paul C. T.; Gray, Caroline

    2015-08-01

The manufacture of the next generation of large astronomical telescopes, the extremely large telescopes (ELTs), requires the rapid manufacture of more than 500 hexagonal segments, each 1.44 m across, for the primary mirror of each telescope. Both leading projects, the Thirty Meter Telescope (TMT) and the European Extremely Large Telescope (E-ELT), have set highly demanding technical requirements for each fabricated segment. These technical requirements, when combined with the anticipated construction schedule for each telescope, suggest that more than one optical fabricator will be involved in the delivery of the primary mirror segments in order to meet the project schedule. For one supplier, the technical specification is challenging and requires highly consistent control of metrology in close coordination with the polishing technologies used in order to optimize production rates. For production using multiple suppliers, however the supply chain is structured, consistent control of metrology along the supply chain will be required. This requires a broader pattern of independent verification than is the case for a single supplier. This paper outlines the metrology requirements for a single supplier throughout all stages of the fabrication process. We identify and outline those areas where metrology accuracy and duration have a significant impact on production efficiency. We use the challenging ESO E-ELT technical specification as an example of our treatment, including actual process data. We further develop this model for the case of a supply chain consisting of multiple suppliers. Here, we emphasize the need to control metrology throughout the supply chain in order to optimize net production efficiency.

  16. Random fiber lasers based on artificially controlled backscattering fibers

    NASA Astrophysics Data System (ADS)

    Chen, Daru; Wang, Xiaoliang; She, Lijuan; Qiang, Zexuan; Yu, Zhangwei

    2017-10-01

The random fiber laser (RFL), a milestone in laser physics and nonlinear optics, has attracted considerable attention recently. Most previous RFLs are based on distributed feedback of Rayleigh scattering amplified through the stimulated Raman/Brillouin scattering effect in single mode fibers, which requires long (tens of kilometers) single mode fibers and high thresholds up to the watt level due to the extremely small Rayleigh scattering coefficient of the fiber. We proposed and demonstrated a half-open cavity RFL based on a segment of an artificially controlled backscattering SMF (ACB-SMF) with a length of 210 m, 310 m or 390 m. A fiber Bragg grating with a central wavelength of 1530 nm and a segment of ACB-SMF form the half-open cavity. The proposed RFL achieves thresholds of 25 mW, 30 mW and 30 mW, respectively. Random lasing at a wavelength of 1530 nm with an extinction ratio of 50 dB is achieved when a 5 m segment of EDF is pumped by a 980 nm LD in the RFL. Another half-open cavity RFL, based on a segment of an artificially controlled backscattering EDF (ACB-EDF), is also demonstrated without an ACB-SMF. The 3 m ACB-EDF is fabricated using a femtosecond laser with a pulse energy of 0.34 mJ, which introduces about 50 reflectors in the EDF. Random lasing at 1530 nm is achieved with an output power of 7.5 mW and an efficiency of 1.88%. Two novel RFLs with much shorter cavities have thus been achieved, with low thresholds and high efficiency.

  17. The design of a linear L-band high power amplifier for mobile communication satellites

    NASA Technical Reports Server (NTRS)

    Whittaker, N.; Brassard, G.; Li, E.; Goux, P.

    1990-01-01

A linear L-band solid state high power amplifier designed for the space segment of the Mobile Satellite (MSAT) mobile communication system is described. The amplifier is capable of producing 35 watts of RF power with a multitone signal, at an efficiency of 25 percent and with intermodulation products better than 16 dB below the carrier.

  18. A hairy-leaf gene, BLANKET LEAF, of wild Oryza nivara increases photosynthetic water use efficiency in rice.

    PubMed

    Hamaoka, Norimitsu; Yasui, Hideshi; Yamagata, Yoshiyuki; Inoue, Yoko; Furuya, Naruto; Araki, Takuya; Ueno, Osamu; Yoshimura, Atsushi

    2017-12-01

High water use efficiency is essential to water-saving cropping. Morphological traits that affect photosynthetic water use efficiency are not well known. We examined whether leaf hairiness improves photosynthetic water use efficiency in rice. A chromosome segment introgression line (IL-hairy) of wild Oryza nivara (Acc. IRGC105715) with the genetic background of Oryza sativa cultivar 'IR24' had high leaf pubescence (hair). The leaf hairs developed along small vascular bundles. Linkage analysis in BC5F2 and BC5F3 populations showed that the trait was governed by a single gene, designated BLANKET LEAF (BKL), on chromosome 6. IL-hairy plants had a warmer leaf surface in sunlight, probably due to increased boundary layer resistance. They had a lower transpiration rate under moderate and high light intensities, resulting in higher photosynthetic water use efficiency. Introgression of BKL on chromosome 6 from O. nivara improved photosynthetic water use efficiency in the genetic background of IR24.

  19. High Efficiency Thermoelectric Radioisotope Power Systems

    NASA Technical Reports Server (NTRS)

    El-Genk, Mohamed; Saber, Hamed; Caillat, Thierry

    2004-01-01

The work performed, and whose results are presented in this report, was a joint effort between the University of New Mexico's Institute for Space and Nuclear Power Studies (ISNPS) and the Jet Propulsion Laboratory (JPL), California Institute of Technology. In addition to the development, design, and fabrication of skutterudites and skutterudite-based segmented unicouples, this effort included conducting performance tests of these unicouples for hundreds of hours to verify theoretical predictions of the conversion efficiency. The performance predictions of these unicouples are obtained using 1-D and 3-D models developed for that purpose and for estimating the actual performance and side heat losses in the tests conducted at ISNPS. In addition to the performance tests, the development of the 1-D and 3-D models and the development of Advanced Radioisotope Power Systems for a Beginning-Of-Mission (BOM) power of 108 We were carried out at ISNPS. The materials synthesis and fabrication of the unicouples were carried out at JPL. The research conducted at ISNPS is documented in chapters 2-5, and that conducted at JPL is documented in chapter 5. An important consideration in the design and optimization of segmented thermoelectric unicouples (STUs) is determining the relative lengths, cross-section areas, and interfacial temperatures of the segments of the different materials in the n- and p-legs. These variables are determined using a genetic algorithm (GA) in conjunction with the one-dimensional analytical model of STUs that is developed in chapter 2. Results indicated that when optimized for maximum conversion efficiency, the interfacial temperatures between the various segments in a STU are close to those at the intersections of the figure-of-merit (FOM, ZT) curves of the thermoelectric materials of the adjacent segments.
When optimizing the STUs for maximum electrical power density, however, the interfacial temperatures are different from those at the intersections of the ZT curves, but close to those at the intersections of the characteristic power (CP) curves of the thermoelectric materials of the adjacent segments (CP = T²Zk, with units of W/m). Results also showed that the number of segments in the n- and p-legs of STUs optimized for maximum power density is generally smaller than when the same unicouples are optimized for maximum efficiency. These results are obtained using the 1-D optimization model of STUs that is detailed in chapter 2. A three-dimensional model of STUs is developed and incorporated into the ANSYS commercial software (chapter 3). The governing equations are solved, subject to the prescribed
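A genetic algorithm of the kind described can be sketched in a few lines. The version below is purely illustrative: it evolves a single scalar design variable (standing in for an interfacial temperature) against a toy fitness whose peak is placed arbitrarily at 700 K; it uses none of the report's ZT data or its 1-D model, and all names and parameters are invented:

```python
import random

def ga_optimize(fitness, lower, upper, pop_size=30, generations=60, seed=1):
    """Minimal real-coded genetic algorithm maximizing `fitness` on
    [lower, upper]: truncation selection with elitism, arithmetic
    crossover, and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lower, upper) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)                  # arithmetic crossover
            child += rng.gauss(0.0, 0.02 * (upper - lower))  # mutation
            children.append(min(upper, max(lower, child)))
        pop = parents + children                   # elitism: parents survive
    return max(pop, key=fitness)

# Toy fitness: conversion efficiency modeled as a parabola peaking at
# an assumed 700 K interfacial temperature (illustrative only).
best = ga_optimize(lambda t: -(t - 700.0) ** 2, 300.0, 1000.0)
```

With the fixed seed, `best` converges to the neighborhood of the 700 K optimum; a real STU optimization would evolve a vector of segment lengths, areas, and interface temperatures instead.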

  20. Effective user guidance in online interactive semantic segmentation

    NASA Astrophysics Data System (ADS)

    Petersen, Jens; Bendszus, Martin; Debus, Jürgen; Heiland, Sabine; Maier-Hein, Klaus H.

    2017-03-01

With the recent success of machine learning based solutions for automatic image parsing, the availability of reference image annotations for algorithm training is one of the major bottlenecks in medical image segmentation. We are interested in interactive semantic segmentation methods that can be used in an online fashion to generate expert segmentations. These can be used to train automated segmentation techniques or, from an application perspective, for quick and accurate tumor progression monitoring. Using simulated user interactions in an MRI glioblastoma segmentation task, we show that if the user possesses knowledge of the correct segmentation, it is significantly (p <= 0.009) better to present the data and current segmentation to the user in such a manner that they can easily identify falsely classified regions, compared to guiding the user to regions where the classifier exhibits high uncertainty, resulting in differences in mean Dice scores between +0.070 (whole tumor) and +0.136 (tumor core) after 20 iterations. The annotation process should cover all classes equally, which results in a significant (p <= 0.002) improvement compared to completely random annotations anywhere in falsely classified regions for small tumor regions such as the necrotic tumor core (mean Dice +0.151 after 20 it.) and non-enhancing abnormalities (mean Dice +0.069 after 20 it.). These findings provide important insights for the development of efficient interactive segmentation systems and user interfaces.

  1. A human visual based binarization technique for histological images

    NASA Astrophysics Data System (ADS)

    Shreyas, Kamath K. M.; Rajendran, Rahul; Panetta, Karen; Agaian, Sos

    2017-05-01

In the field of vision-based systems for object detection and classification, thresholding is a key pre-processing step. Thresholding is a well-known technique for image segmentation. Segmentation of medical images, such as Computed Axial Tomography (CAT), Magnetic Resonance Imaging (MRI), X-Ray, Phase Contrast Microscopy, and Histological images, presents problems such as high variability in terms of human anatomy and variation in modalities. Recent advances made in the computer-aided diagnosis of histological images help facilitate the detection and classification of diseases. Since most pathology diagnosis depends on the expertise and ability of the pathologist, there is clearly a need for an automated assessment system. Histological images are stained to a specific color to differentiate each component in the tissue. Segmentation and analysis of such images is problematic, as they present high variability in terms of color and cell clusters. This paper presents an adaptive thresholding technique that aims at segmenting cell structures from Haematoxylin and Eosin stained images. The thresholded result can further be used by pathologists to perform effective diagnosis. The effectiveness of the proposed method is analyzed by visually comparing the results to state-of-the-art thresholding methods such as Otsu, Niblack, Sauvola, Bernsen, and Wolf. Computer simulations demonstrate the efficiency of the proposed method in segmenting critical information.
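Of the reference methods named above, Otsu's global method is the simplest to state: choose the threshold that maximizes the between-class variance of the grayscale histogram. A self-contained sketch (operating on a 256-bin histogram rather than an image, for brevity; not the paper's adaptive technique):

```python
def otsu_threshold(hist):
    """Otsu's method: return the threshold t (pixels <= t are 'background')
    that maximizes between-class variance for a 256-bin histogram."""
    total = sum(hist)
    total_mean = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0        # background pixel count so far
    sum0 = 0.0    # background intensity sum so far
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break                      # foreground class would be empty
        sum0 += t * hist[t]
        m0 = sum0 / w0                 # background mean
        m1 = (total_mean - sum0) / w1  # foreground mean
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

For a cleanly bimodal histogram with modes at 50 and 200, the returned threshold separates the two modes.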

  2. Brain blood vessel segmentation using line-shaped profiles

    NASA Astrophysics Data System (ADS)

    Babin, Danilo; Pižurica, Aleksandra; De Vylder, Jonas; Vansteenkiste, Ewout; Philips, Wilfried

    2013-11-01

Segmentation of cerebral blood vessels is of great importance in diagnostic and clinical applications, especially for the embolization of cerebral aneurysms and arteriovenous malformations (AVMs). In order to perform embolization of an AVM, structural and geometric information about the blood vessels from 3D images is of utmost importance. For this reason, in-depth segmentation of cerebral blood vessels is usually done as a fusion of different segmentation techniques, often requiring extensive user interaction. In this paper we introduce the idea of line-shaped profiling with an application to brain blood vessel and AVM segmentation, efficient both in terms of resolving details and in terms of computation time. Our method takes into account both the immediate and the wider neighbourhood of the processed pixel, which makes it efficient for segmenting large blood vessel tree structures as well as the fine structures of AVMs. Another advantage of our method is that it requires the selection of only one parameter to perform segmentation, demanding very little user interaction.

  3. Automatic graph-cut based segmentation of bones from knee magnetic resonance images for osteoarthritis research.

    PubMed

    Ababneh, Sufyan Y; Prescott, Jeff W; Gurcan, Metin N

    2011-08-01

    In this paper, a new, fully automated, content-based system is proposed for knee bone segmentation from magnetic resonance images (MRI). The purpose of the bone segmentation is to support the discovery and characterization of imaging biomarkers for the incidence and progression of osteoarthritis, a debilitating joint disease, which affects a large portion of the aging population. The segmentation algorithm includes a novel content-based, two-pass disjoint block discovery mechanism, which is designed to support automation, segmentation initialization, and post-processing. The block discovery is achieved by classifying the image content to bone and background blocks according to their similarity to the categories in the training data collected from typical bone structures. The classified blocks are then used to design an efficient graph-cut based segmentation algorithm. This algorithm requires constructing a graph using image pixel data followed by applying a maximum-flow algorithm which generates a minimum graph-cut that corresponds to an initial image segmentation. Content-based refinements and morphological operations are then applied to obtain the final segmentation. The proposed segmentation technique does not require any user interaction and can distinguish between bone and highly similar adjacent structures, such as fat tissues with high accuracy. The performance of the proposed system is evaluated by testing it on 376 MR images from the Osteoarthritis Initiative (OAI) database. This database included a selection of single images containing the femur and tibia from 200 subjects with varying levels of osteoarthritis severity. Additionally, a full three-dimensional segmentation of the bones from ten subjects with 14 slices each, and synthetic images with background having intensity and spatial characteristics similar to those of bone are used to assess the robustness and consistency of the developed algorithm. 
The results show an automatic bone detection rate of 0.99 and an average segmentation accuracy of 0.95 using the Dice similarity index. Copyright © 2011 Elsevier B.V. All rights reserved.
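The graph-cut step in pipelines like this one rests on the max-flow/min-cut equivalence: the minimum cut separating object seeds (source) from background seeds (sink) equals the maximum source-to-sink flow. The sketch below is a generic Edmonds-Karp max-flow on a dict-of-dicts capacity graph, not the paper's actual solver (pixel-grid implementations typically use a faster specialized algorithm):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow; by the max-flow/min-cut theorem the
    returned value equals the minimum s-t cut capacity."""
    # Build residual capacities, adding zero-capacity reverse edges.
    residual = {u: dict(adj) for u, adj in capacity.items()}
    for u, adj in capacity.items():
        for v in adj:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow                 # no augmenting path: done
        # Find the bottleneck along the path and push flow through it.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck
```

In a segmentation graph, terminal edge weights encode how well each pixel matches the bone/background models and neighbor edges penalize label changes; the minimum cut is the initial segmentation.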

  4. A practical model for the train-set utilization: The case of Beijing-Tianjin passenger dedicated line in China

    PubMed Central

    Li, Xiaomeng; Yang, Zhuo

    2017-01-01

As a sustainable transportation mode, high-speed railway (HSR) has become an efficient way to meet huge travel demand. However, due to high acquisition and maintenance costs, it is impossible to build enough infrastructure and purchase enough train-sets. Great efforts are required to improve the transport capability of HSR. The utilization efficiency of train-sets (the carrying tools of HSR) is one of the most important factors in the transport capacity of HSR. In order to enhance the utilization efficiency of the train-sets, this paper proposes a train-set circulation optimization model to minimize the total connection time. An innovative two-stage approach comprising segment generation and segment combination was designed to solve this model. In order to verify the feasibility of the proposed approach, an experiment was carried out on the Beijing-Tianjin passenger dedicated line to fulfill a 174-trip train diagram. The model results showed that, compared with the traditional Ant Colony Algorithm (ACA), the utilization efficiency of the train-sets can be increased from 43.4% (ACA) to 46.9% (Two-Stage), and one train-set can be saved while fulfilling the same transportation tasks. The approach proposed in this study is faster and more stable than traditional ones; using it, HSR staff can draw up the train-set circulation plan more quickly, and the utilization efficiency of the HSR system is also improved. PMID:28489933
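The segment-generation idea can be illustrated with a much-simplified greedy chaining of trips into train-set duties. This is an illustrative toy, not the paper's two-stage model: it ignores stations and maintenance, and the times, turnaround constant, and function name are invented:

```python
def assign_train_sets(trips, turnaround=20):
    """Chain trips (departure_min, arrival_min) into duties: a train-set
    may take a trip departing at least `turnaround` minutes after its
    previous arrival. Picking the ready set with the latest arrival
    minimizes the idle (connection) time added by each assignment.
    Returns (number of train-sets used, total connection minutes)."""
    free = []          # arrival times at which each set becomes idle
    n_sets = 0
    total_idle = 0
    for dep, arr in sorted(trips):          # process trips by departure
        usable = [a for a in free if a + turnaround <= dep]
        if usable:
            a = max(usable)                 # latest arrival: least waiting
            free.remove(a)
            total_idle += dep - a
        else:
            n_sets += 1                     # no set ready: commission one
        free.append(arr)                    # this set is next free at arr
    return n_sets, total_idle
```

For three toy trips, two train-sets suffice and the single reuse contributes 30 connection minutes; the paper's model instead optimizes such chains globally.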

  5. Analysis of simulated angiographic procedures. Part 2: extracting efficiency data from audio and video recordings.

    PubMed

    Duncan, James R; Kline, Benjamin; Glaiberman, Craig B

    2007-04-01

    To create and test methods of extracting efficiency data from recordings of simulated renal stent procedures. Task analysis was performed and used to design a standardized testing protocol. Five experienced angiographers then performed 16 renal stent simulations using the Simbionix AngioMentor angiographic simulator. Audio and video recordings of these simulations were captured from multiple vantage points. The recordings were synchronized and compiled. A series of efficiency metrics (procedure time, contrast volume, and tool use) were then extracted from the recordings. The intraobserver and interobserver variability of these individual metrics was also assessed. The metrics were converted to costs and aggregated to determine the fixed and variable costs of a procedure segment or the entire procedure. Task analysis and pilot testing led to a standardized testing protocol suitable for performance assessment. Task analysis also identified seven checkpoints that divided the renal stent simulations into six segments. Efficiency metrics for these different segments were extracted from the recordings and showed excellent intra- and interobserver correlations. Analysis of the individual and aggregated efficiency metrics demonstrated large differences between segments as well as between different angiographers. These differences persisted when efficiency was expressed as either total or variable costs. Task analysis facilitated both protocol development and data analysis. Efficiency metrics were readily extracted from recordings of simulated procedures. Aggregating the metrics and dividing the procedure into segments revealed potential insights that could be easily overlooked because the simulator currently does not attempt to aggregate the metrics and only provides data derived from the entire procedure. The data indicate that analysis of simulated angiographic procedures will be a powerful method of assessing performance in interventional radiology.

  6. An Efficient Algorithm for Server Thermal Fault Diagnosis Based on Infrared Image

    NASA Astrophysics Data System (ADS)

    Liu, Hang; Xie, Ting; Ran, Jian; Gao, Shan

    2017-10-01

It is essential for a data center to maintain server security and stability. Long-time overload operation or high room temperature may cause service disruption or even a server crash, which would result in great economic loss for the business. Currently, the methods used to avoid server outages are monitoring and forecasting. Thermal cameras can provide fine texture information for monitoring and intelligent thermal management in a large data center. This paper presents an efficient method for server thermal fault monitoring and diagnosis based on infrared images. Initially, the thermal distribution of the server is standardized and the regions of interest in the image are segmented manually. Then texture features, Hu moment features, and a modified entropy feature are extracted from the segmented regions. These characteristics are applied to analyze and classify thermal faults, enabling efficient energy-saving thermal management decisions such as job migration. For larger feature spaces, principal component analysis is employed to reduce the feature dimensions and guarantee high processing speed without losing fault feature information. Finally, the different feature vectors are taken as input for SVM training, and thermal fault diagnosis is performed with the optimized SVM classifier. This method supports suggestions for optimizing data center management: it can improve air conditioning efficiency and reduce the energy consumption of the data center. The experimental results show that the maximum detection accuracy is 81.5%.
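As one example of the features mentioned, a plain Shannon entropy of a segmented grayscale region can be computed directly from its histogram. The paper's "modified entropy" is not specified here, so this sketch shows only the standard definition, with an invented function name:

```python
import math

def region_entropy(pixels, bins=256):
    """Shannon entropy (bits) of a grayscale region given as a list of
    integer intensities in [0, bins). A region dominated by one intensity
    has low entropy; a varied region has high entropy, so the value can
    serve as one element of a thermal-fault feature vector."""
    hist = [0] * bins
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((h / n) * math.log2(h / n) for h in hist if h > 0)
```

A region of a single intensity scores 0 bits, while a region split evenly between two intensities scores exactly 1 bit.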

  7. Synthesis, morphology and dynamics of polyureas and their lithium ionomers

    NASA Astrophysics Data System (ADS)

    Chuayprakong, Sunanta

Electrolytes currently used in commercial lithium ion batteries have led to leakage and safety issues. Solvent-free solid polymer electrolytes (SPEs) offering high energy density are promising materials for lithium battery applications. SPEs require a high modulus to separate the electrodes and suppress lithium dendrite growth. Microphase separation of the hard segments in amorphous polyureas (PUs) yields materials with higher moduli than typical low glass transition temperature (Tg) polymers. In this dissertation, several families of solution-polymerized polyether-based PU ionomers were synthesized and their thermal, morphological and dynamic properties characterized as a function of chemical composition. In the initial phase of this investigation, polyethylene oxide (PEO) diamines (with molecular weights of 200, 600, 1050, 2000, 3000 and 6000 g/mol) were polymerized with 4,4'-methylene diphenyl diisocyanate (MDI). PUs with 200 and 600 g/mol PEO soft segments are amorphous and single phase. The amorphous PU having 1050 g/mol PEO segments exhibits a small degree of phase separation, as demonstrated by X-ray scattering. PUs with 2000, 3000 and 6000 g/mol PEO soft segments are semicrystalline, and their melting points and degrees of crystallinity are lower than those of the precursor PEO diamines due to their attachment to rigid hard segments. Even though polypropylene oxide (PPO) does not dissolve cations as efficiently as PEO, PPO is not crystallizable and was chosen to create a second family of amorphous PUs. PPO-containing diamines (Jeff400 (MW = 400 g/mol) and Jeff2000 (MW = 2000 g/mol)) and MDI were chosen as the neutral soft segment and the hard segment, respectively. 2,5-diaminobenzene sulfonate was successfully synthesized and used for preparing ionomers. The amount of ionic species in these ionomers was varied and quantified using 1H-NMR. Single Tgs were observed and they increased with increasing ionic content.
No X-ray scattering peaks corresponding to microphase separation of hard and soft segments were detected, nor were ordered hydrogen-bonded carbonyl bands seen in FTIR spectra, demonstrating that the Jeff400 PUs are single phase. Using dielectric relaxation spectroscopy (DRS), segmental relaxation temperatures were also found to increase with increasing ionic species content. Increasing the number of ionic groups increases the hard segment content, which results in higher DSC Tgs and slower fmax values for the segmental relaxation processes. For the non-ionic and all of the ionic Jeff2000 PU samples that contain some nonionic soft segments, low temperature Tgs were observed that arise from microphase-separated soft phases. X-ray scattering peaks related to microphase separation and ordered hydrogen-bonded carbonyl bands were observed, reinforcing the conclusion of hard/soft segment segregation. The DRS segmental relaxation is associated with soft phase relaxation, with some of the ion dipoles participating in this process for the ionic samples. The ionomers could not be dialyzed due to water insolubility, but were purified by multiple precipitations in deionized water. Nevertheless, the findings suggest that the observed conductivity primarily arises from ionic impurities. A third family of PU ionomers was synthesized using an amorphous polypropylene oxide-b-polyethylene oxide-b-polypropylene oxide diamine (ED900, MW = 900 g/mol, 68% EO) and 2,5-diaminobenzene sulfonate. Hexamethylene diisocyanate was utilized as the hard segment, as its high packing efficiency is known to facilitate microphase separation. The non-ionic ED900 PU and its ionomers with various ion contents were successfully synthesized. Low Tgs due to segregation of soft segments, X-ray scattering peaks related to microphase separation between segments, and ordered hydrogen-bonded carbonyl bands were detected. Tapping-mode atomic force microscopy was also used to explore the morphology of these microphase-separated materials.
    DRS segmental relaxations are associated with the soft phase. These materials were extensively dialyzed, and their low conductivities suggest that the lithium ions are primarily trapped in hard domains.

  8. Reproducibility of myelin content-based human habenula segmentation at 3 Tesla.

    PubMed

    Kim, Joo-Won; Naidich, Thomas P; Joseph, Joshmi; Nair, Divya; Glasser, Matthew F; O'halloran, Rafael; Doucet, Gaelle E; Lee, Won Hee; Krinsky, Hannah; Paulino, Alejandro; Glahn, David C; Anticevic, Alan; Frangou, Sophia; Xu, Junqian

    2018-03-26

    In vivo morphological study of the human habenula, a pair of small epithalamic nuclei adjacent to the dorsomedial thalamus, has recently gained significant interest for its role in reward and aversion processing. However, segmenting the habenula from in vivo magnetic resonance imaging (MRI) is challenging due to the habenula's small size and low anatomical contrast. Although manual and semi-automated habenula segmentation methods have been reported, the test-retest reproducibility of the segmented habenula volume and the consistency of the boundaries of habenula segmentation have not been investigated. In this study, we evaluated the intra- and inter-site reproducibility of in vivo human habenula segmentation from 3T MRI (0.7-0.8 mm isotropic resolution) using our previously proposed semi-automated myelin contrast-based method and its fully-automated version, as well as a previously published manual geometry-based method. The habenula segmentation using our semi-automated method showed consistent boundary definition (high Dice coefficient, low mean distance, and moderate Hausdorff distance) and reproducible volume measurement (low coefficient of variation). Furthermore, the habenula boundary in our semi-automated segmentation from 3T MRI agreed well with that in the manual segmentation from 7T MRI (0.5 mm isotropic resolution) of the same subjects. Overall, our proposed semi-automated habenula segmentation showed reliable and reproducible habenula localization, while its fully-automated version offers an efficient way for large sample analysis. © 2018 Wiley Periodicals, Inc.
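    The reproducibility measures named above, the Dice coefficient for boundary agreement and the coefficient of variation (CV) for repeated volume measurements, can be sketched in a few lines. This is an illustrative example with toy masks, not the authors' pipeline:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary masks (1.0 = identical)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def coefficient_of_variation(volumes):
    """CV of repeated volume measurements: sample std / mean."""
    v = np.asarray(volumes, dtype=float)
    return v.std(ddof=1) / v.mean()

# Two toy 4x4 "habenula" masks from repeated scans (hypothetical data)
scan1 = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
scan2 = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(dice_coefficient(scan1, scan2))          # 2*3/(4+3) ≈ 0.857
print(coefficient_of_variation([4.0, 3.0]))    # inter-scan volume CV
```

    A high Dice value with a low CV, as reported above, indicates both stable boundaries and stable volumes across test-retest scans.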

  9. Mobile satellite communications technology - A summary of NASA activities

    NASA Technical Reports Server (NTRS)

    Dutzi, E. J.; Knouse, G. H.

    1986-01-01

    Studies in recent years indicate that future high-capacity mobile satellite systems are viable only if certain high-risk enabling technologies are developed. Accordingly, NASA has structured an advanced technology development program aimed at efficient utilization of orbit, spectrum, and power. Over the last two years, studies have concentrated on developing concepts and identifying cost drivers and other issues associated with the major technical areas of emphasis: vehicle antennas, speech compression, bandwidth-efficient digital modems, network architecture, mobile satellite channel characterization, and selected space segment technology. The program is now entering the next phase - breadboarding, development, and field experimentation.

  10. A fast 3D region growing approach for CT angiography applications

    NASA Astrophysics Data System (ADS)

    Ye, Zhen; Lin, Zhongmin; Lu, Cheng-chang

    2004-05-01

    Region growing is one of the most popular methods for low-level image segmentation. Much research on region growing has focused on the definition of the homogeneity criterion or the growing and merging criteria. However, one disadvantage of conventional region growing is redundancy: it requires a large amount of memory, and its computational efficiency is very low, especially for 3D images. To overcome this problem, a non-recursive, single-pass 3D region growing algorithm named SymRG is implemented and successfully applied to 3D CT angiography (CTA) for vessel segmentation and bone removal. The method consists of three steps: segmenting one-dimensional regions within each row; merging regions across adjacent rows to obtain the region segmentation of each slice; and merging regions across adjacent slices to obtain the final region segmentation of the 3D image. To improve segmentation speed for very large 3D CTA volumes, the algorithm is applied repeatedly to newly updated local cubes. The next cube can be estimated by checking isolated segmented regions on all six faces of the current local cube. This local, non-recursive 3D region-growing algorithm is memory- and computation-efficient. Clinical testing of this algorithm on brain CTA shows that the technique can effectively remove the whole skull and most of the bones of the skull base, and reveal the cerebral vascular structures clearly.
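    The single-pass, row-wise merging idea can be illustrated in 2D (an illustrative reconstruction, not the authors' implementation; SymRG extends the same run-merging from rows to slices and volumes, and the function names here are hypothetical):

```python
# 1) find homogeneous runs in each row, 2) merge runs that touch
# vertically, using union-find to track connected regions.

def find_runs(row, predicate):
    """Return (start, end) runs of pixels satisfying the predicate."""
    runs, start = [], None
    for i, v in enumerate(row):
        if predicate(v) and start is None:
            start = i
        elif not predicate(v) and start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(row) - 1))
    return runs

def segment(image, predicate):
    """Label connected foreground regions in one pass over the rows."""
    parent = {}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)

    prev = []  # runs of the previous row as (start, end, run_id)
    labels, next_id = {}, 0
    for r, row in enumerate(image):
        cur = []
        for s, e in find_runs(row, predicate):
            rid, next_id = next_id, next_id + 1
            parent[rid] = rid
            # merge with every overlapping run in the row above
            for ps, pe, pid in prev:
                if s <= pe and ps <= e:
                    union(rid, pid)
            cur.append((s, e, rid))
        prev = cur
        for s, e, rid in cur:
            labels[(r, s, e)] = rid
    return {k: find(v) for k, v in labels.items()}

img = [[0, 1, 1, 0],
       [1, 1, 0, 1],
       [0, 0, 0, 1]]
regions = segment(img, lambda v: v == 1)
print(len(set(regions.values())))  # 2 connected regions
```

    Because each run is visited once and merges happen as rows are scanned, no recursion or per-pixel seed queue is needed, which is the memory advantage the abstract describes.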

  11. Regulating Molecular Aggregations of Polymers via Ternary Copolymerization Strategy for Efficient Solar Cells.

    PubMed

    Wang, Qian; Wang, Yingying; Zheng, Wei; Shahid, Bilal; Qiu, Meng; Wang, Di; Zhu, Dangqiang; Yang, Renqiang

    2017-09-20

    For many high-performance photovoltaic materials in polymer solar cells (PSCs), the active layers usually need to be spin-coated at high temperature due to the strong intermolecular aggregation of donor polymers, which is unfavorable for device repeatability and large-scale PSC printing. In this work, we adopted a ternary copolymerization strategy to regulate polymer solubility and molecular aggregation. A series of D-A1-D-A2 random polymers based on different acceptors, the strong electron-withdrawing ester-substituted thieno[3,4-b]thiophene (TT-E) unit, and the highly planar dithiazole-linked TT-E (DTzTT), were constructed to realize the regulation of molecular aggregation and the simplification of device fabrication. The results showed that as the relative proportion of the TT-E segment in the backbone increased, the absorption evidently red-shifted with gradually decreased aggregation in solution, eventually leading to active layers that can be fabricated at low temperature. Furthermore, due to the excellent phase separation and low recombination, the optimized solar cells based on the terpolymer P1 containing 30% of the TT-E segment exhibit a high power conversion efficiency (PCE) of 9.09% with a significantly enhanced fill factor of up to 72.86%. Encouragingly, the photovoltaic performance is insensitive to the fabrication temperature of the active layer, and it could still maintain a high PCE of 8.82%, even at room temperature. This work not only develops highly efficient photovoltaic materials for low-temperature-processed PSCs through a ternary copolymerization strategy but also preliminarily establishes the relationship between aggregation and photovoltaic performance.

  12. Exact analytical modeling of magnetic vector potential in surface inset permanent magnet DC machines considering magnet segmentation

    NASA Astrophysics Data System (ADS)

    Jabbari, Ali

    2018-01-01

    Surface inset permanent magnet DC machines can be used as an alternative in automation systems due to their high efficiency and robustness. Magnet segmentation is a common technique for mitigating pulsating torque components in permanent magnet machines. An accurate computation of the air-gap magnetic field distribution is necessary in order to calculate machine performance. An exact analytical method for magnetic vector potential calculation in surface inset permanent magnet machines considering magnet segmentation is proposed in this paper. The analytical method is based on the resolution of the Laplace and Poisson equations, as well as Maxwell's equations, in polar coordinates using the sub-domain method. One of the main contributions of the paper is to derive an expression for the magnetic vector potential in the segmented PM region by using hyperbolic functions. The developed method is applied to the performance computation of two prototype surface inset segmented-magnet motors under open-circuit and on-load conditions. The results of these models are validated against the finite element method (FEM).
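    For reference, the standard sub-domain formulation solves, in polar coordinates, Laplace's equation for the z-component of the vector potential A in the air gap and Poisson's equation in the magnet region. The forms below are the common textbook ones (assuming the magnetization components M_r, M_theta do not vary with r); the paper's own segmented-magnet expressions are more elaborate:

```latex
% Air-gap subdomain: Laplace's equation
\frac{\partial^2 A}{\partial r^2}
  + \frac{1}{r}\frac{\partial A}{\partial r}
  + \frac{1}{r^2}\frac{\partial^2 A}{\partial \theta^2} = 0

% Magnet subdomain: Poisson's equation with the magnetization as source
\frac{\partial^2 A}{\partial r^2}
  + \frac{1}{r}\frac{\partial A}{\partial r}
  + \frac{1}{r^2}\frac{\partial^2 A}{\partial \theta^2}
  = -\frac{\mu_0}{r}\left(M_\theta - \frac{\partial M_r}{\partial \theta}\right)

% Separation of variables gives radial terms a_n r^n + b_n r^{-n};
% with the substitution r = e^t these combine into \cosh(nt) and
% \sinh(nt), which is presumably where the hyperbolic-function
% formulation of the segmented PM region comes from.
```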

  13. Adaptive distance metric learning for diffusion tensor image segmentation.

    PubMed

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.
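    The "geometry and orientation distances derived from diffusion tensors" can be illustrated with one simple, hypothetical choice: compare eigenvalues for tensor shape and principal eigenvectors for fiber orientation. This is a toy stand-in, not the paper's exact formulation:

```python
import numpy as np

def tensor_distances(D1, D2):
    """Toy geometry/orientation distances between two diffusion tensors."""
    w1, v1 = np.linalg.eigh(D1)   # eigenvalues ascending, vectors in columns
    w2, v2 = np.linalg.eigh(D2)
    geometry = np.linalg.norm(w1 - w2)            # shape (eigenvalue) difference
    # angle between principal eigenvectors, made sign-invariant
    c = abs(np.dot(v1[:, -1], v2[:, -1]))
    orientation = np.arccos(np.clip(c, 0.0, 1.0))
    return geometry, orientation

# Identical shape but rotated 90 degrees: geometry ~0, orientation pi/2
D1 = np.diag([3.0, 1.0, 1.0])
D2 = np.diag([1.0, 1.0, 3.0])
g, o = tensor_distances(D1, D2)
print(g, o)
```

    The paper's contribution is to learn how such component distances are weighted per segmentation task, rather than fixing the combination in advance.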

  14. Adaptive Distance Metric Learning for Diffusion Tensor Image Segmentation

    PubMed Central

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C. N.; Chu, Winnie C. W.

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework. PMID:24651858

  15. Preparation and Physical Properties of Segmented Thermoelectric YBa2Cu3O7-x -Ca3Co4O9 Ceramics

    NASA Astrophysics Data System (ADS)

    Wannasut, P.; Keawprak, N.; Jaiban, P.; Watcharapasorn, A.

    2018-01-01

    Segmented thermoelectric ceramics are now well known for their high conversion efficiency and are currently being investigated in both basic and applied energy research. In this work, the successful preparation of a segmented thermoelectric YBa2Cu3O7-x -Ca3Co4O9 (YBCO-CCO) ceramic by the hot pressing method and a study of its physical properties are presented. Under the optimum hot pressing conditions of 800 °C, a 1-hour holding time, and a 1-ton load, the segmented YBCO-CCO sample showed two strongly connected layers with a relative density of about 96%. The X-ray diffraction (XRD) patterns indicated that each segment showed a pure phase corresponding to its respective composition. Scanning electron microscopy (SEM) results confirmed the sharp interface and good adhesion between the YBCO and CCO layers. Although chemical analysis indicated only limited inter-layer diffusion near the interface, some elemental diffusion at the boundary was expected to be the source of this strong bonding.

  16. Deep convolutional neural network and 3D deformable approach for tissue segmentation in musculoskeletal magnetic resonance imaging.

    PubMed

    Liu, Fang; Zhou, Zhaoye; Jang, Hyungseok; Samsonov, Alexey; Zhao, Gengyan; Kijowski, Richard

    2018-04-01

    To describe and evaluate a new fully automated musculoskeletal tissue segmentation method using a deep convolutional neural network (CNN) and three-dimensional (3D) simplex deformable modeling to improve the accuracy and efficiency of cartilage and bone segmentation within the knee joint. A fully automated segmentation pipeline was built by combining a semantic segmentation CNN and 3D simplex deformable modeling. A CNN technique called SegNet was applied as the core of the segmentation method to perform high-resolution pixel-wise multi-class tissue classification. The 3D simplex deformable modeling refined the output from SegNet to preserve the overall shape and maintain a desirably smooth surface for musculoskeletal structures. The fully automated segmentation method was tested using a publicly available knee image data set to compare with currently used state-of-the-art segmentation methods. The fully automated method was also evaluated on two different data sets, which include morphological and quantitative MR images with different tissue contrasts. The proposed fully automated segmentation method provided good segmentation performance, with segmentation accuracy superior to most state-of-the-art methods on the publicly available knee image data set. The method also demonstrated versatile segmentation performance on both morphological and quantitative musculoskeletal MR images with different tissue contrasts and spatial resolutions. The study demonstrates that the combined CNN and 3D deformable modeling approach is useful for performing rapid and accurate cartilage and bone segmentation within the knee joint. The CNN has promising potential applications in musculoskeletal imaging. Magn Reson Med 79:2379-2391, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  17. Automated Slide Scanning and Segmentation in Fluorescently-labeled Tissues Using a Widefield High-content Analysis System.

    PubMed

    Poon, Candice C; Ebacher, Vincent; Liu, Katherine; Yong, Voon Wee; Kelly, John James Patrick

    2018-05-03

    Automated slide scanning and segmentation of fluorescently-labeled tissues is the most efficient way to analyze whole slides or large tissue sections. Unfortunately, many researchers spend large amounts of time and resources developing and optimizing workflows that are only relevant to their own experiments. In this article, we describe a protocol that can be used by those with access to a widefield high-content analysis system (WHCAS) to image any slide-mounted tissue, with options for customization within pre-built modules found in the associated software. Although the WHCAS was not originally intended for slide scanning, the steps detailed in this article make it possible to acquire slide scanning images that can be imported into the associated software. In this example, the automated segmentation of brain tumor slides is demonstrated, but the automated segmentation of any fluorescently-labeled nuclear or cytoplasmic marker is possible. Furthermore, there are a variety of other quantitative software modules, including assays for protein localization/translocation, cellular proliferation/viability/apoptosis, and angiogenesis, that can be run. This technique will save researchers time and effort and create an automated protocol for slide analysis.

  18. A unified framework for automatic wound segmentation and analysis with deep convolutional neural networks.

    PubMed

    Wang, Changhan; Yan, Xinchen; Smith, Max; Kochhar, Kanika; Rubin, Marcie; Warren, Stephen M; Wrobel, James; Lee, Honglak

    2015-01-01

    Wound surface area changes over multiple weeks are highly predictive of the wound healing process. Furthermore, the quality and quantity of the tissue in the wound bed also offer important prognostic information. Unfortunately, accurate measurements of wound surface area changes are out of reach in the busy wound practice setting. Currently, clinicians estimate wound size by estimating wound width and length using a scalpel after wound treatment, which is highly inaccurate. To address this problem, we propose an integrated system to automatically segment wound regions and analyze wound conditions in wound images. Unlike previous segmentation techniques that rely on handcrafted features or unsupervised approaches, our proposed deep learning method jointly learns task-relevant visual features and performs wound segmentation. Moreover, the learned features are applied to further analysis of wounds in two ways: infection detection and healing progress prediction. To the best of our knowledge, this is the first attempt to automate long-term predictions of general wound healing progress. Our method is computationally efficient and takes less than 5 seconds per wound image (480 by 640 pixels) on a typical laptop computer. Our evaluations on a large-scale wound database demonstrate the effectiveness and reliability of the proposed system.
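    The quantity driving the healing-progress prediction, change in wound surface area over time, is straightforward to compute once a segmentation mask is available. A minimal sketch, with a hypothetical pixel-size calibration:

```python
def percent_area_change(area_week0, area_week_n):
    """Percent change in wound surface area between two visits
    (negative values indicate healing)."""
    return 100.0 * (area_week_n - area_week0) / area_week0

def mask_area_cm2(mask, pixel_size_cm=0.05):
    """Area of a binary segmentation mask, given a (hypothetical)
    calibrated pixel size in cm."""
    return sum(sum(row) for row in mask) * pixel_size_cm ** 2

mask = [[0, 1, 1],
        [1, 1, 0]]
print(mask_area_cm2(mask))             # 4 px * 0.0025 cm^2 = 0.01 cm^2
print(percent_area_change(10.0, 7.5))  # -25.0 (area shrank by a quarter)
```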

  19. A two-stage approach for fully automatic segmentation of venous vascular structures in liver CT images

    NASA Astrophysics Data System (ADS)

    Kaftan, Jens N.; Tek, Hüseyin; Aach, Til

    2009-02-01

    The segmentation of the hepatic vascular tree in computed tomography (CT) images is important for many applications such as surgical planning of oncological resections and living liver donations. In surgical planning, vessel segmentation is often used as basis to support the surgeon in the decision about the location of the cut to be performed and the extent of the liver to be removed, respectively. We present a novel approach to hepatic vessel segmentation that can be divided into two stages. First, we detect and delineate the core vessel components efficiently with a high specificity. Second, smaller vessel branches are segmented by a robust vessel tracking technique based on a medialness filter response, which starts from the terminal points of the previously segmented vessels. Specifically, in the first phase major vessels are segmented using the globally optimal graphcuts algorithm in combination with foreground and background seed detection, while the computationally more demanding tracking approach needs to be applied only locally in areas of smaller vessels within the second stage. The method has been evaluated on contrast-enhanced liver CT scans from clinical routine showing promising results. In addition to the fully-automatic instance of this method, the vessel tracking technique can also be used to easily add missing branches/sub-trees to an already existing segmentation result by adding single seed-points.

  20. Segmentation of whole cells and cell nuclei from 3-D optical microscope images using dynamic programming.

    PubMed

    McCullough, D P; Gudla, P R; Harris, B S; Collins, J A; Meaburn, K J; Nakaya, M A; Yamaguchi, T P; Misteli, T; Lockett, S J

    2008-05-01

    Communications between cells in large part drive tissue development and function, as well as disease-related processes such as tumorigenesis. Understanding the mechanistic bases of these processes necessitates quantifying specific molecules in adjacent cells or cell nuclei of intact tissue. However, a major restriction on such analyses is the lack of an efficient method that correctly segments each object (cell or nucleus) from 3-D images of an intact tissue specimen. We report a highly reliable and accurate semi-automatic algorithmic method for segmenting fluorescence-labeled cells or nuclei from 3-D tissue images. Segmentation begins with semi-automatic, 2-D object delineation in a user-selected plane, using dynamic programming (DP) to locate the border with an accumulated intensity per unit length greater than that of any other possible border around the same object. Then the two surfaces of the object in planes above and below the selected plane are found using an algorithm that combines DP and combinatorial searching. Following segmentation, any perceived errors can be interactively corrected. Segmentation accuracy is not significantly affected by intermittent labeling of object surfaces, diffuse surfaces, or spurious signals away from surfaces. The unique strength of the segmentation method was demonstrated on a variety of biological tissue samples where all cells, including irregularly shaped cells, were accurately segmented based on visual inspection.
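    The DP step above, finding the path with the highest accumulated intensity, can be sketched on a simplified problem: a left-to-right path through an image that picks one row per column, moving at most one row per step. This is an illustrative reduction; the paper normalizes by border length and works on closed contours:

```python
def best_border(intensity):
    """Return the max-accumulated-intensity left-to-right path and its score."""
    rows, cols = len(intensity), len(intensity[0])
    best = [row[0] for row in intensity]        # best sum ending at (r, col 0)
    back = [[0] * cols for _ in range(rows)]    # backpointers for path recovery
    for c in range(1, cols):
        new = [0.0] * rows
        for r in range(rows):
            # allowed predecessors: same row or one row up/down
            prev_rows = [p for p in (r - 1, r, r + 1) if 0 <= p < rows]
            p = max(prev_rows, key=lambda p: best[p])
            new[r] = best[p] + intensity[r][c]
            back[r][c] = p
        best = new
    # backtrack from the best final row
    r = max(range(rows), key=lambda r: best[r])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1], max(best)

img = [[1, 0, 0],
       [5, 1, 0],
       [0, 9, 8]]
path, score = best_border(img)
print(path, score)  # [1, 2, 2] 22
```

    The DP guarantees a globally optimal border for the chosen criterion, which is why the method tolerates intermittent labeling: a gap in intensity cannot divert the path if the accumulated total elsewhere is lower.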

  1. Image segmentation by hierarchical agglomeration of polygons using ecological statistics

    DOEpatents

    Prasad, Lakshman; Swaminarayan, Sriram

    2013-04-23

    A method for rapid hierarchical image segmentation based on perceptually driven contour completion and scene statistics is disclosed. The method begins with an initial fine-scale segmentation of an image, such as obtained by perceptual completion of partial contours into polygonal regions using region-contour correspondences established by Delaunay triangulation of edge pixels as implemented in VISTA. The resulting polygons are analyzed with respect to their size and color/intensity distributions and the structural properties of their boundaries. Statistical estimates of granularity of size, similarity of color, texture, and saliency of intervening boundaries are computed and formulated into logical (Boolean) predicates. The combined satisfiability of these Boolean predicates by a pair of adjacent polygons at a given segmentation level qualifies them for merging into a larger polygon representing a coarser, larger-scale feature of the pixel image and collectively obtains the next level of polygonal segments in a hierarchy of fine-to-coarse segmentations. The iterative application of this process precipitates textured regions as polygons with highly convolved boundaries and helps distinguish them from objects which typically have more regular boundaries. The method yields a multiscale decomposition of an image into constituent features that enjoy a hierarchical relationship with features at finer and coarser scales. This provides a traversable graph structure from which feature content and context in terms of other features can be derived, aiding in automated image understanding tasks. The method disclosed is highly efficient and can be used to decompose and analyze large images.
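    The merge rule described above, a conjunction of Boolean predicates over size granularity, color similarity, and boundary saliency, can be illustrated with a toy check. The thresholds here are fixed, hypothetical values; the actual method estimates them from scene statistics:

```python
def should_merge(p, q, size_thresh=50, color_thresh=20.0):
    """Merge two adjacent polygons only if every predicate holds."""
    small_enough = min(p["size"], q["size"]) < size_thresh      # granularity
    similar_color = abs(p["color"] - q["color"]) < color_thresh  # similarity
    weak_boundary = p["boundary_saliency"].get(q["id"], 0.0) < 0.5
    return small_enough and similar_color and weak_boundary

a = {"id": 1, "size": 30, "color": 100.0, "boundary_saliency": {2: 0.2}}
b = {"id": 2, "size": 400, "color": 110.0, "boundary_saliency": {1: 0.2}}
print(should_merge(a, b))  # True: a is small, colors close, boundary weak
```

    Applying such a rule to all adjacent pairs at one level, then re-running on the merged polygons, yields the fine-to-coarse hierarchy the patent describes.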

  2. Efficiency Benefits Using the Terminal Area Precision Scheduling and Spacing System

    NASA Technical Reports Server (NTRS)

    Thipphavong, Jane; Swenson, Harry N.; Lin, Paul; Seo, Anthony Y.; Bagasol, Leonard N.

    2011-01-01

    NASA has developed a capability for terminal area precision scheduling and spacing (TAPSS) to increase the use of fuel-efficient arrival procedures during periods of traffic congestion at a high-density airport. Sustained use of fuel-efficient procedures throughout the entire arrival phase of flight reduces overall fuel burn, greenhouse gas emissions and noise pollution. The TAPSS system is a 4D trajectory-based strategic planning and control tool that computes schedules and sequences for arrivals to facilitate optimal profile descents. This paper focuses on quantifying the efficiency benefits associated with using the TAPSS system, measured by reduction of level segments during aircraft descent and flight distance and time savings. The TAPSS system was tested in a series of human-in-the-loop simulations and compared to current procedures. Compared to the current use of the TMA system, simulation results indicate a reduction of total level segment distance by 50% and flight distance and time savings by 7% in the arrival portion of flight (200 nm from the airport). The TAPSS system resulted in aircraft maintaining continuous descent operations longer and with more precision, both achieved under heavy traffic demand levels.

  3. Parallel and Efficient Sensitivity Analysis of Microscopy Image Segmentation Workflows in Hybrid Systems

    PubMed Central

    Barreiros, Willian; Teodoro, George; Kurc, Tahsin; Kong, Jun; Melo, Alba C. M. A.; Saltz, Joel

    2017-01-01

    We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. An SA can be very computationally demanding because it requires re-processing the input dataset several times with different parameters to assess variations in output. In this work, we introduce strategies to efficiently speed up SA via runtime optimizations targeting distributed hybrid systems and reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual-socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. The cooperative execution using the CPUs and the Phi available in each node with smart task assignment strategies resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× on the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies. PMID:29081725
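    The "reuse of computations from runs with different parameters" can be sketched with plain memoization: pipeline stages whose parameters did not change between runs are served from a cache instead of being recomputed. Stage names and parameters below are illustrative, not the paper's workflow:

```python
from functools import lru_cache

CALLS = {"normalize": 0, "segment": 0}  # count real (non-cached) executions

@lru_cache(maxsize=None)
def normalize(image_id, norm_param):
    CALLS["normalize"] += 1
    return (image_id, norm_param)          # stand-in for a costly stage

@lru_cache(maxsize=None)
def segment(image_id, norm_param, seg_param):
    CALLS["segment"] += 1
    return (normalize(image_id, norm_param), seg_param)

# A parameter sweep that varies only the segmentation parameter:
for seg_param in (0.1, 0.2, 0.3):
    segment("img-42", norm_param=5, seg_param=seg_param)

print(CALLS)  # {'normalize': 1, 'segment': 3}: upstream stage reused twice
```

    In a distributed setting the cache key is the same (stage, parameters, input), but results live in shared storage rather than process memory; the speedup reported above comes from applying this idea at multiple pipeline levels.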

  4. Comparison of liver volumetry on contrast‐enhanced CT images: one semiautomatic and two automatic approaches

    PubMed Central

    Cai, Wei; He, Baochun; Fang, Chihua

    2016-01-01

    This study evaluated the accuracy, consistency, and efficiency of three liver volumetry methods on clinical contrast‐enhanced CT images: one interactive method, an in‐house‐developed 3D medical Image Analysis (3DMIA) system; one automatic active shape model (ASM)‐based segmentation; and one automatic probabilistic atlas (PA)‐guided segmentation method. Forty‐two datasets, including 27 normal liver and 15 space‐occupying liver lesion patients, were retrospectively included in this study. The three methods — the semiautomatic 3DMIA, the automatic ASM‐based, and the automatic PA‐based liver volumetry — achieved an accuracy with VD (volume difference) of −1.69%, −2.75%, and 3.06% in the normal group, respectively, and with VD of −3.20%, −3.35%, and 4.14% in the space‐occupying lesion group, respectively. In terms of efficiency, the three methods took 27.63 min, 1.26 min, and 1.18 min on average, respectively, compared with manual volumetry, which took 43.98 min. The high intraclass correlation coefficients between the three methods and the manual method indicated an excellent agreement on liver volumetry. Significant differences in segmentation time were observed between the three methods (3DMIA, ASM, and PA) and manual volumetry (p<0.001), as well as between the automatic volumetries (ASM and PA) and the semiautomatic volumetry (3DMIA) (p<0.001). The semiautomatic interactive 3DMIA, automatic ASM‐based, and automatic PA‐based liver volumetry agreed well with the manual gold standard in both the normal liver group and the space‐occupying lesion group. The ASM‐ and PA‐based automatic segmentations have better efficiency in clinical use. PACS number(s): 87.55.‐x PMID:27929487

  5. Comparison of liver volumetry on contrast-enhanced CT images: one semiautomatic and two automatic approaches.

    PubMed

    Cai, Wei; He, Baochun; Fan, Yingfang; Fang, Chihua; Jia, Fucang

    2016-11-08

    This study evaluated the accuracy, consistency, and efficiency of three liver volumetry methods on clinical contrast-enhanced CT images: one interactive method, an in-house-developed 3D medical Image Analysis (3DMIA) system; one automatic active shape model (ASM)-based segmentation; and one automatic probabilistic atlas (PA)-guided segmentation method. Forty-two datasets, including 27 normal liver and 15 space-occupying liver lesion patients, were retrospectively included in this study. The three methods - the semiautomatic 3DMIA, the automatic ASM-based, and the automatic PA-based liver volumetry - achieved an accuracy with VD (volume difference) of -1.69%, -2.75%, and 3.06% in the normal group, respectively, and with VD of -3.20%, -3.35%, and 4.14% in the space-occupying lesion group, respectively. In terms of efficiency, the three methods took 27.63 min, 1.26 min, and 1.18 min on average, respectively, compared with manual volumetry, which took 43.98 min. The high intraclass correlation coefficients between the three methods and the manual method indicated an excellent agreement on liver volumetry. Significant differences in segmentation time were observed between the three methods (3DMIA, ASM, and PA) and manual volumetry (p < 0.001), as well as between the automatic volumetries (ASM and PA) and the semiautomatic volumetry (3DMIA) (p < 0.001). The semiautomatic interactive 3DMIA, automatic ASM-based, and automatic PA-based liver volumetry agreed well with the manual gold standard in both the normal liver group and the space-occupying lesion group. The ASM- and PA-based automatic segmentations have better efficiency in clinical use. © 2016 The Authors.
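    The VD accuracy measure reported above can be computed as a signed percentage relative to the manual reference volume (one plausible definition; the paper may normalize differently):

```python
def volume_difference(v_auto, v_manual):
    """Volume difference (VD) as a signed percentage of the
    manual reference volume."""
    return 100.0 * (v_auto - v_manual) / v_manual

# e.g. a hypothetical automatic volume of 1475 mL vs. a manual 1500 mL
print(volume_difference(1475.0, 1500.0))  # about -1.67 %
```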

  6. Efficient threshold for volumetric segmentation

    NASA Astrophysics Data System (ADS)

    Burdescu, Dumitru D.; Brezovan, Marius; Stanescu, Liana; Stoica Spahiu, Cosmin; Ebanca, Daniel

    2015-07-01

    Image segmentation plays a crucial role in the effective understanding of digital images. However, research into a general-purpose segmentation algorithm that suits a variety of applications is still very active. Among the many approaches to image segmentation, the graph-based approach is gaining popularity, primarily due to its ability to reflect global image properties. Volumetric image segmentation can simply result in an image partition composed of relevant regions, but the most fundamental challenge for a segmentation algorithm is to precisely define the volumetric extent of an object, which may be represented by the union of multiple regions. The aim of this paper is to present a new method, with an efficient threshold, to detect visual objects in color volumetric images. We present a unified framework for volumetric image segmentation and contour extraction that uses a virtual tree-hexagonal structure defined on the set of image voxels. The advantage of using a virtual tree-hexagonal network superposed over the initial image voxels is that it reduces the execution time and the memory space used, without losing the initial resolution of the image.

  7. Diagnostic accuracy of ovarian cyst segmentation in B-mode ultrasound images

    NASA Astrophysics Data System (ADS)

    Bibicu, Dorin; Moraru, Luminita; Stratulat (Visan), Mirela

    2013-11-01

    Cystic and polycystic ovary syndrome is an endocrine disorder affecting women of fertile age. The Moore Neighbor Contour, Watershed Method, Active Contour Models, and a recent method based on an Active Contour Model with Selective Binary and Gaussian Filtering Regularized Level Set (ACM&SBGFRLS) were used in this paper to detect the border of the ovarian cyst in echography images. In order to analyze the efficiency of the segmentation, an original computer-aided software application developed in MATLAB was proposed. The results of the segmentation were compared and evaluated against the reference contour manually delineated by a sonography specialist. Both the accuracy and time complexity of the segmentation tasks are investigated. The Fréchet distance (FD), as a similarity measure between two curves, and the area error rate (AER) parameter, as the difference between the segmented areas, are used as estimators of the segmentation accuracy. In this study, the most efficient methods for the segmentation of the ovarian cyst were analyzed. The research was carried out on a set of 34 ultrasound images of the ovarian cyst.
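    Both accuracy estimators named above have compact implementations. The discrete Fréchet distance below is the standard polyline variant (a stand-in for whatever curve discretization the paper uses), and the AER is shown as one plausible relative-area definition:

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polylines P and Q."""
    def d(i, j):
        return math.dist(P[i], Q[j])

    @lru_cache(maxsize=None)
    def c(i, j):
        # coupling distance up to points P[i], Q[j]
        if i == 0 and j == 0:
            return d(0, 0)
        if i == 0:
            return max(c(0, j - 1), d(0, j))
        if j == 0:
            return max(c(i - 1, 0), d(i, 0))
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(i, j))

    return c(len(P) - 1, len(Q) - 1)

def area_error_rate(area_seg, area_ref):
    """AER as the relative difference of segmented and reference areas
    (one plausible definition; the paper's exact formula may differ)."""
    return 100.0 * abs(area_seg - area_ref) / area_ref

P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 1), (2, 1)]
print(discrete_frechet(P, Q))        # 1.0: curves offset by one unit
print(area_error_rate(95.0, 100.0))  # 5.0
```

    Unlike the Hausdorff distance, the Fréchet distance respects the ordering of points along the contours, which is why it is preferred for comparing segmented borders against a specialist's delineation.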

  8. On the evaluation of segmentation editing tools

    PubMed Central

    Heckel, Frank; Moltz, Jan H.; Meine, Hans; Geisler, Benjamin; Kießling, Andreas; D’Anastasi, Melvin; dos Santos, Daniel Pinto; Theruvath, Ashok Joseph; Hahn, Horst K.

    2014-01-01

    Abstract. Efficient segmentation editing tools are important components in the segmentation process, as no automatic methods exist that always generate sufficient results. Evaluating segmentation editing algorithms is challenging, because their quality depends on the user’s subjective impression. So far, no established methods for an objective, comprehensive evaluation of such tools exist and, particularly, intermediate segmentation results are not taken into account. We discuss the evaluation of editing algorithms in the context of tumor segmentation in computed tomography. We propose a rating scheme to qualitatively measure the accuracy and efficiency of editing tools in user studies. In order to objectively summarize the overall quality, we propose two scores based on the subjective rating and the quantified segmentation quality over time. Finally, a simulation-based evaluation approach is discussed, which allows a more reproducible evaluation without the need for human input. This automated evaluation complements user studies, allowing a more convincing evaluation, particularly during development, where frequent user studies are not possible. The proposed methods have been used to evaluate two dedicated editing algorithms on 131 representative tumor segmentations. We show how the comparison of editing algorithms benefits from the proposed methods. Our results also show the correlation of the suggested quality score with the qualitative ratings. PMID:26158063

  9. Fast Appearance Modeling for Automatic Primary Video Object Segmentation.

    PubMed

    Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong

    2016-02-01

    Automatic segmentation of the primary object in a video clip is a challenging problem, as there is no prior knowledge of the primary object. Most existing techniques thus adopt an iterative approach to foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation, and fix the segmentation while optimizing the appearance model. However, these approaches may rely on good initialization and can easily be trapped in local optima. In addition, they are usually time-consuming for analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure, and can optimize both the segmentation and the appearance model parameters simultaneously in one graph cut. Extensive experimental evaluations validate the superiority of the proposed approach over state-of-the-art methods in both efficiency and effectiveness.

  10. Robust and efficient fiducial tracking for augmented reality in HD-laparoscopic video streams

    NASA Astrophysics Data System (ADS)

    Mueller, M.; Groch, A.; Baumhauer, M.; Maier-Hein, L.; Teber, D.; Rassweiler, J.; Meinzer, H.-P.; Wegner, I.

    2012-02-01

    Augmented Reality (AR) is a convenient way of porting information from medical images into the surgical field of view and can deliver valuable assistance to the surgeon, especially in laparoscopic procedures. In addition, high definition (HD) laparoscopic video devices are a great improvement over the previously used low resolution equipment. However, in AR applications that rely on real-time detection of fiducials from video streams, the demand for efficient image processing has increased due to the introduction of HD devices. We present an algorithm based on the well-known Conditional Density Propagation (CONDENSATION) algorithm which can satisfy these new demands. By incorporating a prediction around an already existing and robust segmentation algorithm, we can speed up the whole procedure while leaving the robustness of the fiducial segmentation untouched. For evaluation purposes we tested the algorithm on recordings from real interventions, allowing for a meaningful interpretation of the results. Our results show that we can accelerate the segmentation by a factor of 3.5 on average. Moreover, the prediction information can be used to compensate for fiducials that are temporarily occluded or out of scope, providing greater stability.
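CONDENSATION is at heart a particle filter: resample particles by weight, propagate them through a motion model, then re-weight them by the measurement likelihood. The 1-D sketch below illustrates one such predict-weight-resample cycle for a fiducial position; the Gaussian random-walk motion model and Gaussian measurement likelihood are illustrative assumptions, not the paper's actual models.

```python
import math
import random

def condensation_step(particles, weights, measurement,
                      motion_std=2.0, meas_std=3.0, rng=random):
    """One predict-weight-resample cycle of a CONDENSATION-style
    particle filter tracking a 1-D fiducial position (sketch)."""
    # 1. Resample according to the current weights (factored sampling).
    particles = rng.choices(particles, weights=weights, k=len(particles))
    # 2. Predict: propagate each particle through a random-walk motion model.
    particles = [p + rng.gauss(0.0, motion_std) for p in particles]
    # 3. Weight: Gaussian likelihood of the measurement given each particle.
    weights = [math.exp(-0.5 * ((p - measurement) / meas_std) ** 2)
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    return particles, weights

def estimate(particles, weights):
    """Posterior mean of the tracked position."""
    return sum(p * w for p, w in zip(particles, weights))
```

In the AR setting described above, the predicted particle cloud restricts where the (comparatively expensive) fiducial segmentation needs to run, which is the source of the reported speed-up.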

  11. Extrusion die and method

    DOEpatents

    Lipp, G. Daniel

    1994-05-03

    A method and die apparatus for manufacturing a honeycomb body of triangular cell cross-section and high cell density, the die having a combination of (i) feedholes feeding slot intersections and (ii) feedholes feeding slot segments not supplied from slot intersections, whereby a reduction in feedhole count is achieved while still retaining good extrusion efficiency and extrudate uniformity.

  12. Object Segmentation and Ground Truth in 3D Embryonic Imaging.

    PubMed

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.

  13. Object Segmentation and Ground Truth in 3D Embryonic Imaging

    PubMed Central

    Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.

    2016-01-01

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860

  14. Nearest neighbor 3D segmentation with context features

    NASA Astrophysics Data System (ADS)

    Hristova, Evelin; Schulz, Heinrich; Brosch, Tom; Heinrich, Mattias P.; Nickisch, Hannes

    2018-03-01

    Automated and fast multi-label segmentation of medical images is challenging and clinically important. This paper builds upon a supervised machine learning framework that uses training data sets with dense organ annotations and vantage point trees to classify voxels in unseen images based on similarity of binary feature vectors extracted from the data. Without explicit model knowledge, the algorithm is applicable to different modalities and organs, and achieves high accuracy. The method is successfully tested on 70 abdominal CT and 42 pelvic MR images. With respect to ground truth, an average Dice overlap score of 0.76 is achieved for the CT segmentation of liver, spleen and kidneys. The mean score for the MR delineation of bladder, bones, prostate and rectum is 0.65. Additionally, we benchmark several variations of the main components of the method and reduce the computation time by up to 47% without significant loss of accuracy. The segmentation results are, for a nearest neighbor method, surprisingly accurate and robust, as well as data- and time-efficient.
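Stripped of the vantage-point-tree acceleration, the core classification step reduces to nearest-neighbor search over binary feature vectors under Hamming distance. A minimal linear-scan sketch follows; the packed-integer feature representation and the organ labels are illustrative assumptions, not the paper's data structures.

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary feature
    vectors packed as Python ints."""
    return bin(a ^ b).count("1")

def classify_voxel(feature, training):
    """Label a voxel by its nearest binary feature vector in the
    training set, given as a list of (feature_int, organ_label) pairs."""
    return min(training, key=lambda t: hamming(feature, t[0]))[1]
```

A vantage point tree replaces this linear scan with a metric-tree search, pruning training vectors whose distance to a chosen vantage point rules them out, which is what makes the method fast at scale.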

  15. Rearrangement of Influenza Virus Spliced Segments for the Development of Live-Attenuated Vaccines

    PubMed Central

    Nogales, Aitor; DeDiego, Marta L.; Topham, David J.

    2016-01-01

    ABSTRACT Influenza viral infections represent a serious public health problem, with influenza virus causing a contagious respiratory disease which is most effectively prevented through vaccination. Segments 7 (M) and 8 (NS) of the influenza virus genome encode mRNA transcripts that are alternatively spliced to express two different viral proteins. This study describes the generation, using reverse genetics, of three different recombinant influenza A/Puerto Rico/8/1934 (PR8) H1N1 viruses containing M or NS viral segments individually or modified M or NS viral segments combined in which the overlapping open reading frames of matrix 1 (M1)/M2 for the modified M segment and the open reading frames of nonstructural protein 1 (NS1)/nuclear export protein (NEP) for the modified NS segment were split by using the porcine teschovirus 1 (PTV-1) 2A autoproteolytic cleavage site. Viruses with an M split segment were impaired in replication at nonpermissive high temperatures, whereas high viral titers could be obtained at permissive low temperatures (33°C). Furthermore, viruses containing the M split segment were highly attenuated in vivo, while they retained their immunogenicity and provided protection against a lethal challenge with wild-type PR8. These results indicate that influenza viruses can be effectively attenuated by the rearrangement of spliced segments and that such attenuated viruses represent an excellent option as safe, immunogenic, and protective live-attenuated vaccines. Moreover, this is the first time in which an influenza virus containing a restructured M segment has been described. Reorganization of the M segment to encode M1 and M2 from two separate, nonoverlapping, independent open reading frames represents a useful tool to independently study mutations in the M1 and M2 viral proteins without affecting the other viral M product. IMPORTANCE Vaccination represents our best therapeutic option against influenza viral infections. 
However, the efficacy of current influenza vaccines is suboptimal, and novel approaches are necessary for the prevention of disease caused by this important human respiratory pathogen. In this work, we describe a novel approach to generate safer and more efficient live-attenuated influenza virus vaccines (LAIVs) based on recombinant viruses whose genomes encode nonoverlapping and independent M1/M2 (split M segment [Ms]) or both M1/M2 and NS1/NEP (Ms and split NS segment [NSs]) open reading frames. Viruses containing a modified M segment were highly attenuated in mice but were able to confer, upon a single intranasal immunization, complete protection against a lethal homologous challenge with wild-type virus. Notably, the protection efficacy conferred by our viruses with split M segments was better than that conferred by the current temperature-sensitive LAIV. Altogether, these results open a new avenue for the development of safer and more protective LAIVs on the basis of the reorganization of spliced viral RNA segments in the genome. PMID:27122587

  16. An integrated high resolution mass spectrometric data acquisition method for rapid screening of saponins in Panax notoginseng (Sanqi).

    PubMed

    Lai, Chang-Jiang-Sheng; Tan, Ting; Zeng, Su-Ling; Qi, Lian-Wen; Liu, Xin-Guang; Dong, Xin; Li, Ping; Liu, E-Hu

    2015-05-10

    The aim of this study was to develop a convenient method, requiring no pretreatment, for the nontargeted discovery of compounds of interest. A segment-and-exposure strategy, coupled with two mass spectrometric data acquisition methods, was first proposed for screening the saponins in an extract of Panax notoginseng (Sanqi) via high-performance liquid chromatography tandem quadrupole time-of-flight mass spectrometry (HPLC-QTOF/MS). By gradually removing certain major or moderately abundant interfering compounds, the developed segment-and-exposure strategy could significantly improve the detection efficiency for trace compounds. Moreover, the newly developed five-point screening approach, based on a modified mass defect filter strategy and a visual isotopic ion technique, was verified to be efficient and reliable in picking out the precursor ions of interest. In total, 234 ginsenosides, including 67 potential new ones, were characterized or tentatively identified from the extract of Sanqi. In particular, some unusual compounds containing branched glycosyl groups or new substituted acyl groups were reported for the first time. The proposed integrated strategy holds strong promise for the analysis of complex mixtures. Copyright © 2015 Elsevier B.V. All rights reserved.
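The idea behind a mass defect filter can be illustrated in a few lines: structurally related ions such as ginsenosides share a characteristic fractional mass, so candidate ions whose mass defect falls outside a window around that reference value are discarded. The sketch below is a simplified illustration; the reference defect and window width are hypothetical values, not those used in the paper.

```python
def mass_defect(m):
    """Fractional part of a monoisotopic mass, a simple working
    definition of the mass defect."""
    return m - int(m)

def mdf_filter(masses, center_defect, window):
    """Keep masses whose defect lies within +/- window of a reference
    defect: a simplified mass defect filter (MDF)."""
    return [m for m in masses
            if abs(mass_defect(m) - center_defect) <= window]
```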

  17. Pathways to increase consumer trust in meat as a safe and wholesome food.

    PubMed

    Gellynck, Xavier; Verbeke, Wim; Vermeire, Bert

    2006-09-01

    This paper focuses on the effect of information about meat safety and wholesomeness on consumer trust, based on several studies with data collected in Belgium. The research is grounded in the observation that, despite the abundant rise of information through labelling, traceability systems and quality assurance schemes, the effect on consumer trust in meat as a safe and wholesome product is only limited. The overload and complexity of information on food products result in misunderstanding and misinterpretation. Functional traceability attributes such as organisational efficiency and chain monitoring are considered highly important, but not as a basis for market segmentation. However, process traceability attributes such as origin and production method are of interest to particular market segments as a response to meat quality concerns. Quality assurance schemes and associated labels have a poor impact on consumers' perception. It is argued that the high interest of retailers in such schemes is driven by procurement management efficiency rather than safety or overall quality. Future research could concentrate on the distribution of costs and benefits associated with meat quality initiatives among the chain participants.

  18. Appliance Efficiency Standards and Price Discrimination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spurlock, Cecily Anna

    2013-05-08

    I explore the effects of two simultaneous changes in minimum energy efficiency and ENERGY STAR standards for clothes washers. Adapting the Mussa and Rosen (1978) and Ronnen (1991) second-degree price discrimination model, I demonstrate that clothes washer prices and menus adjusted to the new standards in patterns consistent with a market in which firms had been price discriminating. In particular, I show evidence of discontinuous price drops at the time the standards were imposed, driven largely by the mid-low efficiency segments of the market. The price discrimination model predicts this result; in a perfectly competitive market, by contrast, prices should increase for these market segments. Additionally, new models proliferated in the highest efficiency market segment following the standard changes. Finally, I show that firms appeared to use different adaptation strategies at the two instances of the standards changing.

  19. A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.

    PubMed

    Khelifi, Lazhar; Mignotte, Max

    2017-08-01

    Image segmentation fusion is defined as the set of methods that aim at merging several image segmentations in a manner that takes full advantage of the complementarity of each one. Previous research in this field has been impeded by the difficulty of identifying a single segmentation fusion criterion that provides the best possible, i.e., the most informative, fusion result. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem and obtain a final improved segmentation result. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely, the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision-making task based on the so-called "technique for order preference by similarity to ideal solution" (TOPSIS). Results obtained on two publicly available databases with manual ground-truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.
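The final decision-making step, TOPSIS, ranks candidate solutions by their relative closeness to an ideal point. Below is a self-contained sketch of the standard algorithm; the criterion values and weights used in the test are illustrative, not the paper's consensus energies.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS (Technique for Order Preference by
    Similarity to Ideal Solution). matrix: rows = alternatives,
    columns = criteria; benefit[j] is True when larger is better."""
    n = len(matrix[0])
    # Vector-normalize each column, then apply criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    # Ideal (best) and anti-ideal (worst) points per criterion.
    best = [max(r[j] for r in v) if benefit[j] else min(r[j] for r in v)
            for j in range(n)]
    worst = [min(r[j] for r in v) if benefit[j] else max(r[j] for r in v)
             for j in range(n)]

    def dist(row, point):
        return math.sqrt(sum((row[j] - point[j]) ** 2 for j in range(n)))

    # Relative closeness to the ideal solution in [0, 1]; larger is better.
    return [dist(r, worst) / (dist(r, best) + dist(r, worst)) for r in v]
```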

  20. Fast discovery and visualization of conserved regions in DNA sequences using quasi-alignment

    PubMed Central

    2013-01-01

    Background Next Generation Sequencing techniques are producing enormous amounts of biological sequence data, and analysis becomes a major computational problem. Currently, most analysis, especially the identification of conserved regions, relies heavily on Multiple Sequence Alignment and its various heuristics such as progressive alignment, whose run time grows with the square of the number and the length of the aligned sequences and requires significant computational resources. In this work, we present a method to efficiently discover regions of high similarity across multiple sequences without performing expensive sequence alignment. The method is based on approximating edit distance between segments of sequences using p-mer frequency counts. Then, efficient high-throughput data stream clustering is used to group highly similar segments into so-called quasi-alignments. Quasi-alignments have numerous applications such as identifying species and their taxonomic class from sequences, comparing sequences for similarities, and, as in this paper, discovering conserved regions across related sequences. Results In this paper, we show that quasi-alignments can be used to discover highly similar segments across multiple sequences from related or different genomes efficiently and accurately. Experiments on a large number of unaligned 16S rRNA sequences obtained from the Greengenes database show that the method is able to identify conserved regions which agree with known hypervariable regions in 16S rRNA. Furthermore, the experiments show that the proposed method scales well for large data sets with a run time that grows only linearly with the number and length of sequences, whereas for existing multiple sequence alignment heuristics the run time grows super-linearly. Conclusion Quasi-alignment-based algorithms can detect highly similar regions and conserved areas across multiple sequences. 
Since the run time is linear and the sequences are converted into a compact clustering model, we are able to identify conserved regions fast or even interactively using a standard PC. Our method has many potential applications such as finding characteristic signature sequences for families of organisms and studying conserved and variable regions in, for example, 16S rRNA. PMID:24564200
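The key trick, approximating edit distance by comparing p-mer (k-mer) frequency counts, can be sketched in a few lines of Python. The choice of k = 3 and the Manhattan metric below are illustrative assumptions rather than the paper's exact settings.

```python
from collections import Counter

def kmer_profile(seq, k=3):
    """Frequency counts of all overlapping k-mers in a sequence segment."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def profile_distance(a, b, k=3):
    """Manhattan distance between k-mer profiles: a cheap proxy for
    the edit distance between the underlying segments."""
    pa, pb = kmer_profile(a, k), kmer_profile(b, k)
    return sum(abs(pa[m] - pb[m]) for m in set(pa) | set(pb))
```

Because profiles can be compared without aligning the segments, a data stream clusterer can group segments into quasi-alignments in a single linear pass, which is where the reported linear run-time growth comes from.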

  1. Fast discovery and visualization of conserved regions in DNA sequences using quasi-alignment.

    PubMed

    Nagar, Anurag; Hahsler, Michael

    2013-01-01

    Next Generation Sequencing techniques are producing enormous amounts of biological sequence data, and analysis becomes a major computational problem. Currently, most analysis, especially the identification of conserved regions, relies heavily on Multiple Sequence Alignment and its various heuristics such as progressive alignment, whose run time grows with the square of the number and the length of the aligned sequences and requires significant computational resources. In this work, we present a method to efficiently discover regions of high similarity across multiple sequences without performing expensive sequence alignment. The method is based on approximating edit distance between segments of sequences using p-mer frequency counts. Then, efficient high-throughput data stream clustering is used to group highly similar segments into so-called quasi-alignments. Quasi-alignments have numerous applications such as identifying species and their taxonomic class from sequences, comparing sequences for similarities, and, as in this paper, discovering conserved regions across related sequences. In this paper, we show that quasi-alignments can be used to discover highly similar segments across multiple sequences from related or different genomes efficiently and accurately. Experiments on a large number of unaligned 16S rRNA sequences obtained from the Greengenes database show that the method is able to identify conserved regions which agree with known hypervariable regions in 16S rRNA. Furthermore, the experiments show that the proposed method scales well for large data sets with a run time that grows only linearly with the number and length of sequences, whereas for existing multiple sequence alignment heuristics the run time grows super-linearly. Quasi-alignment-based algorithms can detect highly similar regions and conserved areas across multiple sequences. 
Since the run time is linear and the sequences are converted into a compact clustering model, we are able to identify conserved regions fast or even interactively using a standard PC. Our method has many potential applications such as finding characteristic signature sequences for families of organisms and studying conserved and variable regions in, for example, 16S rRNA.

  2. Comparative study on the performance of textural image features for active contour segmentation.

    PubMed

    Moraru, Luminita; Moldovanu, Simona

    2012-07-01

    We present a computerized method for the semi-automatic detection of contours in ultrasound images. The novelty of our study is the introduction of a fast and efficient image function for parametric active contour models. This new function is a combination of gray-level information and first-order statistical features, called standard deviation parameters. In a comprehensive study, the developed algorithm and the efficiency of segmentation were first tested on synthetic images. Tests were also performed on breast and liver ultrasound images. The proposed method was compared with the watershed approach to show its efficiency. The performance of the segmentation was estimated using the area error rate. Using the standard deviation textural feature and a 5×5 kernel, our curve evolution was able to produce results close to the minimal area error rate (namely, 8.88% for breast images and 10.82% for liver images). The image resolution was evaluated using the contrast-to-gradient method. The experiments showed promising segmentation results.
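The standard deviation feature that drives the curve evolution is simply a per-pixel statistic over a small window. A pure-Python sketch of the 5×5 case follows; skipping the image border rather than padding it is a simplifying assumption of this sketch.

```python
def local_std(image, k=5):
    """Per-pixel standard deviation over a k x k neighborhood, the
    first-order textural feature used to steer the active contour.
    image is a list of lists of gray levels; border pixels whose
    window would fall outside the image are left at 0.0."""
    h, w, r = len(image), len(image[0]), k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            win = [image[y + dy][x + dx]
                   for dy in range(-r, r + 1)
                   for dx in range(-r, r + 1)]
            mean = sum(win) / len(win)
            out[y][x] = (sum((v - mean) ** 2 for v in win) / len(win)) ** 0.5
    return out
```

The feature is near zero inside homogeneous tissue and large across speckle edges, which is what lets the contour lock onto organ boundaries in noisy ultrasound data.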

  3. Building Roof Segmentation from Aerial Images Using a Line-and Region-Based Watershed Segmentation Technique

    PubMed Central

    Merabet, Youssef El; Meurie, Cyril; Ruichek, Yassine; Sbihi, Abderrahmane; Touahni, Raja

    2015-01-01

    In this paper, we present a novel strategy for roof segmentation from aerial images (orthophotoplans) based on the cooperation of edge- and region-based segmentation methods. The proposed strategy is composed of three major steps. The first one, called the pre-processing step, consists of simplifying the acquired image with an appropriate couple of invariant and gradient, optimized for the application, in order to limit illumination changes (shadows, brightness, etc.) affecting the images. The second step is composed of two main parallel treatments: on the one hand, the simplified image is segmented by watershed regions. Even if the first segmentation of this step provides good results in general, the image is often over-segmented. To alleviate this problem, an efficient region merging strategy adapted to the orthophotoplan particularities, with a 2D modeling of roof ridges technique, is applied. On the other hand, the simplified image is segmented by watershed lines. The third step consists of integrating both watershed segmentation strategies into a single cooperative segmentation scheme in order to achieve satisfactory segmentation results. Tests have been performed on orthophotoplans containing 100 roofs with varying complexity, and the results are evaluated with the VINET criterion using ground-truth image segmentation. A comparison with five popular segmentation techniques of the literature demonstrates the effectiveness and the reliability of the proposed approach. Indeed, we obtain a good segmentation rate of 96% with the proposed method compared to 87.5% with statistical region merging (SRM), 84% with mean shift, 82% with color structure code (CSC), 80% with efficient graph-based segmentation algorithm (EGBIS) and 71% with JSEG. PMID:25648706

  4. Induction Consolidation of Thermoplastic Composites Using Smart Susceptors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsen, Marc R

    2012-06-14

    This project has focused on the energy efficient consolidation and molding of fiber reinforced thermoplastic composite components as an energy efficient alternative to conventional processing methods such as autoclave processing. The expanding application of composite materials in wind energy, automotive, and aerospace provides an attractive energy efficiency target for process development. The intent is to have this efficient processing, along with the recyclable thermoplastic materials, ready for large scale application before these high production volume levels are reached. Therefore, the process can be implemented in a timely manner to realize the maximum economic, energy, and environmental efficiencies. Under this project an increased understanding of the use of induction heating with smart susceptors applied to consolidation of thermoplastics has been achieved. This was done by the establishment of processing equipment and tooling and the subsequent demonstration of this fabrication technology by consolidating/molding entry level components for each of the participating industrial segments: wind energy, aerospace, and automotive. This understanding adds to the nation's capability to affordably manufacture high quality lightweight high performance components from advanced recyclable composite materials in a lean and energy efficient manner. The use of induction heating with smart susceptors is a precisely controlled low energy method for the consolidation and molding of thermoplastic composites. The smart susceptor provides intrinsic thermal control based on the interaction with the magnetic field from the induction coil, thereby producing highly repeatable processing. The low energy usage is enabled by the fact that only the smart susceptor surface of the tool is heated, not the entire tool. Therefore, much less mass is heated, resulting in significantly less energy required to consolidate/mold the desired composite components. 
This energy efficiency results in potential energy savings of approximately 75% compared to autoclave processing in aerospace, approximately 63% compared to compression molding in automotive, and approximately 42% compared to convectively heated tools in wind energy. The ability to make parts in a rapid and controlled manner provides significant economic advantages for each of the industrial segments. These attributes were demonstrated during the processing of the demonstration components on this project.

  5. Automated bone segmentation from large field of view 3D MR images of the hip joint

    NASA Astrophysics Data System (ADS)

    Xia, Ying; Fripp, Jurgen; Chandra, Shekhar S.; Schwarz, Raphael; Engstrom, Craig; Crozier, Stuart

    2013-10-01

    Accurate bone segmentation in the hip joint region from magnetic resonance (MR) images can provide quantitative data for examining pathoanatomical conditions such as femoroacetabular impingement through to varying stages of osteoarthritis to monitor bone and associated cartilage morphometry. We evaluate two state-of-the-art methods (multi-atlas and active shape model (ASM) approaches) on bilateral MR images for automatic 3D bone segmentation in the hip region (proximal femur and innominate bone). Bilateral MR images of the hip joints were acquired at 3T from 30 volunteers. Image sequences included water-excitation dual-echo steady state (FOV 38.6 × 24.1 cm, matrix 576 × 360, thickness 0.61 mm) in all subjects and multi-echo data image combination (FOV 37.6 × 23.5 cm, matrix 576 × 360, thickness 0.70 mm) for a subset of eight subjects. Following manual segmentation of femoral (head-neck, proximal-shaft) and innominate (ilium+ischium+pubis) bone, automated bone segmentation proceeded via two approaches: (1) multi-atlas segmentation incorporating non-rigid registration and (2) an advanced ASM-based scheme. Mean inter- and intra-rater reliability Dice's similarity coefficients (DSC) for manual segmentation of femoral and innominate bone were (0.970, 0.963) and (0.971, 0.965). Compared with manual data, mean DSC values for femoral and innominate bone volumes using automated multi-atlas and ASM-based methods were (0.950, 0.922) and (0.946, 0.917), respectively. Both approaches delivered accurate (high DSC values) segmentation results; notably, ASM data were generated in substantially less computational time (12 min versus 10 h). Both automated algorithms provided accurate 3D bone volumetric descriptions for MR-based measures in the hip region. The highly computationally efficient ASM-based approach is more likely to be suitable for future clinical applications such as extracting bone-cartilage interfaces for potential cartilage segmentation.
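The Dice similarity coefficient used to score both automated approaches against manual data has a one-line definition: 2|A∩B| / (|A| + |B|). A sketch over segmentations represented as sets of voxel coordinates:

```python
def dice(a, b):
    """Dice similarity coefficient between two segmentations given as
    sets of voxel coordinates: 2|A & B| / (|A| + |B|)."""
    if not a and not b:
        # Two empty segmentations agree perfectly by convention.
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))
```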

  6. Automated bone segmentation from large field of view 3D MR images of the hip joint.

    PubMed

    Xia, Ying; Fripp, Jurgen; Chandra, Shekhar S; Schwarz, Raphael; Engstrom, Craig; Crozier, Stuart

    2013-10-21

    Accurate bone segmentation in the hip joint region from magnetic resonance (MR) images can provide quantitative data for examining pathoanatomical conditions such as femoroacetabular impingement through to varying stages of osteoarthritis to monitor bone and associated cartilage morphometry. We evaluate two state-of-the-art methods (multi-atlas and active shape model (ASM) approaches) on bilateral MR images for automatic 3D bone segmentation in the hip region (proximal femur and innominate bone). Bilateral MR images of the hip joints were acquired at 3T from 30 volunteers. Image sequences included water-excitation dual-echo steady state (FOV 38.6 × 24.1 cm, matrix 576 × 360, thickness 0.61 mm) in all subjects and multi-echo data image combination (FOV 37.6 × 23.5 cm, matrix 576 × 360, thickness 0.70 mm) for a subset of eight subjects. Following manual segmentation of femoral (head-neck, proximal-shaft) and innominate (ilium+ischium+pubis) bone, automated bone segmentation proceeded via two approaches: (1) multi-atlas segmentation incorporating non-rigid registration and (2) an advanced ASM-based scheme. Mean inter- and intra-rater reliability Dice's similarity coefficients (DSC) for manual segmentation of femoral and innominate bone were (0.970, 0.963) and (0.971, 0.965). Compared with manual data, mean DSC values for femoral and innominate bone volumes using automated multi-atlas and ASM-based methods were (0.950, 0.922) and (0.946, 0.917), respectively. Both approaches delivered accurate (high DSC values) segmentation results; notably, ASM data were generated in substantially less computational time (12 min versus 10 h). Both automated algorithms provided accurate 3D bone volumetric descriptions for MR-based measures in the hip region. The highly computationally efficient ASM-based approach is more likely to be suitable for future clinical applications such as extracting bone-cartilage interfaces for potential cartilage segmentation.

  7. A person-centred segmentation study in elderly care: towards efficient demand-driven care.

    PubMed

    Eissens van der Laan, M R; van Offenbeek, M A G; Broekhuis, H; Slaets, J P J

    2014-07-01

    Providing patients with more person-centred care without increasing costs is a key challenge in healthcare. A relevant but often ignored hindrance to delivering person-centred care is that the current segmentation of the population and the associated organization of healthcare supply are based on diseases. A person-centred segmentation, i.e., one based on persons' own experienced difficulties in fulfilling needs, is an elementary but often overlooked first step in developing efficient demand-driven care. This paper describes a person-centred segmentation study of elderly people, a large and increasing target group confronted with heterogeneous and often interrelated difficulties in their functioning. In twenty-five diverse healthcare and welfare organizations as well as elderly associations in the Netherlands, data were collected on the difficulties in biopsychosocial functioning experienced by 2019 older adults. Data were collected between March 2010 and January 2011, and sampling took place based on the participants' (temporary) living conditions. A factor mixture model was used to categorize the respondents into segments with relatively similar experienced difficulties concerning their functioning. First, the analyses show that older adults can be empirically categorized into five meaningful segments: feeling vital; difficulties with psychosocial coping; physical and mobility complaints; difficulties experienced in multiple domains; and feeling extremely frail. The categorization seems robust, as it was replicated in two population-based samples in the Netherlands. The segmentation's usefulness is discussed and illustrated through an evaluation of the alignment between a segment's unfulfilled biopsychosocial needs and current healthcare utilization. The set of person-centred segmentation variables provides healthcare providers with the option to perform a more comprehensive first triage step than only a disease-based one.
The outcomes of this first step could guide a focused and, therefore, more efficient second triage step. On a local or regional level, this person-centred segmentation provides input information to policymakers and care providers for the demand-driven allocation of resources. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Effects of Strike-Slip Fault Segmentation on Earthquake Energy and Seismic Hazard

    NASA Astrophysics Data System (ADS)

    Madden, E. H.; Cooke, M. L.; Savage, H. M.; McBeck, J.

    2014-12-01

    Many major strike-slip faults are segmented along strike, including those along plate boundaries in California and Turkey. Failure of distinct fault segments at depth may be the source of multiple pulses of seismic radiation observed for single earthquakes. However, how and when segmentation affects fault behavior and energy release is the basis of many outstanding questions related to the physics of faulting and seismic hazard. These include the probability for a single earthquake to rupture multiple fault segments and the effects of segmentation on earthquake magnitude, radiated seismic energy, and ground motions. Using numerical models, we quantify components of the earthquake energy budget, including the tectonic work acting externally on the system, the energy of internal rock strain, the energy required to overcome fault strength and initiate slip, the energy required to overcome frictional resistance during slip, and the radiated seismic energy. We compare the energy budgets of systems of two en echelon fault segments with various spacing that include both releasing and restraining steps. First, we allow the fault segments to fail simultaneously and capture the effects of segmentation geometry on the earthquake energy budget and on the efficiency with which applied displacement is accommodated. Assuming that higher efficiency correlates with higher probability for a single, larger earthquake, this approach has utility for assessing the seismic hazard of segmented faults. Second, we nucleate slip along a weak portion of one fault segment and let the quasi-static rupture propagate across the system. Allowing fractures to form near faults in these models shows that damage develops within releasing steps and promotes slip along the second fault, while damage develops outside of restraining steps and can prohibit slip along the second fault. 
Work is consumed in both the propagation of and frictional slip along these new fractures, impacting the energy available for further slip and for subsequent earthquakes. This suite of models reveals that efficiency may be a useful tool for determining the relative seismic hazard of different segmented fault systems, while accounting for coseismic damage zone production is critical in assessing fault interactions and the associated energy budgets of specific systems.

  9. Efficient fuzzy C-means architecture for image segmentation.

    PubMed

    Li, Hui-Ya; Hwang, Wen-Jyi; Chang, Chia-Yen

    2011-01-01

    This paper presents a novel VLSI architecture for image segmentation. The architecture is based on the fuzzy c-means algorithm with a spatial constraint for reducing the misclassification rate. In the architecture, the usual iterative operations for updating the membership matrix and cluster centroids are merged into a single updating process to avoid the large storage requirement. In addition, an efficient pipelined circuit accelerates this updating process. Experimental results show that the proposed circuit is an effective alternative for real-time image segmentation with low area cost and a low misclassification rate.
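
    The circuit merges the two iterative fuzzy c-means updates into one pipelined pass; the textbook software form it accelerates alternates them explicitly. A minimal 1-D sketch (toy data, deterministic initialization, no spatial constraint):

```python
def fcm(xs, k=2, m=2.0, iters=20):
    """Textbook fuzzy c-means on 1-D data: alternate the membership
    update and the fuzzy-weighted centroid update."""
    lo, hi = min(xs), max(xs)
    centroids = [lo + i * (hi - lo) / (k - 1) for i in range(k)]
    for _ in range(iters):
        # Membership update: u[n][i] in (0, 1); each row sums to 1.
        u = []
        for x in xs:
            d = [abs(x - c) + 1e-12 for c in centroids]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(k))
                      for i in range(k)])
        # Centroid update: fuzzy-weighted mean of the data per cluster.
        centroids = [sum(u[n][i] ** m * xs[n] for n in range(len(xs))) /
                     sum(u[n][i] ** m for n in range(len(xs)))
                     for i in range(k)]
    return sorted(centroids)

print(fcm([0.0, 0.1, 0.2, 5.0, 5.1, 5.2]))  # centroids near 0.1 and 5.1
```

    The hardware contribution of the paper is precisely that these two loops need not be stored and executed separately.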

  10. Segmentation of nuclear images in automated cervical cancer screening

    NASA Astrophysics Data System (ADS)

    Dadeshidze, Vladimir; Olsson, Lars J.; Domanik, Richard A.

    1995-08-01

    This paper describes an efficient method of segmenting cell nuclei from complex scenes based upon the use of adaptive region growing in conjunction with nucleus-specific filters. Results of segmenting potentially abnormal (cancer or neoplastic) cell nuclei in Papanicolaou smears from 0.8 square micrometers resolution images are also presented.
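
    Adaptive region growing of the kind used here accepts neighbouring pixels whose intensity stays close to the evolving region statistics. A minimal 4-connected sketch on a toy image (using the running region mean as the adaptive criterion is an illustrative assumption, not the paper's exact filter):

```python
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed`, accepting 4-neighbours whose
    intensity is within `tol` of the running region mean."""
    h, w = len(img), len(img[0])
    region = {seed}
    total = img[seed[0]][seed[1]]
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region:
                mean = total / len(region)  # adaptive: mean updates as we grow
                if abs(img[nr][nc] - mean) <= tol:
                    region.add((nr, nc))
                    total += img[nr][nc]
                    q.append((nr, nc))
    return region

img = [[9, 9, 1],
       [9, 9, 1],
       [1, 1, 1]]
print(sorted(region_grow(img, (0, 0), tol=3)))  # the four bright "nucleus" pixels
```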

  11. Technical developments in international satellite business services

    NASA Astrophysics Data System (ADS)

    Tan, P. P.

    At the conception of International Satellite Business Services (ISBS), it was a primary objective to provide flexibility for accommodating a variety of service requirements which might be established by mutual agreement between users. The design guidelines are to ensure that the space segment is efficiently utilized, while other satellite services are protected from interference. Other considerations are related to an acceptable earth segment cost, maximum connectivity in worldwide services, the capability of growth and a reasonably smooth transition into future systems, and the maintenance of high performance objectives. Attention is given to a system overview, the characteristics of satellites for ISBS, and technological developments with some application possibilities for ISBS.

  12. A Conserved Asparagine Residue in Transmembrane Segment 1 (TM1) of Serotonin Transporter Dictates Chloride-coupled Neurotransmitter Transport*

    PubMed Central

    Henry, L. Keith; Iwamoto, Hideki; Field, Julie R.; Kaufmann, Kristian; Dawson, Eric S.; Jacobs, Miriam T.; Adams, Chelsea; Felts, Bruce; Zdravkovic, Igor; Armstrong, Vanessa; Combs, Steven; Solis, Ernesto; Rudnick, Gary; Noskov, Sergei Y.; DeFelice, Louis J.; Meiler, Jens; Blakely, Randy D.

    2011-01-01

    Na+- and Cl−-dependent uptake of neurotransmitters via transporters of the SLC6 family, including the human serotonin transporter (SLC6A4), is critical for efficient synaptic transmission. Although residues in the human serotonin transporter involved in direct Cl− coordination of human serotonin transport have been identified, the role of Cl− in the transport mechanism remains unclear. Through a combination of mutagenesis, chemical modification, substrate and charge flux measurements, and molecular modeling studies, we reveal an unexpected role for the highly conserved transmembrane segment 1 residue Asn-101 in coupling Cl− binding to concentrative neurotransmitter uptake. PMID:21730057

  13. Easi-CRISPR for creating knock-in and conditional knockout mouse models using long ssDNA donors.

    PubMed

    Miura, Hiromi; Quadros, Rolen M; Gurumurthy, Channabasavaiah B; Ohtsuka, Masato

    2018-01-01

    CRISPR/Cas9-based genome editing can easily generate knockout mouse models by disrupting the gene sequence, but its efficiency for creating models that require either insertion of exogenous DNA (knock-in) or replacement of genomic segments is very poor. The majority of mouse models used in research involve knock-in (reporters or recombinases) or gene replacement (e.g., conditional knockout alleles containing exons flanked by LoxP sites). A few methods for creating such models have been reported that use double-stranded DNA as donors, but their efficiency is typically 1-10% and therefore not suitable for routine use. We recently demonstrated that long single-stranded DNAs (ssDNAs) serve as very efficient donors, both for insertion and for gene replacement. We call this method efficient additions with ssDNA inserts-CRISPR (Easi-CRISPR) because it is a highly efficient technology (efficiency is typically 30-60% and reaches as high as 100% in some cases). The protocol takes ∼2 months to generate the founder mice.

  14. A hybrid method for airway segmentation and automated measurement of bronchial wall thickness on CT.

    PubMed

    Xu, Ziyue; Bagci, Ulas; Foster, Brent; Mansoor, Awais; Udupa, Jayaram K; Mollura, Daniel J

    2015-08-01

    Inflammatory and infectious lung diseases commonly involve bronchial airway structures and morphology, and these abnormalities are often analyzed non-invasively through high resolution computed tomography (CT) scans. Assessing airway wall surfaces and the lumen is of great importance for diagnosing pulmonary diseases. However, obtaining high accuracy for a complete 3-D airway tree structure can be quite challenging. The airway tree has spiculated shapes with multiple branches and bifurcation points, as opposed to the solid single organ or tumor segmentation tasks of other applications; hence, manual segmentation is more complex than in those tasks. For computerized methods, a fundamental challenge in airway tree segmentation is the highly variable intensity levels in the lumen area, which often cause a segmentation method to leak into adjacent lung parenchyma through blurred airway walls or soft boundaries. Moreover, outer wall definition can be difficult due to the similar intensities of the airway walls and nearby structures such as vessels. In this paper, we propose a computational framework to accurately quantify airways through (i) a novel hybrid approach for precise segmentation of the lumen, and (ii) two novel methods (a spatially constrained Markov random walk method (pseudo 3-D) and a relative fuzzy connectedness method (3-D)) to estimate the airway wall thickness. We evaluate the performance of our proposed methods in comparison with commonly used algorithms on human chest CT images. Our results demonstrate that, on publicly available data sets and using standard evaluation criteria, the proposed airway segmentation method is accurate and efficient compared with the state-of-the-art methods, and the airway wall estimation algorithms identified the inner and outer airway surfaces more accurately than the most widely applied methods, namely full width at half maximum and phase congruency. Copyright © 2015. Published by Elsevier B.V.
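
    Full width at half maximum, the baseline wall-thickness estimator the authors compare against, measures the width of a 1-D intensity profile taken across the airway wall at half its peak value. A minimal sketch with linear interpolation at the half-max crossings:

```python
def fwhm(profile, dx=1.0):
    """Full width at half maximum of a 1-D intensity profile,
    with linear interpolation at the two half-max crossings."""
    peak = max(profile)
    half = peak / 2.0
    i = profile.index(peak)
    # Walk left from the peak to the last sample still above half-max.
    l = i
    while l > 0 and profile[l - 1] >= half:
        l -= 1
    left = l - (profile[l] - half) / (profile[l] - profile[l - 1]) if l > 0 else l
    # Walk right likewise.
    r = i
    while r < len(profile) - 1 and profile[r + 1] >= half:
        r += 1
    right = r + (profile[r] - half) / (profile[r] - profile[r + 1]) if r < len(profile) - 1 else r
    return (right - left) * dx

# Plateau "wall" profile: half-max 50 is crossed at x = 0.5 and x = 4.5.
print(fwhm([0, 100, 100, 100, 100, 0]))  # 4.0
```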

  15. Highlight summarization in golf videos using audio signals

    NASA Astrophysics Data System (ADS)

    Kim, Hyoung-Gook; Kim, Jin Young

    2008-01-01

    In this paper, we present an automatic summarization of highlights in golf videos based on audio information alone, without video information. The proposed highlight summarization system operates on semantic audio segmentation and detection of action units from audio signals. Studio speech, field speech, music, and applause are segmented by means of sound classification. Swings are detected by impulse onset detection. Sounds such as a swing followed by applause form a complete action unit, while studio speech and music parts are used to anchor the program structure. With the advantage of highly precise detection of applause, highlights are extracted effectively. Our experiments show high classification precision on 18 golf games, demonstrating that the proposed system is effective and computationally efficient enough for embedded consumer electronic devices.
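
    Impulse onset detection of the kind used for the swing sound can be sketched as a short-time energy jump detector (the frame size and threshold ratio here are illustrative assumptions):

```python
def detect_onsets(signal, frame=4, ratio=4.0):
    """Flag frames whose short-time energy jumps by `ratio` over the
    previous frame -- a crude impulse-onset detector."""
    energies = [sum(x * x for x in signal[i:i + frame])
                for i in range(0, len(signal) - frame + 1, frame)]
    return [i for i in range(1, len(energies))
            if energies[i] > ratio * max(energies[i - 1], 1e-9)]

# Quiet signal with one impulsive burst in the third frame (index 2).
quiet, burst = [0.01] * 4, [1.0, -0.8, 0.5, -0.2]
print(detect_onsets(quiet + quiet + burst + quiet))  # [2]
```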

  16. Multi-atlas learner fusion: An efficient segmentation approach for large-scale data.

    PubMed

    Asman, Andrew J; Huo, Yuankai; Plassard, Andrew J; Landman, Bennett A

    2015-12-01

    We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min - a 270× speedup - by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that, under the MLF framework, the large-scale data model significantly improves the segmentation over the small-scale model, and (5) indicate that the MLF framework has performance comparable to state-of-the-art multi-atlas segmentation algorithms without using non-local information. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Multineuronal vectorization is more efficient than time-segmental vectorization for information extraction from neuronal activities in the inferior temporal cortex.

    PubMed

    Kaneko, Hidekazu; Tamura, Hiroshi; Tate, Shunta; Kawashima, Takahiro; Suzuki, Shinya S; Fujita, Ichiro

    2010-08-01

    In order for patients with disabilities to control assistive devices with their own neural activity, multineuronal spike trains must be efficiently decoded because only limited computational resources can be used to generate prosthetic control signals in portable real-time applications. In this study, we compare the abilities of two vectorizing procedures (multineuronal and time-segmental) to extract information from spike trains during the same total neuron-seconds. In the multineuronal vectorizing procedure, we defined a response vector whose components represented the spike counts of one to five neurons. In the time-segmental vectorizing procedure, a response vector consisted of components representing a neuron's spike counts for one to five time-segment(s) of a response period of 1 s. Spike trains were recorded from neurons in the inferior temporal cortex of monkeys presented with visual stimuli. We examined whether the amount of information of the visual stimuli carried by these neurons differed between the two vectorizing procedures. The amount of information calculated with the multineuronal vectorizing procedure, but not the time-segmental vectorizing procedure, significantly increased with the dimensions of the response vector. We conclude that the multineuronal vectorizing procedure is superior to the time-segmental vectorizing procedure in efficiently extracting information from neuronal signals. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
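
    The two vectorizing procedures can be sketched directly from their definitions: one component per neuron counted over the whole response window, versus one component per sub-window for a single neuron:

```python
def multineuronal_vector(spike_trains, t0, t1):
    """One component per neuron: each neuron's spike count
    over the whole response window [t0, t1)."""
    return [sum(t0 <= t < t1 for t in train) for train in spike_trains]

def time_segmental_vector(train, t0, t1, n_seg):
    """One component per time segment: a single neuron's spike counts
    in n_seg equal sub-windows of the response period."""
    edges = [t0 + i * (t1 - t0) / n_seg for i in range(n_seg + 1)]
    return [sum(edges[i] <= t < edges[i + 1] for t in train)
            for i in range(n_seg)]

# Toy spike times (seconds) for three neurons over a 1 s response period.
trains = [[0.1, 0.2, 0.9], [0.5], [0.05, 0.6, 0.61, 0.95]]
print(multineuronal_vector(trains, 0.0, 1.0))        # [3, 1, 4]
print(time_segmental_vector(trains[0], 0.0, 1.0, 5))  # [1, 1, 0, 0, 1]
```

    Both vectors consume the same total neuron-seconds, which is the comparison the study makes.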

  18. An Efficient Implementation of Deep Convolutional Neural Networks for MRI Segmentation.

    PubMed

    Hoseini, Farnaz; Shahbahrami, Asadollah; Bayat, Peyman

    2018-02-27

    Image segmentation is one of the most common steps in digital image processing, partitioning a digital image into different segments. The main goal of this paper is to segment brain tumors in magnetic resonance images (MRI) using deep learning. Tumors of different shapes, sizes, brightness and textures can appear anywhere in the brain. These complexities are the reasons to choose a high-capacity Deep Convolutional Neural Network (DCNN) containing more than one layer. The proposed DCNN contains two parts: architecture and learning algorithms. The architecture and the learning algorithms are used to design a network model and to optimize parameters for the network training phase, respectively. The architecture contains five convolutional layers, all using 3 × 3 kernels, and one fully connected layer. Stacking small kernels reproduces the receptive field of larger kernels with a smaller number of parameters and fewer computations. Using the Dice Similarity Coefficient metric, we report accuracy results on the BRATS 2016 brain tumor segmentation challenge dataset for the complete, core, and enhancing regions as 0.90, 0.85, and 0.84 respectively. The learning algorithm includes task-level parallelism. All the pixels of an MR image are classified using a patch-based approach for segmentation. We attain good performance, and the experimental results show that the proposed DCNN increases the segmentation accuracy compared to previous techniques.
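
    The small-kernel argument is easy to verify by counting weights: two stacked 3 × 3 layers cover the same 5 × 5 receptive field as a single 5 × 5 layer, with fewer parameters (channel width 64 here is an illustrative choice, not the paper's configuration):

```python
def conv_params(kernel, cin, cout, layers=1):
    """Weight count of `layers` stacked kernel x kernel conv layers
    (biases ignored), channel width fixed at cout after the first layer."""
    total, c = 0, cin
    for _ in range(layers):
        total += kernel * kernel * c * cout
        c = cout
    return total

c = 64
print(conv_params(3, c, c, layers=2))  # 2 * 9 * 64 * 64 = 73728
print(conv_params(5, c, c, layers=1))  # 25 * 64 * 64 = 102400
```

    The stacked 3 × 3 pair also inserts an extra nonlinearity between the two layers, which a single 5 × 5 layer lacks.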

  19. The segmentation of bones in pelvic CT images based on extraction of key frames.

    PubMed

    Yu, Hui; Wang, Haijun; Shi, Yao; Xu, Ke; Yu, Xuyao; Cao, Yuzhen

    2018-05-22

    Bone segmentation is important in computed tomography (CT) imaging of the pelvis, which assists physicians in the early diagnosis of pelvic injury, in planning operations, and in evaluating the effects of surgical treatment. This study developed a new algorithm for the accurate, fast, and efficient segmentation of the pelvis. The proposed method consists of two main parts: the extraction of key frames and the segmentation of pelvic CT images. Key frames were extracted based on pixel difference, mutual information and normalized correlation coefficient. In the pelvis segmentation phase, skeleton extraction from CT images and a marker-based watershed algorithm were combined to segment the pelvis. To meet the requirements of clinical application, physician's judgment is needed. Therefore the proposed methodology is semi-automated. In this paper, 5 sets of CT data were used to test the overlapping area, and 15 CT images were used to determine the average deviation distance. The average overlapping area of the 5 sets was greater than 94%, and the minimum average deviation distance was approximately 0.58 pixels. In addition, the key frame extraction efficiency and the running time of the proposed method were evaluated on 20 sets of CT data. For each set, approximately 13% of the images were selected as key frames, and the average processing time was approximately 2 min (the time for manual marking was not included). The proposed method is able to achieve accurate, fast, and efficient segmentation of pelvic CT image sequences. Segmentation results not only provide an important reference for early diagnosis and decisions regarding surgical procedures, they also offer more accurate data for medical image registration, recognition and 3D reconstruction.
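
    One of the key-frame criteria named above, the normalized correlation coefficient, compares consecutive frames; a minimal sketch on flattened pixel lists:

```python
def ncc(a, b):
    """Normalized correlation coefficient between two equally sized
    frames (flattened pixel lists); 1 = identical structure, -1 = inverted."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

frame1 = [10, 20, 30, 40]
frame2 = [12, 22, 32, 42]   # same structure, shifted brightness
frame3 = [40, 30, 20, 10]   # reversed structure
print(ncc(frame1, frame2))  # close to 1.0 (brightness offset is ignored)
print(ncc(frame1, frame3))  # close to -1.0
```

    A frame whose NCC against its predecessor drops below a threshold is a key-frame candidate; the paper combines this with pixel difference and mutual information.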

  20. Bayesian segmentation of atrium wall using globally-optimal graph cuts on 3D meshes.

    PubMed

    Veni, Gopalkrishna; Fu, Zhisong; Awate, Suyash P; Whitaker, Ross T

    2013-01-01

    Efficient segmentation of the left atrium (LA) wall from delayed enhancement MRI is challenging due to inconsistent contrast, combined with noise, and high variation in atrial shape and size. We present a surface-detection method that is capable of extracting the atrial wall by computing an optimal a-posteriori estimate. This estimation is done on a set of nested meshes, constructed from an ensemble of segmented training images, and graph cuts on an associated multi-column, proper-ordered graph. The graph/mesh is a part of a template/model that has an associated set of learned intensity features. When this mesh is overlaid onto a test image, it produces a set of costs which lead to an optimal segmentation. The 3D mesh has an associated weighted, directed multi-column graph with edges that encode smoothness and inter-surface penalties. Unlike previous graph-cut methods that impose hard constraints on the surface properties, the proposed method follows from a Bayesian formulation resulting in soft penalties on spatial variation of the cuts through the mesh. The novelty of this method also lies in the construction of proper-ordered graphs on complex shapes for choosing among distinct classes of base shapes for automatic LA segmentation. We evaluate the proposed segmentation framework on simulated and clinical cardiac MRI.

  1. Dynamic thermal characteristics of heat pipe via segmented thermal resistance model for electric vehicle battery cooling

    NASA Astrophysics Data System (ADS)

    Liu, Feifei; Lan, Fengchong; Chen, Jiqing

    2016-07-01

    Heat pipe cooling for battery thermal management systems (BTMSs) in electric vehicles (EVs) is growing due to its advantages of high cooling efficiency, compact structure and flexible geometry. Considering the transient conduction, phase change and uncertain thermal conditions in a heat pipe, it is challenging to obtain the dynamic thermal characteristics accurately in such a complex heat and mass transfer process. In this paper, a "segmented" thermal resistance model of a heat pipe is proposed based on the thermal circuit method. The equivalent conductivities of different segments, viz. the evaporator and condenser of the pipe, are used to determine their own thermal parameters and conditions, integrated into the thermal model of the battery for a complete three-dimensional (3D) computational fluid dynamics (CFD) simulation. The proposed "segmented" model proves more precise than the "non-segmented" model in comparisons of simulated and experimental temperature distribution and variation of an ultra-thin micro heat pipe (UMHP) battery pack, and yields less calculation error in capturing dynamic thermal behavior for exact thermal design, management and control of heat pipe BTMSs. Using the "segmented" model, the cooling effect of the UMHP pack with different natural/forced convection and arrangements is predicted, and the results correspond well to the tests.
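
    A thermal circuit of this kind chains per-segment resistances, so each segment adds a temperature drop of q·R at heat load q. A minimal sketch with hypothetical resistance values (not the paper's identified parameters):

```python
def heat_pipe_temperatures(q, t_cold, resistances):
    """Series thermal circuit: walk from the cooled condenser end back
    toward the evaporator, adding q * R per segment (q in W, R in K/W)."""
    t = t_cold
    temps = []
    for r in resistances:
        t += q * r
        temps.append(t)
    return temps

# Hypothetical segments, condenser -> adiabatic -> evaporator, at 20 W load.
r_cond, r_adia, r_evap = 0.15, 0.05, 0.25   # K/W, illustrative only
temps = heat_pipe_temperatures(20.0, t_cold=25.0,
                               resistances=[r_cond, r_adia, r_evap])
print(temps)  # evaporator (battery-side) temperature is the last entry
```

    The "segmented" model's point is that giving the evaporator and condenser their own equivalent conductivities (hence resistances) captures the dynamics better than one lumped resistance.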

  2. Combining watershed and graph cuts methods to segment organs at risk in radiotherapy

    NASA Astrophysics Data System (ADS)

    Dolz, Jose; Kirisli, Hortense A.; Viard, Romain; Massoptier, Laurent

    2014-03-01

    Computer-aided segmentation of anatomical structures in medical images is a valuable tool for efficient radiation therapy planning (RTP). As delineation errors highly affect the radiation oncology treatment, it is crucial to delineate geometric structures accurately. In this paper, a semi-automatic segmentation approach for computed tomography (CT) images, based on watershed and graph-cuts methods, is presented. The watershed pre-segmentation groups small areas of similar intensities into homogeneous labels, which are subsequently used as input for the graph-cuts algorithm. This methodology does not require prior knowledge of the structure to be segmented; even so, it performs well with complex shapes and low intensities. The presented method also allows the user to add foreground and background strokes in any of the three standard orthogonal views - axial, sagittal or coronal - making the interaction with the algorithm easy and fast. Hence, the segmentation information is propagated within the whole volume, providing a spatially coherent result. The proposed algorithm has been evaluated using 9 CT volumes, by comparing its segmentation performance over several organs - lungs, liver, spleen, heart and aorta - to that of manual delineation by experts. A Dice's coefficient higher than 0.89 was achieved in every case, demonstrating that the proposed approach works well for all the anatomical structures analyzed. Due to the quality of the results, the introduction of the proposed approach into the RTP process will be a helpful tool for organs at risk (OARs) segmentation.

  3. Lung Segmentation Refinement based on Optimal Surface Finding Utilizing a Hybrid Desktop/Virtual Reality User Interface

    PubMed Central

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R.

    2013-01-01

    Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation on 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The performed experiments exhibited a significant increase in performance in terms of mean absolute surface distance errors (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting a real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction per case was about 2 min. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. 
The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. PMID:23415254
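
    The mean absolute surface distance used as the error metric above can be sketched as the symmetric average nearest-neighbour distance between two surfaces represented as point sets:

```python
def mean_abs_surface_distance(a, b):
    """Symmetric mean absolute distance between two surfaces given as
    point sets: average nearest-neighbour distance, in both directions."""
    def dist(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5

    def one_way(src, dst):
        return sum(min(dist(p, q) for q in dst) for p in src) / len(src)

    return (one_way(a, b) + one_way(b, a)) / 2.0

# Two toy contours one unit apart everywhere.
s1 = [(x, 0.0) for x in range(5)]
s2 = [(x, 1.0) for x in range(5)]
print(mean_abs_surface_distance(s1, s2))  # 1.0
```

    The reported drop from 2.54 mm to 1.11 mm is a reduction in exactly this quantity, averaged over the lung surfaces.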

  4. An ablative pulsed plasma thruster with a segmented anode

    NASA Astrophysics Data System (ADS)

    Zhang, Zhe; Ren, Junxue; Tang, Haibin; Ling, William Yeong Liang; York, Thomas M.

    2018-01-01

    An ablative pulsed plasma thruster (APPT) design with a ‘segmented anode’ is proposed in this paper. We aim to examine the effect that this asymmetric electrode configuration (a normal cathode and a segmented anode) has on the performance of an APPT. The magnetic field of the discharge arc, plasma density in the exit plume, impulse bit, and thrust efficiency were studied using a magnetic probe, Langmuir probe, thrust stand, and mass bit measurements, respectively. When compared with conventional symmetric parallel electrodes, the segmented anode APPT shows an improvement in the impulse bit of up to 28%. The thrust efficiency is also improved by 49% (from 5.3% to 7.9% for conventional and segmented designs, respectively). Long-exposure broadband emission images of the discharge morphology show that compared with a normal anode, a segmented anode results in clear differences in the luminous discharge morphology and better collimation of the plasma. The magnetic probe data indicate that the segmented anode APPT exhibits a higher current density in the discharge arc. Furthermore, Langmuir probe data collected from the central exit plane show that the peak electron density is 75% higher than with conventional parallel electrodes. These results are believed to be fundamental to the physical mechanisms behind the increased impulse bit of an APPT with a segmented electrode.
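
    Pulsed-thruster efficiency relates the kinetic energy of the expelled mass bit to the stored discharge energy, η = I_bit² / (2 m_bit E). A sketch with hypothetical values of a plausible order for a small APPT (not the paper's measurements):

```python
def thrust_efficiency(impulse_bit, mass_bit, energy):
    """Pulsed-thruster efficiency: kinetic energy of the expelled mass
    bit, I_bit^2 / (2 m_bit), divided by the discharge energy per pulse."""
    return impulse_bit ** 2 / (2.0 * mass_bit * energy)

# Hypothetical pulse: 280 uN*s impulse bit, 25 ug mass bit, 20 J energy.
eta = thrust_efficiency(280e-6, 25e-9, 20.0)
print(f"{eta:.1%}")  # 7.8%
```

    With impulse bit and mass bit measured per pulse, this single formula is how the 5.3% and 7.9% figures in the abstract are obtained.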

  5. Towards online iris and periocular recognition under relaxed imaging constraints.

    PubMed

    Tan, Chun-Wei; Kumar, Ajay

    2013-10-01

    Online iris recognition using distantly acquired images in a less imaging constrained environment requires the development of a efficient iris segmentation approach and recognition strategy that can exploit multiple features available for the potential identification. This paper presents an effective solution toward addressing such a problem. The developed iris segmentation approach exploits a random walker algorithm to efficiently estimate coarsely segmented iris images. These coarsely segmented iris images are postprocessed using a sequence of operations that can effectively improve the segmentation accuracy. The robustness of the proposed iris segmentation approach is ascertained by providing comparison with other state-of-the-art algorithms using publicly available UBIRIS.v2, FRGC, and CASIA.v4-distance databases. Our experimental results achieve improvement of 9.5%, 4.3%, and 25.7% in the average segmentation accuracy, respectively, for the UBIRIS.v2, FRGC, and CASIA.v4-distance databases, as compared with most competing approaches. We also exploit the simultaneously extracted periocular features to achieve significant performance improvement. The joint segmentation and combination strategy suggest promising results and achieve average improvement of 132.3%, 7.45%, and 17.5% in the recognition performance, respectively, from the UBIRIS.v2, FRGC, and CASIA.v4-distance databases, as compared with the related competing approaches.

  6. Voxel- and Graph-Based Point Cloud Segmentation of 3D Scenes Using Perceptual Grouping Laws

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Hoegner, L.; Tuttas, S.; Stilla, U.

    2017-05-01

    Segmentation is the fundamental step for recognizing and extracting objects from the point cloud of a 3D scene. In this paper, we present a strategy for point cloud segmentation using a voxel structure and graph-based clustering with perceptual grouping laws, which allows a learning-free, completely automatic but parametric solution for segmenting 3D point clouds. More precisely, two segmentation methods utilizing voxel and supervoxel structures are reported and tested. The voxel-based data structure increases the efficiency and robustness of the segmentation process, suppressing the negative effects of noise, outliers, and uneven point densities. The clustering of voxels and supervoxels is carried out using graph theory on the basis of local contextual information, whereas conventional clustering algorithms commonly use merely pairwise information. By the use of perceptual laws, our method conducts the segmentation in a purely geometric way, avoiding the use of RGB color and intensity information, so that it can be applied to more general applications. Experiments using different datasets have demonstrated that our proposed methods can achieve good results, especially for complex scenes and nonplanar object surfaces. Quantitative comparisons between our methods and other representative segmentation methods also confirm the effectiveness and efficiency of our proposals.
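
    The voxel structure underlying both methods amounts to bucketing points by integer grid index, which is what suppresses uneven point densities before any clustering runs. A minimal sketch:

```python
from collections import defaultdict

def voxelize(points, size):
    """Bucket 3-D points into a sparse voxel grid keyed by the integer
    index of the cell each point falls into."""
    grid = defaultdict(list)
    for p in points:
        key = tuple(int(c // size) for c in p)
        grid[key].append(p)
    return grid

pts = [(0.1, 0.2, 0.0), (0.3, 0.1, 0.2), (2.1, 2.2, 2.0)]
grid = voxelize(pts, size=1.0)
print(sorted(grid))          # occupied cells: [(0, 0, 0), (2, 2, 2)]
print(len(grid[(0, 0, 0)]))  # 2 points collapsed into one voxel
```

    Clustering then operates on the occupied voxels (graph nodes) rather than on raw points, which is where the efficiency gain comes from.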

  7. An Approach to a Comprehensive Test Framework for Analysis and Evaluation of Text Line Segmentation Algorithms

    PubMed Central

    Brodic, Darko; Milivojevic, Dragan R.; Milivojevic, Zoran N.

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation is the key step for correct optical character recognition. Many tests for the evaluation of text line segmentation algorithms use text databases as reference templates; because of this mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-line text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for evaluating algorithm efficiency, based on the obtained error type classification, are proposed: the first is based on the segmentation line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as supplements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe the measurement procedures. PMID:22164106

  8. An approach to a comprehensive test framework for analysis and evaluation of text line segmentation algorithms.

    PubMed

    Brodic, Darko; Milivojevic, Dragan R; Milivojevic, Zoran N

    2011-01-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation is the key step for correct optical character recognition. Many tests for the evaluation of text line segmentation algorithms use text databases as reference templates; because of this mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-line text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for evaluating algorithm efficiency, based on the obtained error type classification, are proposed: the first is based on the segmentation line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as supplements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe the measurement procedures.

  9. Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks

    PubMed Central

    Ibragimov, Bulat; Xing, Lei

    2017-01-01

    Purpose Accurate segmentation of organs-at-risk (OARs) is the key step for efficient planning of radiation therapy for head and neck (HaN) cancer treatment. In this work, we proposed the first deep learning-based algorithm for segmentation of OARs in HaN CT images and compared its performance against state-of-the-art automated segmentation algorithms, commercial software, and inter-observer variability. Methods Convolutional neural networks (CNNs) – a concept from the field of deep learning – were used to study consistent intensity patterns of OARs from training CT images and to segment the OAR in a previously unseen test CT image. For CNN training, we extracted a representative number of positive intensity patches around voxels that belong to the OAR of interest in training CT images, and negative intensity patches around voxels that belong to the surrounding structures. These patches then passed through a sequence of CNN layers that captured local image features such as corners, end-points, and edges, and combined them into more complex high-order features that can efficiently describe the OAR. The trained network was applied to classify voxels in a region of interest in the test image where the corresponding OAR is expected to be located. We then smoothed the obtained classification results using a Markov random field algorithm. We finally extracted the largest connected component of the smoothed voxels classified as the OAR by the CNN and performed dilate-erode operations to remove cavities of the component, which resulted in segmentation of the OAR in the test image. Results The performance of CNNs was validated on segmentation of the spinal cord, mandible, parotid glands, submandibular glands, larynx, pharynx, eye globes, optic nerves, and optic chiasm using 50 CT images. The obtained segmentation results varied from 37.4% Dice coefficient (DSC) for the chiasm to 89.5% DSC for the mandible.
We also analyzed the performance of state-of-the-art algorithms and commercial software reported in the literature, and observed that CNNs demonstrate similar or superior performance on segmentation of the spinal cord, mandible, parotid glands, larynx, pharynx, eye globes, and optic nerves, but inferior performance on segmentation of the submandibular glands and optic chiasm. Conclusion We concluded that convolutional neural networks can accurately segment most OARs using a representative database of 50 HaN CT images. At the same time, the inclusion of additional information, e.g. MR images, may be beneficial for some OARs with poorly visible boundaries. PMID:28205307
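
    The post-processing chain described (largest connected component, then dilate-erode to close cavities) can be sketched with scipy.ndimage. The structuring element and iteration count below are assumptions; the paper does not specify them.

```python
# Sketch of the described post-processing on a binary voxel classification:
# keep the largest connected component, then close internal cavities.
import numpy as np
from scipy import ndimage

def postprocess(voxel_mask, closing_iters=2):
    labels, n = ndimage.label(voxel_mask)
    if n == 0:
        return voxel_mask
    # size of each component (labels are 1..n)
    sizes = ndimage.sum(voxel_mask, labels, index=range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)
    # dilate-erode (binary closing) removes small cavities
    return ndimage.binary_closing(largest, iterations=closing_iters)

mask = np.zeros((20, 20, 20), dtype=bool)
mask[5:15, 5:15, 5:15] = True    # the "organ"
mask[8, 8, 8] = False            # internal cavity to be closed
mask[0, 0, 0] = True             # spurious isolated voxel
out = postprocess(mask)
```

    The spurious voxel is dropped by the largest-component step, while the cavity is filled by the closing.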

  10. Automatic Sea Bird Detection from High Resolution Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Mader, S.; Grenzdörffer, G. J.

    2016-06-01

    Great efforts are presently being made in the scientific community to develop computerized and (fully) automated image processing methods that allow efficient and automatic monitoring of sea birds and marine mammals in ever-growing amounts of aerial imagery. Currently, however, the major part of the processing is still conducted by specially trained professionals, who visually examine the images and detect and classify the requested subjects. This is a very tedious task, particularly when the rate of void images regularly exceeds 90%. In the context of this contribution we will present our work aiming to support the processing of aerial images with modern methods from the field of image processing. We will especially focus on the combination of local, region-based feature detection and piecewise global image segmentation for automatic detection of different sea bird species. Large image dimensions resulting from the use of medium- and large-format digital cameras in aerial surveys inhibit the applicability of image processing methods based on global operations. In order to handle those image sizes efficiently and nevertheless take advantage of globally operating segmentation algorithms, we will describe the combined use of a simple, performant feature detector based on local operations on the original image with a complex global segmentation algorithm operating on extracted sub-images. The resulting exact segmentation of possible candidates then serves as a basis for determining feature vectors for subsequent elimination of false candidates and for classification tasks.

  11. Algorithm for protecting light-trees in survivable mesh wavelength-division-multiplexing networks

    NASA Astrophysics Data System (ADS)

    Luo, Hongbin; Li, Lemin; Yu, Hongfang

    2006-12-01

    Wavelength-division-multiplexing (WDM) technology is expected to facilitate bandwidth-intensive multicast applications such as high-definition television. A single fiber cut in a WDM mesh network, however, can disrupt the dissemination of information to several destinations on a light-tree based multicast session. Thus it is imperative to protect multicast sessions by reserving redundant resources. We propose a novel and efficient algorithm for protecting light-trees in survivable WDM mesh networks. The algorithm is called segment-based protection with sister node first (SSNF), whose basic idea is to protect a light-tree using a set of backup segments with a higher priority to protect the segments from a branch point to its children (sister nodes). The SSNF algorithm differs from the segment protection scheme proposed in the literature in how the segments are identified and protected. Our objective is to minimize the network resources used for protecting each primary light-tree such that the blocking probability can be minimized. To verify the effectiveness of the SSNF algorithm, we conduct extensive simulation experiments. The simulation results demonstrate that the SSNF algorithm outperforms existing algorithms for the same problem.
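
    A light-tree can be split into the segments such a scheme protects by walking the tree and cutting at branch points. The rule below (a segment runs from the root or a branch point down to the next branch point or leaf, with consecutive segments sharing the branch point as an endpoint) is a plausible reading of segment protection, not the paper's exact SSNF definition.

```python
# Hedged sketch: decompose a light-tree into protectable segments.
def tree_segments(children, root):
    """children: dict node -> list of child nodes. Returns node-path segments."""
    segments, stack = [], [[root]]
    while stack:
        seg = stack.pop()
        node = seg[-1]
        while len(children.get(node, [])) == 1:   # extend through chain nodes
            node = children[node][0]
            seg.append(node)
        segments.append(seg)
        for kid in children.get(node, []):        # branch point: new segments
            stack.append([node, kid])             # share it as an endpoint
    return segments

# Toy light-tree: source s, branch point a, destinations c and d.
tree = {'s': ['a'], 'a': ['b', 'c'], 'b': ['d'], 'c': []}
segs = tree_segments(tree, 's')
```

    On this toy tree the decomposition yields the segments s-a, a-c, and a-b-d; SSNF would then prioritize backup paths covering the segments leaving branch point a.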

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, R. O.; Burke, J. T.; Casperson, R. J.

    Hyperion is a new high-efficiency charged-particle γ-ray detector array consisting of a segmented silicon telescope for charged-particle detection and up to fourteen high-purity germanium clover detectors for the detection of coincident γ rays. The array will be used in nuclear physics measurements and Stockpile Stewardship studies and replaces the STARLiTeR array. This article discusses the features of the array and presents data collected during the commissioning experiment.

  13. Optimized efficiency in InP nanowire solar cells with accurate 1D analysis

    NASA Astrophysics Data System (ADS)

    Chen, Yang; Kivisaari, Pyry; Pistol, Mats-Erik; Anttu, Nicklas

    2018-01-01

    Semiconductor nanowire arrays are a promising candidate for next-generation solar cells due to enhanced absorption and reduced material consumption. However, to optimize their performance, time-consuming three-dimensional (3D) opto-electronic modeling is usually performed. Here, we develop an accurate one-dimensional (1D) modeling method for the analysis. The 1D modeling is about 400 times faster than 3D modeling and allows direct application of concepts from planar pn-junctions to the analysis of nanowire solar cells. We show that the superposition principle can break down in InP nanowires due to strong surface recombination in the depletion region, giving rise to an IV behavior similar to that with low shunt resistance. Importantly, we find that the open-circuit voltage of nanowire solar cells is typically limited by contact leakage. Therefore, to increase the efficiency, we have investigated the effect of high-bandgap GaP carrier-selective contact segments at the top and bottom of the InP nanowire, and we find that GaP contact segments improve the solar cell efficiency. Next, we discuss the merit of p-i-n and p-n junction concepts in nanowire solar cells. With GaP carrier-selective top and bottom contact segments in the InP nanowire array, we find that a p-n junction design is superior to a p-i-n junction design. We predict a best efficiency of 25% for a surface recombination velocity of 4500 cm s-1, corresponding to a non-radiative lifetime of 1 ns in p-n junction cells. The developed 1D model can be used for general modeling of axial p-n and p-i-n junctions in semiconductor nanowires. This also includes LED applications, and we expect faster progress in device modeling using our method.
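
    The "IV behavior similar to that with low shunt resistance" can be illustrated with a textbook one-diode model: photocurrent minus ideal-diode dark current minus a shunt leakage term. All parameter values below are arbitrary teaching numbers, not fitted to the InP nanowire devices.

```python
# Illustrative one-diode IV sketch: a strong shunt-like leakage path
# degrades the fill factor, mimicking the reported recombination effect.
import numpy as np

KT = 0.02585                                   # thermal voltage at ~300 K (V)

def iv_curve(v, j_ph=30.0, j0=1e-9, r_sh=10.0):
    """Current density (mA/cm^2): photocurrent minus diode and shunt losses.
    r_sh is a lumped shunt resistance (illustrative units)."""
    return j_ph - j0 * (np.exp(v / KT) - 1.0) - v / r_sh

def fill_factor(r_sh):
    v = np.linspace(0.0, 0.7, 2000)
    j = iv_curve(v, r_sh=r_sh)
    jsc = j[0]                                 # short-circuit current density
    voc = v[np.argmax(j <= 0)]                 # first zero crossing of the IV
    p = v * np.clip(j, 0.0, None)              # power density
    return p.max() / (jsc * voc)

ff_good = fill_factor(100.0)                   # well-behaved cell
ff_bad = fill_factor(0.02)                     # strong shunt-like leakage
```

    A well-behaved cell gives a fill factor above 0.8, while the heavily shunted curve drops well below 0.5, the same qualitative signature the abstract attributes to surface recombination in the depletion region.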

  14. Optimized efficiency in InP nanowire solar cells with accurate 1D analysis.

    PubMed

    Chen, Yang; Kivisaari, Pyry; Pistol, Mats-Erik; Anttu, Nicklas

    2018-01-26

    Semiconductor nanowire arrays are a promising candidate for next-generation solar cells due to enhanced absorption and reduced material consumption. However, to optimize their performance, time-consuming three-dimensional (3D) opto-electronic modeling is usually performed. Here, we develop an accurate one-dimensional (1D) modeling method for the analysis. The 1D modeling is about 400 times faster than 3D modeling and allows direct application of concepts from planar pn-junctions to the analysis of nanowire solar cells. We show that the superposition principle can break down in InP nanowires due to strong surface recombination in the depletion region, giving rise to an IV behavior similar to that with low shunt resistance. Importantly, we find that the open-circuit voltage of nanowire solar cells is typically limited by contact leakage. Therefore, to increase the efficiency, we have investigated the effect of high-bandgap GaP carrier-selective contact segments at the top and bottom of the InP nanowire, and we find that GaP contact segments improve the solar cell efficiency. Next, we discuss the merit of p-i-n and p-n junction concepts in nanowire solar cells. With GaP carrier-selective top and bottom contact segments in the InP nanowire array, we find that a p-n junction design is superior to a p-i-n junction design. We predict a best efficiency of 25% for a surface recombination velocity of 4500 cm s-1, corresponding to a non-radiative lifetime of 1 ns in p-n junction cells. The developed 1D model can be used for general modeling of axial p-n and p-i-n junctions in semiconductor nanowires. This also includes LED applications, and we expect faster progress in device modeling using our method.

  15. Segmentation and classification of cell cycle phases in fluorescence imaging.

    PubMed

    Ersoy, Ilker; Bunyak, Filiz; Chagin, Vadim; Cardoso, M Christina; Palaniappan, Kannappan

    2009-01-01

    Current chemical biology methods for studying the spatiotemporal correlation between biochemical networks and cell cycle phase progression in live cells typically use fluorescence-based imaging of fusion proteins. Stable cell lines expressing the fluorescently tagged protein GFP-PCNA produce rich, dynamically varying sub-cellular foci patterns characterizing the cell cycle phases, including progress during the S-phase. Variable fluorescence patterns, drastic changes in SNR, shape and position changes, and an abundance of touching cells require sophisticated algorithms for reliable automatic segmentation and cell cycle classification. We extend the recently proposed graph partitioning active contours (GPAC) approach for fluorescence-based nucleus segmentation using regional density functions and dramatically improve its efficiency, making it scalable for high-content microscopy imaging. We utilize surface shape properties of the GFP-PCNA intensity field to obtain descriptors of foci patterns, perform automated cell cycle phase classification, and report quantitative performance by comparing our results with manually labeled data.

  16. Volumetric glioma quantification: comparison of manual and semi-automatic tumor segmentation for the quantification of tumor growth.

    PubMed

    Odland, Audun; Server, Andres; Saxhaug, Cathrine; Breivik, Birger; Groote, Rasmus; Vardal, Jonas; Larsson, Christopher; Bjørnerud, Atle

    2015-11-01

    Volumetric magnetic resonance imaging (MRI) is now widely available and routinely used in the evaluation of high-grade gliomas (HGGs). Ideally, volumetric measurements should be included in this evaluation. However, manual tumor segmentation is time-consuming and suffers from inter-observer variability; thus, tools for semi-automatic tumor segmentation are needed. Our aims were to present a semi-automatic method (SAM) for segmentation of HGGs, to compare this method with manual segmentation performed by experts, and to examine the inter-observer variability among experts manually segmenting HGGs using volumetric MRIs. Twenty patients with HGGs were included. All patients underwent surgical resection prior to inclusion. Each patient underwent several MRI examinations during and after adjuvant chemoradiation therapy. Three experts performed manual segmentation. The results of tumor segmentation by the experts and by the SAM were compared using Dice coefficients and kappa statistics. A relatively close agreement was seen between two of the experts and the SAM, while the third expert disagreed considerably with the other experts and the SAM. An important reason for this disagreement was a different interpretation of contrast enhancement as either surgically-induced or glioma-induced. The time required for manual tumor segmentation was on average 16 min per scan, whereas editing of the tumor masks produced by the SAM required on average less than 2 min per scan. Manual segmentation of HGGs is very time-consuming, and using the SAM could increase the efficiency of this process. However, the accuracy of the SAM ultimately depends on the expert doing the editing. Our study confirmed a considerable inter-observer variability among experts defining tumor volume from volumetric MRIs. © The Foundation Acta Radiologica 2014.
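
    The Dice coefficient used to compare expert and SAM segmentations has a direct definition on binary masks, 2|A∩B| / (|A|+|B|):

```python
# Dice overlap for binary segmentation masks (the agreement measure
# referred to in the abstract).
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

m1 = np.zeros((10, 10), dtype=bool); m1[2:8, 2:8] = True   # 36 voxels
m2 = np.zeros((10, 10), dtype=bool); m2[4:8, 2:8] = True   # 24 voxels, inside m1
# dice(m1, m2) = 2*24 / (36 + 24) = 0.8
```

    A value of 1 means perfect overlap and 0 means no overlap; the convention of returning 1 for two empty masks is an implementation choice here.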

  17. Vision Sensor-Based Road Detection for Field Robot Navigation

    PubMed Central

    Lu, Keyu; Li, Jian; An, Xiangjing; He, Hangen

    2015-01-01

    Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art. PMID:26610514

  18. Recent advances in quantitative analysis of fluid interfaces in multiphase fluid flow measured by synchrotron-based x-ray microtomography

    NASA Astrophysics Data System (ADS)

    Schlueter, S.; Sheppard, A.; Wildenschild, D.

    2013-12-01

    Imaging of fluid interfaces in three-dimensional porous media via x-ray microtomography is an efficient means to test thermodynamically derived predictions on the relationship between capillary pressure, fluid saturation, and specific interfacial area (Pc-Sw-Anw) in partially saturated porous media. Various experimental studies to date validate the uniqueness of the Pc-Sw-Anw relationship under static conditions, and with current technological progress, direct imaging of moving interfaces under dynamic conditions is also becoming available. Image acquisition and subsequent image processing currently involve many steps, each prone to operator bias, such as merging different scans of the same sample obtained at different beam energies into a single image, or generating isosurfaces from the segmented multiphase image on which the interface properties are usually calculated. We demonstrate that with recent advancements in (i) image enhancement methods, (ii) multiphase segmentation methods, and (iii) methods of structural analysis, we can considerably decrease the time and cost of image acquisition and the uncertainty associated with the measurement of interfacial properties. In particular, we highlight three notorious problems in multiphase image processing and provide efficient solutions for each: (i) Due to noise, partial volume effects, and imbalanced volume fractions, automated histogram-based threshold detection methods frequently fail. However, these impairments can be mitigated with modern denoising methods, special treatment of gray value edges, and adaptive histogram equalization, such that most of the standard methods for threshold detection (Otsu, fuzzy c-means, minimum error, maximum entropy) coincide at the same set of values. (ii) Partial volume effects due to blur may produce apparent water films around solid surfaces that alter the specific fluid-fluid interfacial area (Anw) considerably.
In a synthetic test image some local segmentation methods like Bayesian Markov random field, converging active contours and watershed segmentation reduced the error in Anw associated with apparent water films from 21% to 6-11%. (iii) The generation of isosurfaces from the segmented data usually requires a lot of postprocessing in order to smooth the surface and check for consistency errors. This can be avoided by calculating specific interfacial areas directly on the segmented voxel image by means of Minkowski functionals which is highly efficient and less error prone.
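
    Computing interfacial area directly on the segmented voxel image, as point (iii) advocates, can be sketched by counting voxel faces shared between the two fluid phases along each axis. This is the simplest voxel-face surface estimator; the cited work uses proper Minkowski functionals, which also correct for voxelization bias.

```python
# Count fluid-fluid voxel faces in a labeled 3-D image as a minimal
# stand-in for a Minkowski-functional interfacial-area measurement.
import numpy as np

def interface_area(labels, a=1, b=2, voxel_len=1.0):
    faces = 0
    for ax in range(labels.ndim):
        s1 = [slice(None)] * labels.ndim; s1[ax] = slice(None, -1)
        s2 = [slice(None)] * labels.ndim; s2[ax] = slice(1, None)
        l1, l2 = labels[tuple(s1)], labels[tuple(s2)]
        # faces where phase a touches phase b in either order
        faces += np.sum((l1 == a) & (l2 == b)) + np.sum((l1 == b) & (l2 == a))
    return faces * voxel_len ** 2

vol = np.ones((4, 4, 4), dtype=int)      # phase 1 (wetting) everywhere
vol[:2] = 2                              # phase 2 fills half the cube
# planar interface of 4 x 4 voxel faces
```

    Because no isosurface is generated, there is no smoothing or consistency-checking step; this is exactly the efficiency argument made in the abstract.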

  19. Self-assembled micellar aggregates based monomethoxyl poly(ethylene glycol)-b-poly(ε-caprolactone)-b-poly(aminoethyl methacrylate) triblock copolymers as efficient gene delivery vectors.

    PubMed

    Ma, Ming; Li, Feng; Liu, Xiu-hong; Yuan, Zhe-fan; Chen, Fu-jie; Zhuo, Ren-xi

    2010-10-01

    Amphiphilic triblock copolymers monomethoxyl poly(ethylene glycol) (mPEG)-b-poly(ε-caprolactone) (PCL)-b-poly(aminoethyl methacrylate)s (PAMAs) (mPECAs) were synthesized as gene delivery vectors. They exhibited lower cytotoxicity and higher transfection efficiency in COS-7 cells in the presence of serum compared with 25 kDa bPEI. The influence of the mPEG and PCL segments in mPECAs was evaluated by comparison with the corresponding diblock copolymers. The studies showed that the incorporation of the hydrophobic PCL segment in the triblock copolymers affected the pDNA-binding capability and the surface charges of the complexes, because the formation of micelles increased the local charges. The presence of the mPEG segment in the gene vector decreased the surface charges of the complexes and increased their stability in serum because of the steric hindrance effect. It was also found that the combination of PEG and PCL segments into one macromolecule might lead to a synergistic effect for better transfection efficiency in serum.

  20. A novel software and conceptual design of the hardware platform for intensity modulated radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Dan; Ruan, Dan; O’Connor, Daniel

    Purpose: To deliver high quality intensity modulated radiotherapy (IMRT) using novel generalized sparse orthogonal collimators (SOCs), the authors introduce a novel direct aperture optimization (DAO) approach based on discrete rectangular representation. Methods: A total of seven patients were included: two glioblastoma multiforme, three head & neck (including one with three prescription doses), and two lung. 20 noncoplanar beams were selected using a column generation and pricing optimization method. The SOC is a generalization of conventional orthogonal collimators with N leaves in each collimator bank, where N = 1, 2, or 4; the SOC degenerates to conventional jaws when N = 1. For SOC-based IMRT, rectangular aperture optimization (RAO) was performed to optimize the fluence maps using the rectangular representation, producing fluence maps that can be directly converted into a set of deliverable rectangular apertures. In order to optimize the dose distribution and minimize the number of apertures used, the overall objective was formulated to incorporate an L2 penalty reflecting the difference between the prescription and the projected doses, and an L1 sparsity regularization term to encourage a low number of nonzero rectangular basis coefficients. The optimization problem was solved using the Chambolle-Pock algorithm, a first-order primal-dual algorithm. The performance of RAO was compared to conventional two-step IMRT optimization including fluence map optimization and direct stratification for multileaf collimator (MLC) segmentation (DMS) using the same number of segments. For the RAO plans, segment travel time for SOC delivery was evaluated for the N = 1, N = 2, and N = 4 SOC designs to characterize the improvement in delivery efficiency as a function of N. Results: Comparable PTV dose homogeneity and coverage were observed between the RAO and the DMS plans. The RAO plans were slightly superior to the DMS plans in sparing critical structures.
On average, the maximum and mean critical organ doses were reduced by 1.94% and 1.44% of the prescription dose. The average number of delivery segments was 12.68 segments per beam for both the RAO and DMS plans. The N = 2 and N = 4 SOC designs were, on average, 1.56 and 1.80 times more efficient to deliver than the N = 1 SOC design. The mean aperture size produced by the RAO plans was 3.9 times larger than that of the DMS plans. Conclusions: The DAO and dose domain optimization approach enabled high quality IMRT plans using a low-complexity collimator setup. The dosimetric quality is comparable or slightly superior to conventional MLC-based IMRT plans using the same number of delivery segments. SOC IMRT delivery efficiency can be significantly improved by increasing the number of leaves, which nevertheless remains significantly lower than the number of leaves in a typical MLC.

  1. A novel software and conceptual design of the hardware platform for intensity modulated radiation therapy.

    PubMed

    Nguyen, Dan; Ruan, Dan; O'Connor, Daniel; Woods, Kaley; Low, Daniel A; Boucher, Salime; Sheng, Ke

    2016-02-01

    To deliver high quality intensity modulated radiotherapy (IMRT) using novel generalized sparse orthogonal collimators (SOCs), the authors introduce a novel direct aperture optimization (DAO) approach based on discrete rectangular representation. A total of seven patients were included: two glioblastoma multiforme, three head & neck (including one with three prescription doses), and two lung. 20 noncoplanar beams were selected using a column generation and pricing optimization method. The SOC is a generalization of conventional orthogonal collimators with N leaves in each collimator bank, where N = 1, 2, or 4; the SOC degenerates to conventional jaws when N = 1. For SOC-based IMRT, rectangular aperture optimization (RAO) was performed to optimize the fluence maps using the rectangular representation, producing fluence maps that can be directly converted into a set of deliverable rectangular apertures. In order to optimize the dose distribution and minimize the number of apertures used, the overall objective was formulated to incorporate an L2 penalty reflecting the difference between the prescription and the projected doses, and an L1 sparsity regularization term to encourage a low number of nonzero rectangular basis coefficients. The optimization problem was solved using the Chambolle-Pock algorithm, a first-order primal-dual algorithm. The performance of RAO was compared to conventional two-step IMRT optimization including fluence map optimization and direct stratification for multileaf collimator (MLC) segmentation (DMS) using the same number of segments. For the RAO plans, segment travel time for SOC delivery was evaluated for the N = 1, N = 2, and N = 4 SOC designs to characterize the improvement in delivery efficiency as a function of N. Comparable PTV dose homogeneity and coverage were observed between the RAO and the DMS plans. The RAO plans were slightly superior to the DMS plans in sparing critical structures.
On average, the maximum and mean critical organ doses were reduced by 1.94% and 1.44% of the prescription dose. The average number of delivery segments was 12.68 segments per beam for both the RAO and DMS plans. The N = 2 and N = 4 SOC designs were, on average, 1.56 and 1.80 times more efficient than the N = 1 SOC design to deliver. The mean aperture size produced by the RAO plans was 3.9 times larger than that of the DMS plans. The DAO and dose domain optimization approach enabled high quality IMRT plans using a low-complexity collimator setup. The dosimetric quality is comparable or slightly superior to conventional MLC-based IMRT plans using the same number of delivery segments. The SOC IMRT delivery efficiency can be significantly improved by increasing the leaf numbers, but the number is still significantly lower than the number of leaves in a typical MLC.

  2. TU-AB-202-11: Tumor Segmentation by Fusion of Multi-Tracer PET Images Using Copula Based Statistical Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapuyade-Lahorgue, J; Ruan, S; Li, H

    Purpose: Multi-tracer PET imaging is receiving increasing attention in radiotherapy because it provides additional tumor volume information, such as glucose metabolism and oxygenation. However, automatic PET-based tumor segmentation is still a very challenging problem. We propose a statistical fusion approach to jointly segment tumor subareas from images of the two tracers FDG and FMISO. Methods: Non-standardized Gamma distributions are convenient for modeling intensity distributions in PET. As a strong correlation exists between multi-tracer PET images, we propose a new fusion method based on copulas, which are capable of representing the dependency between different tracers. The hidden Markov field (HMF) model is used to represent the spatial relationship between PET image voxels and the statistical dynamics of intensities for each modality. Real PET images of five patients with FDG and FMISO are used to evaluate our method quantitatively and qualitatively. A comparison between individual and multi-tracer segmentations was conducted to show the advantages of the proposed fusion method. Results: The segmentation results show that fusion with a Gaussian copula achieves a high Dice coefficient of 0.84, compared with 0.54 and 0.30 for monomodal segmentation of the FDG and FMISO PET images, respectively. In addition, the high correlation coefficients (0.75 to 0.91) of the Gaussian copula for all five test patients indicate the dependency between tumor regions in the multi-tracer PET images. Conclusion: This study shows that multi-tracer PET imaging can efficiently improve the segmentation of tumor regions where hypoxia and glucose consumption are present at the same time. The introduction of copulas for modeling the dependency between two tracers can simultaneously take into account information from both tracers and deal with the two pathological phenomena.
Future work will consider other families of copulas, such as spherical and Archimedean copulas, and will address partial volume effects by considering the dependency between neighboring voxels.
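
    The Gaussian-copula construction referred to above has a compact recipe: draw correlated normals, map them through the normal CDF to correlated uniforms, then through Gamma inverse CDFs to obtain dependent intensities with Gamma marginals, matching the abstract's intensity model. Shapes, correlation, and sample size below are illustrative, not fitted to patient data.

```python
# Sketch: dependent "FDG/FMISO-like" intensities with Gamma marginals
# coupled by a Gaussian copula. Assumes numpy and scipy.
import numpy as np
from scipy import stats

def gamma_pair_via_gaussian_copula(n, rho, shape1=2.0, shape2=3.0, seed=0):
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    u = stats.norm.cdf(z)                      # correlated uniforms
    x1 = stats.gamma.ppf(u[:, 0], a=shape1)    # Gamma marginal, tracer 1
    x2 = stats.gamma.ppf(u[:, 1], a=shape2)    # Gamma marginal, tracer 2
    return x1, x2

x1, x2 = gamma_pair_via_gaussian_copula(5000, rho=0.8)
```

    The monotone marginal transforms preserve the dependence injected by the copula, which is why a copula can model inter-tracer correlation separately from the per-tracer Gamma intensity models.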

  3. The Contribution of Segmental and Suprasegmental Phonology to Reading Comprehension

    PubMed Central

    Veenendaal, Nathalie J.; Groen, Margriet A.; Verhoeven, Ludo

    2016-01-01

    The aim of the present study was to examine the relation between decoding and segmental and suprasegmental phonology, and their contribution to reading comprehension, in the upper primary grades. Following a longitudinal design, the performance of 99 Dutch primary school children on phonological awareness (segmental phonology) and text reading prosody (suprasegmental phonology) in fourth-grade and fifth-grade, and reading comprehension in sixth-grade were examined. In addition, decoding efficiency as a general assessment of reading was examined. Structural path modeling firstly showed that the relation between decoding efficiency and both measures of phonology from fourth- to fifth grade was unidirectional. Secondly, the relation between decoding in fourth- and fifth-grade and reading comprehension in sixth-grade became indirect when segmental and suprasegmental phonology were added to the model. Both factors independently exerted influence on later reading comprehension. This leads to the conclusion that not only segmental, but also suprasegmental phonology, contributes substantially to children's reading development. PMID:27551159

  4. Joint 3-D vessel segmentation and centerline extraction using oblique Hough forests with steerable filters.

    PubMed

    Schneider, Matthias; Hirsch, Sven; Weber, Bruno; Székely, Gábor; Menze, Bjoern H

    2015-01-01

    We propose a novel framework for joint 3-D vessel segmentation and centerline extraction. The approach is based on multivariate Hough voting and oblique random forests (RFs) that we learn from noisy annotations. It relies on steerable filters for the efficient computation of local image features at different scales and orientations. We validate both the segmentation performance and the centerline accuracy of our approach on synthetic vascular data and four 3-D imaging datasets of the rat visual cortex at 700 nm resolution. First, we evaluate the most important structural components of our approach: (1) orthogonal subspace filtering in comparison to steerable filters, which show, qualitatively, similarities to the eigenspace filters learned from local image patches; and (2) standard RFs against oblique RFs. Second, we compare the overall approach to different state-of-the-art methods for (1) vessel segmentation based on optimally oriented flux (OOF) and the eigenstructure of the Hessian, and (2) centerline extraction based on homotopic skeletonization and geodesic path tracing. Our experiments reveal the benefit of steerable over eigenspace filters as well as the advantage of oblique split directions over univariate orthogonal splits. We further show that the learning-based approach outperforms the different state-of-the-art methods and proves highly accurate and robust with regard to both vessel segmentation and centerline extraction in spite of the high level of label noise in the training data. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Scope and Limitations of Fmoc Chemistry SPPS-Based Approaches to the Total Synthesis of Insulin Lispro via Ester Insulin.

    PubMed

    Dhayalan, Balamurugan; Mandal, Kalyaneswar; Rege, Nischay; Weiss, Michael A; Eitel, Simon H; Meier, Thomas; Schoenleber, Ralph O; Kent, Stephen B H

    2017-01-31

    We have systematically explored three approaches based on 9-fluorenylmethoxycarbonyl (Fmoc) chemistry solid phase peptide synthesis (SPPS) for the total chemical synthesis of the key depsipeptide intermediate for the efficient total chemical synthesis of insulin. The approaches used were: stepwise Fmoc chemistry SPPS; the "hybrid method", in which maximally protected peptide segments made by Fmoc chemistry SPPS are condensed in solution; and, native chemical ligation using peptide-thioester segments generated by Fmoc chemistry SPPS. A key building block in all three approaches was a Glu[O-β-(Thr)] ester-linked dipeptide equipped with a set of orthogonal protecting groups compatible with Fmoc chemistry SPPS. The most effective method for the preparation of the 51 residue ester-linked polypeptide chain of ester insulin was the use of unprotected peptide-thioester segments, prepared from peptide-hydrazides synthesized by Fmoc chemistry SPPS, and condensed by native chemical ligation. High-resolution X-ray crystallography confirmed the disulfide pairings and three-dimensional structure of synthetic insulin lispro prepared from ester insulin lispro by this route. Further optimization of these pilot studies could yield an efficient total chemical synthesis of insulin lispro (Humalog) based on peptide synthesis by Fmoc chemistry SPPS. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Unsupervised tattoo segmentation combining bottom-up and top-down cues

    NASA Astrophysics Data System (ADS)

    Allen, Josef D.; Zhao, Nan; Yuan, Jiangbo; Liu, Xiuwen

    2011-06-01

    Tattoo segmentation is challenging due to the complexity and large variance of tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from the remaining skin via a top-down prior in the image itself. Tattoo segmentation with an unknown number of clusters is thus transformed into a figure-ground segmentation. We have applied our segmentation algorithm to a tattoo dataset, and the results show that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.

  7. Brain tumor image segmentation using kernel dictionary learning.

    PubMed

    Jeon Lee; Seung-Jun Kim; Rong Chen; Herskovits, Edward H

    2015-08-01

    Automated brain tumor image segmentation with high accuracy and reproducibility holds great potential for enhancing current clinical practice. Dictionary learning (DL) techniques have recently been applied successfully to various image processing tasks. In this work, kernel extensions of the DL approach are adopted. Both reconstructive and discriminative versions of the kernel DL technique are considered, which can efficiently incorporate multi-modal nonlinear feature mappings based on the kernel trick. Our novel discriminative kernel DL formulation allows joint learning of a task-driven kernel-based dictionary and a linear classifier using a K-SVD-type algorithm. The proposed approaches were tested using real brain magnetic resonance (MR) images of patients with high-grade glioma. The obtained preliminary performances are competitive with the state of the art. The discriminative kernel DL approach is seen to reduce the computational burden without much sacrifice in performance.

  8. Automatic segmentation of 4D cardiac MR images for extraction of ventricular chambers using a spatio-temporal approach

    NASA Astrophysics Data System (ADS)

    Atehortúa, Angélica; Zuluaga, Maria A.; Ourselin, Sébastien; Giraldo, Diana; Romero, Eduardo

    2016-03-01

    Accurate ventricular function quantification is important to support the evaluation, diagnosis and prognosis of several cardiac pathologies. However, expert heart delineation, specifically for the right ventricle, is a time-consuming task with high inter- and intra-observer variability. A fully automatic 3D+time heart segmentation framework is herein proposed for short-axis cardiac MRI sequences. This approach estimates the heart using exclusively information from the sequence itself, without tuning any parameters. The proposed framework uses a coarse-to-fine approach, which starts by localizing the heart via spatio-temporal analysis, followed by a segmentation of the basal heart that is then propagated to the apex using a non-rigid registration strategy. The obtained volume is then refined by estimating the ventricular muscle through a local search for a prior endocardium-pericardium intensity pattern. The proposed framework was applied to 48 patient datasets supplied by the organizers of the MICCAI 2012 Right Ventricle segmentation challenge. Results show the robustness, efficiency and competitiveness of the proposed method in terms of both accuracy and computational load.

  9. Efficient free-form surface representation with application in orthodontics

    NASA Astrophysics Data System (ADS)

    Yamany, Sameh M.; El-Bialy, Ahmed M.

    1999-03-01

    Orthodontics is the branch of dentistry concerned with the study of growth of the craniofacial complex. The detection and correction of malocclusion and other dental abnormalities is one of the most important and critical phases of orthodontic diagnosis. This paper introduces a system that can assist in automatic orthodontic diagnosis. The system can be used to classify skeletal and dental malocclusion from a limited number of measurements. It is not intended to deal with every possible case, but is aimed at the cases most likely to be encountered in epidemiological studies. Prior to the measurement of the orthodontic parameters, the positions of the teeth in the jaw model must be detected. A new free-form surface representation is adopted for the efficient and accurate segmentation and separation of teeth from a scanned jaw model. The new representation encodes the curvature and surface normal information into a 2D image. Image segmentation tools are then used to extract structures of high/low curvature. By iteratively removing these structures, individual tooth surfaces are obtained.

  10. Hippocampal Structure and Human Cognition: Key Role of Spatial Processing and Evidence Supporting the Efficiency Hypothesis in Females

    ERIC Educational Resources Information Center

    Colom, Roberto; Stein, Jason L.; Rajagopalan, Priya; Martinez, Kenia; Hermel, David; Wang, Yalin; Alvarez-Linera, Juan; Burgaleta, Miguel; Quiroga, Ma. Angeles; Shih, Pei Chun; Thompson, Paul M.

    2013-01-01

    Here we apply a method for automated segmentation of the hippocampus in 3D high-resolution structural brain MRI scans. One hundred and four healthy young adults completed twenty-one tasks measuring abstract, verbal, and spatial intelligence, along with working memory, executive control, attention, and processing speed. After permutation tests…

  11. SIP Shear Walls: Cyclic Performance of High-Aspect-Ratio Segments and Perforated Walls

    Treesearch

    Vladimir Kochkin; Douglas R. Rammer; Kevin Kauffman; Thomas Wiliamson; Robert J. Ross

    2015-01-01

    Increasing stringency of energy codes and the growing market demand for more energy-efficient buildings give structural insulated panel (SIP) construction an opportunity to increase its use in commercial and residential buildings. However, shear wall aspect ratio limitations and lack of knowledge on how to design SIPs with window and door openings are barriers to the...

  12. Live-Cell Imaging of Phagosome Motility in Primary Mouse RPE Cells.

    PubMed

    Hazim, Roni; Jiang, Mei; Esteve-Rudd, Julian; Diemer, Tanja; Lopes, Vanda S; Williams, David S

    2016-01-01

    The retinal pigment epithelium (RPE) is a post-mitotic epithelial monolayer situated between the light-sensitive photoreceptors and the choriocapillaris. Given its vital functions for healthy vision, the RPE is a primary target for insults that result in blinding diseases, including age-related macular degeneration (AMD). One such function is the phagocytosis and digestion of shed photoreceptor outer segments. In the present study, we examined the process of trafficking of outer segment disk membranes in live cultures of primary mouse RPE, using high speed spinning disk confocal microscopy. This approach has enabled us to track phagosomes, and determine parameters of their motility, which are important for their efficient degradation.

  13. Joint inversion for transponder localization and sound-speed profile temporal variation in high-precision acoustic surveys.

    PubMed

    Li, Zhao; Dosso, Stan E; Sun, Dajun

    2016-07-01

    This letter develops a Bayesian inversion for localizing underwater acoustic transponders using a surface ship which compensates for sound-speed profile (SSP) temporal variation during the survey. The method is based on dividing observed acoustic travel-time data into time segments and including depth-independent SSP variations for each segment as additional unknown parameters to approximate the SSP temporal variation. SSP variations are estimated jointly with transponder locations, rather than calculated separately as in existing two-step inversions. Simulation and sea-trial results show this localization/SSP joint inversion performs better than two-step inversion in terms of localization accuracy, agreement with measured SSP variations, and computational efficiency.

  14. Global Contrast Based Salient Region Detection.

    PubMed

    Cheng, Ming-Ming; Mitra, Niloy J; Huang, Xiaolei; Torr, Philip H S; Hu, Shi-Min

    2015-03-01

    Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatially weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high-quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Even for such noisy Internet images, where salient regions are ambiguous, our saliency-guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.
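
The global-contrast idea in this abstract can be illustrated with its histogram-based variant: the saliency of a quantized color is the sum of its distances to all other colors, weighted by how often those colors occur, so a rare color that differs strongly from the dominant background scores high. The sketch below is a simplified stand-in (no spatial weighting, hand-quantized colors), not the published algorithm.

```python
import numpy as np

def histogram_contrast_saliency(labels, colors):
    """Histogram-based global contrast: the saliency of each quantized
    color is its frequency-weighted distance to every other color."""
    counts = np.bincount(labels.ravel(), minlength=len(colors))
    freq = counts / counts.sum()
    # Pairwise color distances, weighted by frequency of the "other" color
    dist = np.linalg.norm(colors[:, None, :] - colors[None, :, :], axis=2)
    color_saliency = (dist * freq[None, :]).sum(axis=1)
    return color_saliency[labels]  # per-pixel saliency map

# Tiny example: a small "red object" on a large gray background
colors = np.array([[128.0, 128.0, 128.0], [255.0, 0.0, 0.0]])  # 0=gray, 1=red
labels = np.zeros((8, 8), dtype=int)
labels[3:5, 3:5] = 1
sal = histogram_contrast_saliency(labels, colors)
print(sal[3, 3] > sal[0, 0])  # True: the rare red patch is more salient
```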

  15. Image Segmentation, Registration, Compression, and Matching

    NASA Technical Reports Server (NTRS)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching that exploits a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which support first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken to achieve automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. 
Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of model sizes by efficiently coding the geometry and connectivity/topology components of the generated models. The highly efficient triangular mesh compression compacts the connectivity information at a rate of 1.5-4 bits per vertex (on average for triangle meshes), while reducing the 3D geometry by 40-50 percent. Finally, taking into consideration the characteristics of 3D terrain data, and using the innovative regularized binary decomposition mesh modeling, a multistage, pattern-driven modeling and compression technique has been developed to provide an effective framework for compressing digital elevation model (DEM) surfaces, high-resolution aerial imagery, and other types of NASA data.
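
The key property behind an affine-invariant parameter space is that the coefficients of an affine combination of feature points are unchanged by any affine transform of the plane. A minimal numerical check of that invariance (the basis points and the affine map below are arbitrary illustrative choices, not values from the toolkit):

```python
import numpy as np

def affine_coordinates(p, basis):
    """Express a 2-D point p as an affine combination of three basis
    points: p = a*b0 + b*b1 + c*b2 with a + b + c = 1. These
    coefficients are invariant under any affine transform, which is the
    property an affine-invariant parameter space exploits."""
    b0, b1, b2 = basis
    m = np.column_stack([b1 - b0, b2 - b0])
    b, c = np.linalg.solve(m, p - b0)
    return np.array([1.0 - b - c, b, c])

basis = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
p = np.array([1.0, 2.0])
coeffs = affine_coordinates(p, basis)

# Apply an arbitrary affine map x -> A x + t to the point AND the basis
A = np.array([[2.0, 1.0], [0.5, 3.0]])
t = np.array([5.0, -7.0])
coeffs2 = affine_coordinates(A @ p + t, basis @ A.T + t)
print(np.allclose(coeffs, coeffs2))  # True: the coefficients are invariant
```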

  16. The hyperion particle-γ detector array

    DOE PAGES

    Hughes, R. O.; Burke, J. T.; Casperson, R. J.; ...

    2017-03-08

    Hyperion is a new high-efficiency charged-particle γ-ray detector array consisting of a segmented silicon telescope for charged-particle detection and up to fourteen high-purity germanium clover detectors for the detection of coincident γ rays. The array will be used in nuclear physics measurements and Stockpile Stewardship studies and replaces the STARLiTeR array. This article discusses the features of the array and presents data collected with it in the commissioning experiment.

  17. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    PubMed

    Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracies and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method with respect to training set size, differences in head coil usage, and amount of brain atrophy. High-resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to a training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap by only <0.01. 
These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.
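
The overlap measure quoted throughout this abstract is the Dice coefficient, 2|A∩B|/(|A|+|B|) for two binary masks. A minimal implementation on toy masks (the mask sizes below are illustrative, not the study's data):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).
    This is the spatial-overlap score reported in the abstract
    (e.g. 0.956 for Freesurfer vs 0.978 after correction)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks agree perfectly, by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True     # automated mask: 36 pixels
manual = np.zeros((10, 10), dtype=bool)
manual[3:8, 2:8] = True   # manual mask: 30 pixels, fully inside the first
print(dice(auto, manual))  # 2*30/(36+30) = 0.909...
```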

  18. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem

    PubMed Central

    Wang, Jun Yi; Ngo, Michael M.; Hessl, David; Hagerman, Randi J.; Rivera, Susan M.

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracies and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method with respect to training set size, differences in head coil usage, and amount of brain atrophy. High-resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer’s segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to a training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap by only <0.01. 
These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well. PMID:27213683

  19. Semiautomated segmentation of head and neck cancers in 18F-FDG PET scans: A just-enough-interaction approach.

    PubMed

    Beichel, Reinhard R; Van Tol, Markus; Ulrich, Ethan J; Bauer, Christian; Chang, Tangel; Plichta, Kristin A; Smith, Brian J; Sunderland, John J; Graham, Michael M; Sonka, Milan; Buatti, John M

    2016-06-01

    The purpose of this work was to develop, validate, and compare a highly computer-aided method for the segmentation of hot lesions in head and neck 18F-FDG PET scans. A semiautomated segmentation method was developed, which transforms the segmentation problem into a graph-based optimization problem. For this purpose, a graph structure around a user-provided approximate lesion centerpoint is constructed and a suitable cost function is derived based on local image statistics. To handle frequently occurring situations that are ambiguous (e.g., lesions adjacent to each other versus lesion with inhomogeneous uptake), several segmentation modes are introduced that adapt the behavior of the base algorithm accordingly. In addition, the authors present approaches for the efficient interactive local and global refinement of initial segmentations that are based on the "just-enough-interaction" principle. For method validation, 60 PET/CT scans from 59 different subjects with 230 head and neck lesions were utilized. All patients had squamous cell carcinoma of the head and neck. A detailed comparison with the current clinically relevant standard manual segmentation approach was performed based on 2760 segmentations produced by three experts. Segmentation accuracy measured by the Dice coefficient of the proposed semiautomated and standard manual segmentation approach was 0.766 and 0.764, respectively. This difference was not statistically significant (p = 0.2145). However, the intra- and interoperator standard deviations were significantly lower for the semiautomated method. In addition, the proposed method was found to be significantly faster and resulted in significantly higher intra- and interoperator segmentation agreement when compared to the manual segmentation approach. Lack of consistency in tumor definition is a critical barrier for radiation treatment targeting as well as for response assessment in clinical trials and in clinical oncology decision-making. 
The properties of the authors' approach make it well suited for applications in image-guided radiation oncology, response assessment, or treatment outcome prediction.

  20. Semiautomated segmentation of head and neck cancers in 18F-FDG PET scans: A just-enough-interaction approach

    PubMed Central

    Beichel, Reinhard R.; Van Tol, Markus; Ulrich, Ethan J.; Bauer, Christian; Chang, Tangel; Plichta, Kristin A.; Smith, Brian J.; Sunderland, John J.; Graham, Michael M.; Sonka, Milan; Buatti, John M.

    2016-01-01

    Purpose: The purpose of this work was to develop, validate, and compare a highly computer-aided method for the segmentation of hot lesions in head and neck 18F-FDG PET scans. Methods: A semiautomated segmentation method was developed, which transforms the segmentation problem into a graph-based optimization problem. For this purpose, a graph structure around a user-provided approximate lesion centerpoint is constructed and a suitable cost function is derived based on local image statistics. To handle frequently occurring situations that are ambiguous (e.g., lesions adjacent to each other versus lesion with inhomogeneous uptake), several segmentation modes are introduced that adapt the behavior of the base algorithm accordingly. In addition, the authors present approaches for the efficient interactive local and global refinement of initial segmentations that are based on the “just-enough-interaction” principle. For method validation, 60 PET/CT scans from 59 different subjects with 230 head and neck lesions were utilized. All patients had squamous cell carcinoma of the head and neck. A detailed comparison with the current clinically relevant standard manual segmentation approach was performed based on 2760 segmentations produced by three experts. Results: Segmentation accuracy measured by the Dice coefficient of the proposed semiautomated and standard manual segmentation approach was 0.766 and 0.764, respectively. This difference was not statistically significant (p = 0.2145). However, the intra- and interoperator standard deviations were significantly lower for the semiautomated method. In addition, the proposed method was found to be significantly faster and resulted in significantly higher intra- and interoperator segmentation agreement when compared to the manual segmentation approach. 
Conclusions: Lack of consistency in tumor definition is a critical barrier for radiation treatment targeting as well as for response assessment in clinical trials and in clinical oncology decision-making. The properties of the authors' approach make it well suited for applications in image-guided radiation oncology, response assessment, or treatment outcome prediction. PMID:27277044

  1. Semiautomated segmentation of head and neck cancers in 18F-FDG PET scans: A just-enough-interaction approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beichel, Reinhard R., E-mail: reinhard-beichel@uiowa.edu; Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242; Department of Internal Medicine, University of Iowa, Iowa City, Iowa 52242

    Purpose: The purpose of this work was to develop, validate, and compare a highly computer-aided method for the segmentation of hot lesions in head and neck 18F-FDG PET scans. Methods: A semiautomated segmentation method was developed, which transforms the segmentation problem into a graph-based optimization problem. For this purpose, a graph structure around a user-provided approximate lesion centerpoint is constructed and a suitable cost function is derived based on local image statistics. To handle frequently occurring situations that are ambiguous (e.g., lesions adjacent to each other versus lesion with inhomogeneous uptake), several segmentation modes are introduced that adapt the behavior of the base algorithm accordingly. In addition, the authors present approaches for the efficient interactive local and global refinement of initial segmentations that are based on the “just-enough-interaction” principle. For method validation, 60 PET/CT scans from 59 different subjects with 230 head and neck lesions were utilized. All patients had squamous cell carcinoma of the head and neck. A detailed comparison with the current clinically relevant standard manual segmentation approach was performed based on 2760 segmentations produced by three experts. Results: Segmentation accuracy measured by the Dice coefficient of the proposed semiautomated and standard manual segmentation approach was 0.766 and 0.764, respectively. This difference was not statistically significant (p = 0.2145). However, the intra- and interoperator standard deviations were significantly lower for the semiautomated method. In addition, the proposed method was found to be significantly faster and resulted in significantly higher intra- and interoperator segmentation agreement when compared to the manual segmentation approach. 
Conclusions: Lack of consistency in tumor definition is a critical barrier for radiation treatment targeting as well as for response assessment in clinical trials and in clinical oncology decision-making. The properties of the authors' approach make it well suited for applications in image-guided radiation oncology, response assessment, or treatment outcome prediction.

  2. Risk segmentation: goal or problem?

    PubMed

    Feldman, R; Dowd, B

    2000-07-01

    This paper traces the evolution of economists' views about risk segmentation in health insurance markets. Originally seen as a desirable goal, risk segmentation has come to be viewed as leading to abnormal profits, wasted resources, and inefficient limitations on coverage and services. We suggest that risk segmentation may be efficient if one takes an ex post view (i.e., after consumers' risks are known). From this perspective, managed care may be a much better method for achieving risk segmentation than limitations on coverage. The most serious objection to risk segmentation is the ex ante concern that it undermines long-term insurance contracts that would protect consumers against changes in lifetime risk.

  3. Iris segmentation using an edge detector based on fuzzy sets theory and cellular learning automata.

    PubMed

    Ghanizadeh, Afshin; Abarghouei, Amir Atapour; Sinaie, Saman; Saad, Puteh; Shamsuddin, Siti Mariyam

    2011-07-01

    Iris-based biometric systems identify individuals based on the characteristics of their irises, which are known to remain unique over long periods. An iris recognition system includes four phases, the most important of which is preprocessing, in which iris segmentation is performed. The accuracy of an iris biometric system depends critically on the segmentation step. In this paper, an iris segmentation system using edge detection techniques and Hough transforms is presented. The newly proposed edge detection system enhances the performance of the segmentation so that it performs much more efficiently than conventional iris segmentation methods.
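
The Hough transform step mentioned above can be sketched for circles: each edge point votes for every center/radius pair it could lie on, and the accumulator peak is the detected boundary. The toy below (synthetic edge points, coarse angular sampling) only illustrates the voting idea; it is not the paper's fuzzy edge detector or its parameter choices.

```python
import numpy as np

def hough_circles(edge_points, shape, radii):
    """Toy circle Hough transform: every edge point votes for all centers
    that would place it on a circle of each candidate radius, and the
    accumulator peak gives the detected circle."""
    h, w = shape
    acc = np.zeros((len(radii), h, w))
    thetas = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    for y, x in edge_points:
        for ri, r in enumerate(radii):
            cy = np.rint(y - r * np.sin(thetas)).astype(int)
            cx = np.rint(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            np.add.at(acc, (ri, cy[ok], cx[ok]), 1)
    ri, cy, cx = np.unravel_index(acc.argmax(), acc.shape)
    return radii[ri], (int(cy), int(cx))

# Synthetic "iris boundary": 40 points on a circle of radius 10 at (20, 20)
angles = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
pts = [(20 + 10 * np.sin(a), 20 + 10 * np.cos(a)) for a in angles]
r, center = hough_circles(pts, (40, 40), radii=[8, 10, 12])
print(r, center)  # 10 (20, 20)
```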

  4. Automatic Organ Segmentation for CT Scans Based on Super-Pixel and Convolutional Neural Networks.

    PubMed

    Liu, Xiaoming; Guo, Shuxu; Yang, Bingtao; Ma, Shuzhi; Zhang, Huimao; Li, Jing; Sun, Changjian; Jin, Lanyi; Li, Xueyan; Yang, Qi; Fu, Yu

    2018-04-20

    Accurate segmentation of specific organs from computed tomography (CT) scans is a basic and crucial task for accurate diagnosis and treatment. To avoid time-consuming manual optimization and to help physicians distinguish diseases, an automatic organ segmentation framework is presented. The framework uses convolutional neural networks (CNN) to classify pixels. To reduce the redundant inputs, simple linear iterative clustering (SLIC) super-pixels and a support vector machine (SVM) classifier are introduced. To establish a precise organ boundary at the single-pixel level, the pixels are classified step by step. First, SLIC is used to cut the image into grids and extract their digital signatures. Next, each signature is classified by the SVM, and rough edges are acquired. Finally, a precise boundary is obtained by the CNN, which operates on patches around each pixel. The framework is applied to abdominal CT scans of livers and high-resolution computed tomography (HRCT) scans of lungs. The experimental CT scans are derived from two public datasets (Sliver 07 and a Chinese local dataset). Experimental results show that the proposed method can precisely and efficiently detect the organs. The method consumes 38 s per slice for liver segmentation. The Dice coefficient of the liver segmentation results reaches 97.43%. For lung segmentation, the Dice coefficient is 97.93%. These findings demonstrate that the proposed framework is a favorable method for lung segmentation of HRCT scans.
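
The step-by-step classification described above can be caricatured in a few lines: label cheap super-pixel-like blocks first, and spend per-pixel effort only where the block decision is ambiguous. The sketch below substitutes mean-intensity thresholding for the SLIC+SVM stage and a per-pixel threshold for the CNN stage, so it illustrates only the coarse-to-fine control flow, not the published pipeline.

```python
import numpy as np

def coarse_to_fine_segment(img, grid=4, thresh=0.5):
    """Coarse stage: label whole grid blocks by mean intensity (stand-in
    for SLIC + SVM). Fine stage: re-examine individual pixels only in
    blocks whose mean is near the decision boundary (stand-in for the
    per-pixel patch CNN)."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h, grid):
        for x in range(0, w, grid):
            block = img[y:y + grid, x:x + grid]
            m = block.mean()
            if abs(m - thresh) > 0.2:          # confidently fore/background
                mask[y:y + grid, x:x + grid] = m > thresh
            else:                              # ambiguous: refine per pixel
                mask[y:y + grid, x:x + grid] = block > thresh
    return mask

# Bright disc (the "organ") on a dark background
yy, xx = np.mgrid[0:32, 0:32]
img = ((yy - 16) ** 2 + (xx - 16) ** 2 < 100).astype(float)
mask = coarse_to_fine_segment(img)
print((mask == (img > 0.5)).mean())  # fraction of correctly labelled pixels
```

Only the ambiguous blocks pay the per-pixel cost, which is the same efficiency argument the paper makes for running the CNN on boundary patches rather than the whole scan.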

  5. Early detection of lung cancer from CT images: nodule segmentation and classification using deep learning

    NASA Astrophysics Data System (ADS)

    Sharma, Manu; Bhatt, Jignesh S.; Joshi, Manjunath V.

    2018-04-01

    Lung cancer is one of the leading causes of cancer deaths worldwide. It has a low survival rate, mainly due to late diagnosis. With hardware advancements in computed tomography (CT) technology, it is now possible to capture high-resolution images of the lung region. However, these must be augmented by efficient algorithms to detect lung cancer at earlier stages from the acquired CT images. To this end, we propose a two-step algorithm for early detection of lung cancer. Given the CT image, we first extract a patch from the center location of the nodule and segment the lung nodule region. We propose to use the Otsu method followed by morphological operations for the segmentation. This step enables accurate segmentation due to the use of a data-driven threshold. Unlike other methods, we perform the segmentation without using the complete contour information of the nodule. In the second step, a deep convolutional neural network (CNN) is used for better classification (malignant or benign) of the nodule present in the segmented patch. Accurate segmentation of even a tiny nodule followed by classification with the deep CNN enables early detection of lung cancer. Experiments have been conducted using 6306 CT images of the LIDC-IDRI database. We achieved a test accuracy of 84.13%, with a sensitivity and specificity of 91.69% and 73.16%, respectively, clearly outperforming state-of-the-art algorithms.
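
    The data-driven threshold the authors rely on is the classic Otsu criterion: pick the cut that maximizes the between-class variance of the intensity histogram. A minimal pure-Python sketch (toy intensities, not the authors' code):

```python
# Otsu thresholding: exhaustive search over the histogram for the cut
# that maximizes between-class variance.
def otsu_threshold(values, levels=256):
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg = sum_bg = 0
    for t in range(levels):
        w_bg += hist[t]                  # pixels at or below t -> background
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg
        m_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated populations: background ~10, nodule ~200.
pixels = [10] * 50 + [12] * 30 + [200] * 15 + [205] * 5
t = otsu_threshold(pixels)
print(t)  # cut falls at the upper edge of the background mode
```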

  6. Parallel fuzzy connected image segmentation on GPU

    PubMed Central

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.

    2011-01-01

    Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implemented on NVIDIA's Compute Unified Device Architecture (CUDA) platform for segmenting medical image data sets. Methods: In the FC algorithm, there are two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on the GPU. A dramatic improvement in speed for both tasks is achieved as a result. Results: Experiments based on three data sets of small, medium, and large size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 24.4x, 18.1x, and 10.3x, respectively, for the three data sets on the NVIDIA Tesla C1060 over the CPU implementation, and takes 0.25, 0.72, and 15.04 s, respectively, for the three data sets. Conclusions: The authors developed a parallel version of the widely used fuzzy connected image segmentation method on NVIDIA GPUs, which are far more cost- and speed-effective than either clusters of workstations or multiprocessing systems. A near-interactive segmentation speed has been achieved, even for the large data set. PMID:21859037
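
    The two computational tasks named above can be sketched serially: fuzzy affinity between neighbours from intensity similarity, and fuzzy connectedness as the strength of the best path from a seed, where a path's strength is the minimum affinity along it. The priority-queue version below is a tiny 1-D serial stand-in for the CUDA kernels; the affinity function is illustrative.

```python
# Fuzzy connectedness on a 1-D "image" via best-first propagation from a seed.
import heapq

def affinity(a, b):
    return 1.0 / (1.0 + abs(a - b))  # high when intensities are similar

def fuzzy_connectedness(img, seed):
    n = len(img)
    conn = [0.0] * n
    conn[seed] = 1.0
    pq = [(-1.0, seed)]               # max-heap via negated strengths
    while pq:
        c, i = heapq.heappop(pq)
        c = -c
        if c < conn[i]:
            continue                  # stale entry
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                # path strength = min affinity along the path
                cand = min(c, affinity(img[i], img[j]))
                if cand > conn[j]:
                    conn[j] = cand
                    heapq.heappush(pq, (-cand, j))
    return conn

img = [100, 101, 100, 30, 100]        # a dark gap splits the row
conn = fuzzy_connectedness(img, seed=0)
print([round(c, 3) for c in conn])    # connectivity collapses past the gap
```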

  7. Parallel fuzzy connected image segmentation on GPU.

    PubMed

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K; Miller, Robert W

    2011-07-01

    Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implemented on NVIDIA's Compute Unified Device Architecture (CUDA) platform for segmenting medical image data sets. In the FC algorithm, there are two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on the GPU. A dramatic improvement in speed for both tasks is achieved as a result. Experiments based on three data sets of small, medium, and large size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 24.4x, 18.1x, and 10.3x, respectively, for the three data sets on the NVIDIA Tesla C1060 over the CPU implementation, and takes 0.25, 0.72, and 15.04 s, respectively, for the three data sets. The authors developed a parallel version of the widely used fuzzy connected image segmentation method on NVIDIA GPUs, which are far more cost- and speed-effective than either clusters of workstations or multiprocessing systems. A near-interactive segmentation speed has been achieved, even for the large data set.

  8. Visual search performance in the autism spectrum II: the radial frequency search task with additional segmentation cues.

    PubMed

    Almeida, Renita A; Dickinson, J Edwin; Maybery, Murray T; Badcock, Johanna C; Badcock, David R

    2010-12-01

    The Embedded Figures Test (EFT) requires detecting a shape within a complex background and individuals with autism or high Autism-spectrum Quotient (AQ) scores are faster and more accurate on this task than controls. This research aimed to uncover the visual processes producing this difference. Previously we developed a search task using radial frequency (RF) patterns with controllable amounts of target/distracter overlap on which high AQ participants showed more efficient search than low AQ observers. The current study extended the design of this search task by adding two lines which traverse the display on random paths sometimes intersecting target/distracters, other times passing between them. As with the EFT, these lines segment and group the display in ways that are task irrelevant. We tested two new groups of observers and found that while RF search was slowed by the addition of segmenting lines for both groups, the high AQ group retained a consistent search advantage (reflected in a shallower gradient for reaction time as a function of set size) over the low AQ group. Further, the high AQ group were significantly faster and more accurate on the EFT compared to the low AQ group. That is, the results from the present RF search task demonstrate that segmentation and grouping created by intersecting lines does not further differentiate the groups and is therefore unlikely to be a critical factor underlying the EFT performance difference. However, once again, we found that superior EFT performance was associated with shallower gradients on the RF search task. Copyright © 2010 Elsevier Ltd. All rights reserved.

  9. Development of High Resolution Mirrors and Cd-Zn-Te Detectors for Hard X-ray Astronomy

    NASA Technical Reports Server (NTRS)

    Ramsey, Brian D.; Speegle, Chet O.; Gaskin, Jessica; Sharma, Dharma; Engelhaupt, Darell; Six, N. Frank (Technical Monitor)

    2002-01-01

    We describe the fabrication and implementation of a high-resolution conical, grazing-incidence, hard X-ray (20-70 keV) telescope. When flown aboard stratospheric balloons, these mirrors are used to image cosmic sources such as supernovae, neutron stars, and quasars. The fabrication process involves generating super-polished mandrels, electroforming the mirror shells, and mirror testing. The cylindrical mandrels consist of two conical segments, each approximately 305 mm long. These mandrels are first precision-ground to within approx. 1.0 micron straightness along each conical segment and then lapped and polished to less than 0.5 micron straightness. Each mandrel segment is then super-polished to an average surface roughness of approx. 3.25 angstrom rms. Through mirror shell replication, this combination of good figure and low surface roughness has enabled us to achieve 15 arcsec resolution, confirmed by X-ray measurements in the Marshall Space Flight Center 102 meter test facility. Imaging the focused X-rays requires a focal plane detector with appropriate spatial resolution; for 15 arcsec optics of 6 meter focal length, this resolution must be around 200 microns. In addition, the detector must have high efficiency, relatively high energy resolution, and low background. We are currently developing Cadmium-Zinc-Telluride fine-pixel detectors for this purpose. The detectors under study consist of a 16x16 pixel array with a pixel pitch of 300 microns and are 1 mm and 2 mm thick. At 60 keV, the measured energy resolution is around 2%.
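
    The quoted detector-resolution requirement follows from the small-angle plate scale s = f·θ. A quick check (the two-pixels-per-resolution-element sampling step is our reading, not stated by the authors):

```python
# Focal-plane scale for 15 arcsec optics at 6 m focal length.
import math

f = 6.0                                   # focal length in metres
theta = 15 / 3600 * math.pi / 180         # 15 arcsec in radians
spot = f * theta                          # linear size at the focal plane
print(round(spot * 1e6), "microns")       # ~436 microns across the blur
pitch = spot / 2                          # two-pixel (Nyquist-style) sampling
print(round(pitch * 1e6), "microns")      # ~218 microns, i.e. "around 200"
```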

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milostan, Catharina; Levin, Todd; Muehleisen, Ralph T.

    Many electric utilities operate energy efficiency incentive programs that encourage increased dissemination and use of energy-efficient (EE) products in their service territories. The programs can be segmented into three broad categories—downstream incentive programs target product end users, midstream programs target product distributors, and upstream programs target product manufacturers. Traditional downstream programs have had difficulty engaging Small Business/Small Portfolio (SBSP) audiences, and an opportunity exists to expand Commercial Midstream Incentive Programs (CMIPs) to reach this market segment instead.

  11. The E3 combustors: Status and challenges. [energy efficient turbofan engines

    NASA Technical Reports Server (NTRS)

    Sokolowski, D. E.; Rohde, J. E.

    1981-01-01

    The design, fabrication, and initial testing of energy efficient engine combustors, developed for the next generation of turbofan engines for commercial aircraft, are described. The combustor designs utilize an annular configuration with two-zone combustion for low emissions, advanced liners for improved durability, and short, curved-wall dump prediffusers for compactness. Advanced cooling techniques and segmented construction characterize the advanced liners. Liner segments are made from castable, turbine-type materials.

  12. Segmentation of brain structures in presence of a space-occupying lesion.

    PubMed

    Pollo, Claudio; Cuadra, Meritxell Bach; Cuisenaire, Olivier; Villemure, Jean-Guy; Thiran, Jean-Philippe

    2005-02-15

    Brain deformations induced by space-occupying lesions may result in unpredictable position and shape of functionally important brain structures. The aim of this study is to propose a method for segmentation of brain structures by deformation of a segmented brain atlas in the presence of a space-occupying lesion. Our approach is based on an a priori model of lesion growth (MLG) that assumes radial expansion from a seeding point and involves three steps: first, an affine registration bringing the atlas and the patient into global correspondence; then, the seeding of a synthetic tumor into the brain atlas providing a template for the lesion; finally, the deformation of the seeded atlas, combining a method derived from optical flow principles and a model of lesion growth. The method was applied to two meningiomas inducing a pure displacement of the underlying brain structures, and segmentation accuracy of ventricles and basal ganglia was assessed. Results show that the segmented structures were consistent with the patient's anatomy and that the deformation accuracy of surrounding brain structures was highly dependent on the accurate placement of the tumor seeding point. Further improvements of the method will optimize the segmentation accuracy. Visualization of brain structures provides useful information for therapeutic consideration of space-occupying lesions, including surgical, radiosurgical, and radiotherapeutic planning, in order to increase treatment efficiency and prevent neurological damage.

  13. Segmentation and Quantitative Analysis of Apoptosis of Chinese Hamster Ovary Cells from Fluorescence Microscopy Images.

    PubMed

    Du, Yuncheng; Budman, Hector M; Duever, Thomas A

    2017-06-01

    Accurate and fast quantitative analysis of living cells from fluorescence microscopy images is useful for evaluating experimental outcomes and cell culture protocols. An algorithm is developed in this work to automatically segment and distinguish apoptotic cells from normal cells. The algorithm involves three steps consisting of two segmentation steps and a classification step. The segmentation steps are: (i) a coarse segmentation, combining a range filter with a marching square method, used as a prefiltering step to provide the approximate positions of cells, stored in a two-dimensional matrix, together with a count of the cells in a given image; and (ii) a fine segmentation step using the Active Contours Without Edges method, applied to the boundaries of cells identified in the coarse segmentation step. Although this basic two-step approach provides accurate edges when the cells in a given image are sparsely distributed, the occurrence of clusters of cells in high cell density samples requires further processing. Hence, a novel algorithm for clusters is developed to identify the edges of cells within clusters and to approximate their morphological features. Based on the segmentation results, a support vector machine classifier that uses three morphological features, the mean of pixel intensities in the cellular regions, the variance of pixel intensities in the vicinity of cell boundaries, and the lengths of the boundaries, is developed for distinguishing apoptotic cells from normal cells. The algorithm is shown to be efficient in terms of computational time, quantitative analysis, and differentiation accuracy, as compared with the use of the active contours method without the proposed preliminary coarse segmentation step.
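
    The range filter used in the coarse step replaces each pixel with (max − min) over its neighbourhood, which responds strongly near cell boundaries and stays at zero on flat background. A minimal sketch (window size and image are illustrative):

```python
# Range filter: per-pixel (max - min) over a square neighbourhood.
def range_filter(img, radius=1):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - radius), min(h, y + radius + 1))
                    for nx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = max(vals) - min(vals)  # zero on flat regions
    return out

# Flat background with one bright pixel: response lights up only near it.
img = [[0, 0, 0, 0],
       [0, 0, 9, 0],
       [0, 0, 0, 0]]
resp = range_filter(img)
print(resp)
```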

  14. Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis

    NASA Astrophysics Data System (ADS)

    Che, E.; Olsen, M. J.

    2017-09-01

    Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common post-processing procedure that groups the point cloud into a number of clusters to simplify the data for the subsequent modelling and analysis needed by most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing the normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern utilized during acquisition of TLS data from most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimating the normal at each point, which limits the errors in normal estimation propagating to segmentation. Both an indoor and an outdoor scene are used in experiments to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
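
    The normal-variation idea can be sketched on a tiny gridded scan: estimate a normal per grid point from the two in-grid tangent vectors, then flag points where the angle to a neighbouring normal exceeds a threshold. The "roof" geometry and the 30° threshold below are illustrative, not the paper's parameters.

```python
# Normal-variation edge flagging on a gridded point cloud (pure Python).
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def unit(v):
    m = math.sqrt(sum(x * x for x in v))
    return tuple(x / m for x in v)
def angle(a, b):
    d = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.degrees(math.acos(d))

def normals(grid):
    """Normal at each interior grid point from the two tangent vectors."""
    out = {}
    for i in range(1, len(grid) - 1):
        for j in range(1, len(grid[0]) - 1):
            du = sub(grid[i][j + 1], grid[i][j - 1])
            dv = sub(grid[i + 1][j], grid[i - 1][j])
            out[(i, j)] = unit(cross(du, dv))
    return out

# A "roof": z rises with x until x = 3, then falls -- the crease is an edge.
grid = [[(x, y, x if x <= 3 else 6 - x) for x in range(7)] for y in range(3)]
ns = normals(grid)
edges = {(i, j) for (i, j), n in ns.items()
         if any(abs(k[0] - i) + abs(k[1] - j) == 1 and angle(n, ns[k]) > 30
                for k in ns)}
print(sorted(edges))  # points at and beside the crease
```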

  15. Quaternized adamantane-containing poly(aryl ether ketone) anion exchange membranes for vanadium redox flow battery applications

    NASA Astrophysics Data System (ADS)

    Zhang, Bengui; Zhang, Shouhai; Weng, Zhihuan; Wang, Guosheng; Zhang, Enlei; Yu, Ping; Chen, Xiaomeng; Wang, Xinwei

    2016-09-01

    Quaternized adamantane-containing poly(aryl ether ketone) anion exchange membranes (QADMPEK) are prepared and investigated for vanadium redox flow battery (VRFB) applications. The bulky, rigid, and highly hydrophobic adamantane segment incorporated into the membrane backbone gives QADMPEK membranes low water uptake and swelling ratios, and the as-prepared membranes display significantly lower vanadium-ion permeability than the Nafion117 membrane. As a consequence, the VRFB cell with the QADMPEK-3 membrane shows higher coulombic efficiency (99.4%) and energy efficiency (84.0%) than the Nafion117 membrane (95.2% and 80.5%, respectively) at a current density of 80 mA cm-2. Furthermore, at a much higher current density of 140 mA cm-2, the QADMPEK membrane still exhibits better coulombic and energy efficiency than Nafion117 (coulombic efficiency 99.2% vs 96.5%; energy efficiency 76.0% vs 74.0%). Moreover, QADMPEK membranes show high stability in an in-situ VRFB cycling test and an ex-situ oxidation stability test. These results indicate that QADMPEK membranes are good candidates for VRFB applications.

  16. Quantification and Segmentation of Brain Tissues from MR Images: A Probabilistic Neural Network Approach

    PubMed Central

    Wang, Yue; Adalý, Tülay; Kung, Sun-Yuan; Szabo, Zsolt

    2007-01-01

    This paper presents a probabilistic neural network based technique for unsupervised quantification and segmentation of brain tissues from magnetic resonance images. It is shown that this problem can be solved by distribution learning and relaxation labeling, resulting in an efficient method that may be particularly useful in quantifying and segmenting abnormal brain tissues where the number of tissue types is unknown and the distributions of tissue types heavily overlap. The new technique uses suitable statistical models for both the pixel and context images and formulates the problem in terms of model-histogram fitting and global consistency labeling. The quantification is achieved by probabilistic self-organizing mixtures and the segmentation by a probabilistic constraint relaxation network. The experimental results show the efficient and robust performance of the new algorithm and that it outperforms the conventional classification based approaches. PMID:18172510

  17. A segmentation and point-matching enhanced efficient deformable image registration method for dose accumulation between HDR CT images

    NASA Astrophysics Data System (ADS)

    Zhen, Xin; Chen, Haibin; Yan, Hao; Zhou, Linghong; Mell, Loren K.; Yashar, Catheryn M.; Jiang, Steve; Jia, Xun; Gu, Xuejun; Cervino, Laura

    2015-04-01

    Deformable image registration (DIR) of fractional high-dose-rate (HDR) CT images is challenging due to the presence of applicators in the brachytherapy image. Point-to-point correspondence fails because of the undesired deformation vector fields (DVF) propagated from the applicator region (AR) to the surrounding tissues, which can potentially introduce significant DIR errors in dose mapping. This paper proposes a novel segmentation and point-matching enhanced efficient DIR (named SPEED) scheme to facilitate dose accumulation among HDR treatment fractions. In SPEED, a semi-automatic seed point generation approach is developed to obtain the incremented fore/background point sets that feed the random walks algorithm, which is used to segment and remove the AR, leaving empty AR cavities in the HDR CT images. A feature-based ‘thin-plate-spline robust point matching’ algorithm is then employed to match AR cavity surface points. From the resulting mapping, a DVF defined at each voxel is estimated by B-spline approximation, which serves as the initial DVF for the subsequent Demons-based DIR between the AR-free HDR CT images. The DVF calculated via Demons, combined with the initial one, serves as the final DVF to map doses between HDR fractions. The segmentation and registration accuracy are quantitatively assessed on nine clinical HDR cases from three gynecological cancer patients. The quantitative analysis and visual inspection of the DIR results indicate that SPEED can suppress the impact of the applicator on DIR and accurately register HDR CT images, as well as deform and accumulate interfractional HDR doses.
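
    The final step, combining the initial B-spline DVF with the Demons update, amounts to composing two displacement fields. A 1-D sketch of one common composition rule (field values are illustrative, and the actual SPEED combination may differ in detail):

```python
# Compose two displacement fields: final(x) = d_init(x) + d_update(x + d_init(x)).
def sample(field, x):
    """Linearly interpolate a 1-D displacement field at position x."""
    x = max(0.0, min(len(field) - 1.0, x))
    i = int(x)
    if i == len(field) - 1:
        return field[i]
    t = x - i
    return (1 - t) * field[i] + t * field[i + 1]

def compose(d_init, d_update):
    """Follow the initial field, then sample the update at the displaced spot."""
    return [d_init[i] + sample(d_update, i + d_init[i]) for i in range(len(d_init))]

d_init = [0.0, 0.5, 1.0, 0.5, 0.0]      # initial (point-matching) field
d_update = [0.0, 0.2, 0.4, 0.2, 0.0]    # Demons refinement
final = compose(d_init, d_update)
print([round(v, 2) for v in final])
```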

  18. A segmentation and point-matching enhanced efficient deformable image registration method for dose accumulation between HDR CT images.

    PubMed

    Zhen, Xin; Chen, Haibin; Yan, Hao; Zhou, Linghong; Mell, Loren K; Yashar, Catheryn M; Jiang, Steve; Jia, Xun; Gu, Xuejun; Cervino, Laura

    2015-04-07

    Deformable image registration (DIR) of fractional high-dose-rate (HDR) CT images is challenging due to the presence of applicators in the brachytherapy image. Point-to-point correspondence fails because of the undesired deformation vector fields (DVF) propagated from the applicator region (AR) to the surrounding tissues, which can potentially introduce significant DIR errors in dose mapping. This paper proposes a novel segmentation and point-matching enhanced efficient DIR (named SPEED) scheme to facilitate dose accumulation among HDR treatment fractions. In SPEED, a semi-automatic seed point generation approach is developed to obtain the incremented fore/background point sets that feed the random walks algorithm, which is used to segment and remove the AR, leaving empty AR cavities in the HDR CT images. A feature-based 'thin-plate-spline robust point matching' algorithm is then employed to match AR cavity surface points. From the resulting mapping, a DVF defined at each voxel is estimated by B-spline approximation, which serves as the initial DVF for the subsequent Demons-based DIR between the AR-free HDR CT images. The DVF calculated via Demons, combined with the initial one, serves as the final DVF to map doses between HDR fractions. The segmentation and registration accuracy are quantitatively assessed on nine clinical HDR cases from three gynecological cancer patients. The quantitative analysis and visual inspection of the DIR results indicate that SPEED can suppress the impact of the applicator on DIR and accurately register HDR CT images, as well as deform and accumulate interfractional HDR doses.

  19. [An object-oriented remote sensing image segmentation approach based on edge detection].

    PubMed

    Tan, Yu-Min; Huai, Jian-Zhu; Tang, Zhong-Shi

    2010-06-01

    Satellite sensor technology has enabled better discrimination of various landscape objects. Image segmentation approaches to extracting conceptual objects and patterns have therefore been explored, and a wide variety of such algorithms abound. In order to effectively utilize edge and topological information in high resolution remote sensing imagery, an object-oriented algorithm combining edge detection and region merging is proposed. The SUSAN edge filter is first applied to the panchromatic band of Quickbird imagery with a spatial resolution of 0.61 m to obtain the edge map. Guided by the resulting edge map, a two-phase region-based segmentation method operates on the fusion image from panchromatic and multispectral Quickbird images to obtain the final partition. In the first phase, a quad tree grid consisting of squares with sides parallel to the image left and top borders recursively agglomerates square subsets where a uniformity measure is satisfied, deriving image object primitives. Before the merging of the second phase, the contextual and spatial information (e.g., neighbor relationships, boundary coding) of the resulting squares is retrieved efficiently by means of the quad tree structure. A region merging operation is then performed with those primitives, during which the merging criterion integrates the edge map and region-based features. This approach has been tested on QuickBird images of a site in the Sanxia area and the result is compared with those of ENVI Zoom and Definiens. In addition, a quantitative evaluation of the quality of the segmentation results is presented. Experiment results demonstrate stable convergence and efficiency.
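
    The quad tree stage that produces the square object primitives can be sketched directly: recursively quarter a block until it is uniform. The uniformity test (max − min within a tolerance) and the toy image below are illustrative stand-ins for the paper's measure.

```python
# Quadtree splitting: uniform squares become the object primitives
# that region merging would later operate on.
def quadtree(img, y, x, size, tol, leaves):
    vals = [img[yy][xx] for yy in range(y, y + size) for xx in range(x, x + size)]
    if size == 1 or max(vals) - min(vals) <= tol:
        leaves.append((y, x, size))          # uniform block: keep as a primitive
    else:
        half = size // 2
        for dy in (0, half):                 # quarter and recurse
            for dx in (0, half):
                quadtree(img, y + dy, x + dx, half, tol, leaves)
    return leaves

# 4x4 image: uniform left half, mixed upper-right quadrant.
img = [[10, 10, 10, 90],
       [10, 10, 10, 10],
       [10, 10, 50, 50],
       [10, 10, 50, 50]]
leaves = quadtree(img, 0, 0, 4, tol=5, leaves=[])
print(leaves)  # three 2x2 primitives plus four 1x1 blocks from the mixed quadrant
```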

  20. Elimination of RF inhomogeneity effects in segmentation.

    PubMed

    Agus, Onur; Ozkan, Mehmed; Aydin, Kubilay

    2007-01-01

    Various methods have been proposed for the segmentation and analysis of MR images. However, the efficiency of these techniques is affected by various artifacts that occur in the imaging system, one of the most frequently encountered being intensity variation across an image. Different methods are used to overcome this problem. In this paper we propose a method for the elimination of intensity artifacts in the segmentation of MR images. Inter-imager variations are also minimized, so that the same tissue segmentation is produced for the same patient. A well-known multivariate classification algorithm, maximum likelihood, is employed to illustrate the enhancement in segmentation.
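
    The maximum likelihood classification step, in miniature for two 1-D Gaussian tissue classes: each intensity is assigned to the class with the larger likelihood. The class names and parameters are illustrative, not from the paper.

```python
# Maximum likelihood tissue classification with 1-D Gaussian class models.
import math

def gauss(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

classes = {"gm": (60.0, 8.0), "wm": (100.0, 8.0)}  # (mean, std) per tissue

def classify(x):
    # Pick the class whose Gaussian assigns x the highest likelihood.
    return max(classes, key=lambda c: gauss(x, *classes[c]))

labels = [classify(v) for v in (55, 62, 95, 104)]
print(labels)  # intensities near 60 -> gm, near 100 -> wm
```

    With equal standard deviations the decision boundary sits midway between the class means (here at 80); intensity-bias artifacts shift voxels across that boundary, which is exactly why the correction above matters.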

  1. Heart-Rate Variability During Deep Sleep in World-Class Alpine Skiers: A Time-Efficient Alternative to Morning Supine Measurements.

    PubMed

    Herzig, David; Testorelli, Moreno; Olstad, Daniela Schäfer; Erlacher, Daniel; Achermann, Peter; Eser, Prisca; Wilhelm, Matthias

    2017-05-01

    It is increasingly popular to use heart-rate variability (HRV) to tailor training for athletes. A time-efficient method is HRV assessment during deep sleep. To validate the selection of deep-sleep segments identified by RR intervals with simultaneous electroencephalography (EEG) recordings and to compare HRV parameters of these segments with those of standard morning supine measurements. In 11 world-class alpine skiers, RR intervals were monitored during 10 nights, and simultaneous EEGs were recorded during 2-4 nights. Deep sleep was determined from the HRV signal and verified by delta power from the EEG recordings. Four further segments were chosen for HRV determination, namely, a 4-h segment from midnight to 4 AM and three 5-min segments: 1 just before awakening, 1 after waking in supine position, and 1 in standing after orthostatic challenge. Training load was recorded every day. A total of 80 night and 68 morning measurements of 9 athletes were analyzed. Good correspondence between the phases selected by RR intervals vs those selected by EEG was found. Concerning root-mean-squared difference of successive RR intervals (RMSSD), a marker for parasympathetic activity, the best relationship with the morning supine measurement was found in deep sleep. HRV is a simple tool for approximating deep-sleep phases, and HRV measurement during deep sleep could provide a time-efficient alternative to HRV in supine position.
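
    RMSSD, the parasympathetic marker the study compares across sleep and supine segments, fits in a few lines: the root mean square of successive RR-interval differences. The RR values (in ms) below are illustrative.

```python
# RMSSD from a sequence of RR intervals in milliseconds.
import math

def rmssd(rr_ms):
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]  # successive differences
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [850, 870, 860, 880, 855]
print(round(rmssd(rr), 1))
```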

  2. Segment scheduling method for reducing 360° video streaming latency

    NASA Astrophysics Data System (ADS)

    Gudumasu, Srinivas; Asbun, Eduardo; He, Yong; Ye, Yan

    2017-09-01

    360° video is an emerging format in the media industry enabled by the growing availability of virtual reality devices. It provides the viewer a new sense of presence and immersion. Compared to conventional rectilinear video (2D or 3D), 360° video poses a new and difficult set of engineering challenges in video processing and delivery. Enabling a comfortable and immersive user experience requires very high video quality and very low latency, while the large video file size poses a challenge to delivering 360° video at scale with consistent quality. Conventionally, 360° video represented in equirectangular or other projection formats can be encoded as a single standards-compliant bitstream using existing video codecs such as H.264/AVC or H.265/HEVC. Such a method usually needs very high bandwidth to provide an immersive user experience, and at the client side much of that bandwidth, along with the computational power used to decode the video, is wasted because the user only watches a small portion (i.e., the viewport) of the entire picture. Viewport-dependent 360° video processing and delivery approaches spend more bandwidth on the viewport than on non-viewport regions and are therefore able to reduce the overall transmission bandwidth. This paper proposes a dual buffer segment scheduling algorithm for viewport adaptive streaming methods to reduce latency when switching between high quality viewports in 360° video streaming. The approach decouples the scheduling of viewport segments and non-viewport segments to ensure the viewport segment requested matches the latest user head orientation. A base layer buffer stores all lower quality segments, and a viewport buffer stores high quality viewport segments corresponding to the viewer's most recent head orientation. The scheduling scheme determines the viewport request time based on the buffer status and the head orientation. The paper also discusses how to deploy the proposed scheduling design in various viewport adaptive video streaming methods. The proposed dual buffer segment scheduling method is implemented in an end-to-end tile-based 360° viewport-adaptive video streaming platform, where the entire 360° video is divided into a number of tiles, and each tile is independently encoded into multiple quality-level representations. The client requests different quality-level representations of each tile based on the viewer's head orientation and the available bandwidth, and then composes all tiles together for rendering. The simulation results verify that the proposed dual buffer segment scheduling algorithm reduces viewport switch latency and utilizes the available bandwidth more efficiently. As a result, a more consistent immersive 360° video viewing experience can be presented to the user.
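
    The dual-buffer policy can be caricatured in a few lines: low-quality segments for all tiles are fetched eagerly to keep a base buffer healthy, while the high-quality request is deferred until just before playback so it binds to the latest head orientation. The thresholds, tile argument, and return convention are all illustrative, not the paper's design.

```python
# Toy dual-buffer scheduler: base layer first, late-bound viewport request second.
def schedule(base_buffer_s, viewport_buffer_s, head_tile,
             min_base_s=6.0, max_viewport_s=2.0):
    """Return the next segment request as (quality, tile), or None to wait."""
    if base_buffer_s < min_base_s:
        return ("low", "all")          # keep the base-layer buffer healthy
    if viewport_buffer_s < max_viewport_s:
        return ("high", head_tile)     # defer: bind to the latest head pose
    return None                        # both buffers full enough: wait

print(schedule(2.0, 0.0, head_tile=3))   # base layer comes first
print(schedule(8.0, 0.5, head_tile=3))   # then a high-quality viewport tile
print(schedule(8.0, 4.0, head_tile=3))   # nothing to request yet
```

    Keeping the viewport buffer deliberately short is the design point: a long high-quality buffer would lock in stale head orientations, which is precisely the switch latency the paper targets.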

  3. Efficient multi-atlas abdominal segmentation on clinically acquired CT with SIMPLE context learning.

    PubMed

    Xu, Zhoubing; Burke, Ryan P; Lee, Christopher P; Baucom, Rebeccah B; Poulose, Benjamin K; Abramson, Richard G; Landman, Bennett A

    2015-08-01

    Abdominal segmentation on clinically acquired computed tomography (CT) has been a challenging problem given the inter-subject variance of human abdomens and complex 3-D relationships among organs. Multi-atlas segmentation (MAS) provides a potentially robust solution by leveraging label atlases via image registration and statistical fusion. We posit that the efficiency of atlas selection requires further exploration in the context of substantial registration errors. The selective and iterative method for performance level estimation (SIMPLE) method is a MAS technique integrating atlas selection and label fusion that has proven effective for prostate radiotherapy planning. Herein, we revisit atlas selection and fusion techniques for segmenting 12 abdominal structures using clinically acquired CT. Using a re-derived SIMPLE algorithm, we show that performance on multi-organ classification can be improved by accounting for exogenous information through Bayesian priors (so called context learning). These innovations are integrated with the joint label fusion (JLF) approach to reduce the impact of correlated errors among selected atlases for each organ, and a graph cut technique is used to regularize the combined segmentation. In a study of 100 subjects, the proposed method outperformed other comparable MAS approaches, including majority vote, SIMPLE, JLF, and the Wolz locally weighted vote technique. The proposed technique provides consistent improvement over state-of-the-art approaches (median improvement of 7.0% and 16.2% in DSC over JLF and Wolz, respectively) and moves toward efficient segmentation of large-scale clinically acquired CT data for biomarker screening, surgical navigation, and data mining. Copyright © 2015 Elsevier B.V. All rights reserved.
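
    Majority vote, the simplest of the fusion baselines the proposed method is compared against, fits in a few lines: each registered atlas proposes a label per voxel and the most frequent label wins. The atlas labelings below are illustrative.

```python
# Majority-vote label fusion across registered atlases.
from collections import Counter

def majority_vote(atlas_labels):
    """atlas_labels: list of equal-length per-voxel label lists, one per atlas."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*atlas_labels)]

atlases = [
    ["liver", "liver", "spleen", "bg"],
    ["liver", "spleen", "spleen", "bg"],
    ["liver", "liver", "bg",     "bg"],
]
print(majority_vote(atlases))  # per-voxel consensus labels
```

    Methods like SIMPLE and joint label fusion improve on this baseline by weighting or discarding atlases instead of counting every vote equally.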

  4. A Dynamic Graph Cuts Method with Integrated Multiple Feature Maps for Segmenting Kidneys in 2D Ultrasound Images.

    PubMed

    Zheng, Qiang; Warner, Steven; Tasian, Gregory; Fan, Yong

    2018-02-12

    Automatic segmentation of kidneys in ultrasound (US) images remains a challenging task because of high speckle noise, low contrast, and large appearance variations of kidneys in US images. Because texture features may improve the US image segmentation performance, we propose a novel graph cuts method to segment kidney in US images by integrating image intensity information and texture feature maps. We develop a new graph cuts-based method to segment kidney US images by integrating original image intensity information and texture feature maps extracted using Gabor filters. To handle large appearance variation within kidney images and improve computational efficiency, we build a graph of image pixels close to kidney boundary instead of building a graph of the whole image. To make the kidney segmentation robust to weak boundaries, we adopt localized regional information to measure similarity between image pixels for computing edge weights to build the graph of image pixels. The localized graph is dynamically updated and the graph cuts-based segmentation iteratively progresses until convergence. Our method has been evaluated based on kidney US images of 85 subjects. The imaging data of 20 randomly selected subjects were used as training data to tune parameters of the image segmentation method, and the remaining data were used as testing data for validation. Experiment results demonstrated that the proposed method obtained promising segmentation results for bilateral kidneys (average Dice index = 0.9446, average mean distance = 2.2551, average specificity = 0.9971, average accuracy = 0.9919), better than other methods under comparison (P < .05, paired Wilcoxon rank sum tests). The proposed method achieved promising performance for segmenting kidneys in two-dimensional US images, better than segmentation methods built on any single channel of image information. 
This method will facilitate extraction of kidney characteristics that may predict important clinical outcomes such as progression of chronic kidney disease. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
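
    The texture channel in this approach comes from Gabor filter responses. Below is a minimal numpy/scipy sketch of extracting such feature maps; the kernel size, wavelength, and number of orientations are illustrative choices, not the paper's tuned parameters, and the input is a random stand-in for an ultrasound image.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=8.0, gamma=0.5):
    """Real part of a Gabor filter: a Gaussian envelope times a cosine carrier."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def texture_feature_maps(image, n_orientations=4):
    """Stack of Gabor responses, one per orientation, same shape as the image."""
    thetas = np.arange(n_orientations) * np.pi / n_orientations
    return np.stack([convolve(image, gabor_kernel(theta=t), mode='reflect')
                     for t in thetas])

rng = np.random.default_rng(0)
us_image = rng.random((64, 64))            # stand-in for a kidney US image
feats = texture_feature_maps(us_image)
print(feats.shape)                         # (4, 64, 64)
```

    These per-pixel texture responses would then be combined with intensity when computing the graph edge weights.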

  5. Lung segmentation refinement based on optimal surface finding utilizing a hybrid desktop/virtual reality user interface.

    PubMed

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R

    2013-01-01

    Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The performed experiments exhibited significant increase in performance in terms of mean absolute surface distance errors (2.54±0.75 mm prior to refinement vs. 1.11±0.43 mm post-refinement, p≪0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction was about 2 min per case. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. 
The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Optimizing Segmental Bone Regeneration Using Functionally Graded Scaffolds

    DTIC Science & Technology

    2012-10-01

    Such a model system would allow more realistic assessment of different clinical treatment options in a rapid, cost-efficient, and safe manner...along with Michaelis-Menten kinetics. A genetic algorithm [37] was adopted to minimize the cost function in Equation (14). Fig. 3 shows that simulated...associated with autografts, such as high cost, requirement of additional surgeries, donor-site morbidity, and limiting autografts for the treatment

  7. Using data mining to segment healthcare markets from patients' preference perspectives.

    PubMed

    Liu, Sandra S; Chen, Jie

    2009-01-01

    This paper aims to provide an example of how to use data mining techniques to identify patient segments regarding preferences for healthcare attributes and their demographic characteristics. Data were derived from a number of individuals who received in-patient care at a health network in 2006. Data mining and conventional hierarchical clustering with average linkage and Pearson correlation procedures are employed and compared to show how each procedure best determines segmentation variables. Data mining tools identified three differentiable segments by means of cluster analysis. These three clusters have significantly different demographic profiles. The study reveals that, compared with traditional statistical methods, data mining provides an efficient and effective tool for market segmentation. When numerous cluster variables are involved, researchers and practitioners need to incorporate factor analysis to reduce the number of variables so that clusters can be understood clearly and meaningfully. Interest in and applications of data mining are increasing in many businesses. However, this technology is seldom applied to healthcare customer experience management. The paper shows that efficient and effective application of data mining methods can aid the understanding of patient healthcare preferences.
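
    The conventional baseline named here - hierarchical clustering with average linkage and Pearson correlation - can be reproduced with scipy. The patient-preference ratings below are synthetic stand-ins (three hypothetical preference profiles plus noise), not the study's data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Hypothetical patient-preference ratings: 30 patients x 6 attributes,
# drawn around three distinct preference profiles.
profiles = np.array([[5, 1, 1, 4, 2, 1],
                     [1, 5, 4, 1, 1, 2],
                     [2, 2, 1, 1, 5, 5]], dtype=float)
ratings = np.vstack([p + rng.normal(0, 0.3, (10, 6)) for p in profiles])

# Average-linkage clustering with Pearson-correlation distance (1 - r).
Z = linkage(ratings, method='average', metric='correlation')
labels = fcluster(Z, t=3, criterion='maxclust')
print(len(np.unique(labels)))   # 3
```

    Cutting the dendrogram at three clusters recovers the three planted preference segments.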

  8. A Q-Ising model application for linear-time image segmentation

    NASA Astrophysics Data System (ADS)

    Bentrem, Frank W.

    2010-10-01

    A computational method is presented which efficiently segments digital grayscale images by directly applying the Q-state Ising (or Potts) model. Since the Potts model was first proposed in 1952, physicists have studied lattice models to gain deep insights into magnetism and other disordered systems. For some time, researchers have realized that digital images may be modeled in much the same way as these physical systems (i.e., as a square lattice of numerical values). A major drawback of using Potts model methods for image segmentation is that conventional methods process in exponential time. Advances have been made via certain approximations to reduce the segmentation process to power-law time. However, in many applications (such as for sonar imagery), real-time processing requires much greater efficiency. This article contains a description of an energy minimization technique that applies a four-state Potts (Q-Ising) model directly to the image and processes in linear time. The result is analogous to partitioning the system into regions of four classes of magnetism. This direct Potts segmentation technique is demonstrated on photographic, medical, and acoustic images.
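
    A per-pixel greedy minimization of a four-state Potts energy runs in linear time per sweep. The sketch below uses ICM-style updates as a stand-in for the article's specific technique; the data term (squared distance to four evenly spaced gray levels) and the coupling strength beta are illustrative assumptions.

```python
import numpy as np

def potts_segment(image, q=4, beta=1.0, sweeps=5):
    """Greedy (ICM-style) minimization of a Q-state Potts energy.
    Data term: squared distance to q evenly spaced gray levels;
    smoothness term: beta per disagreeing 4-neighbor. Each sweep is
    O(pixels * q), i.e. linear in the image size."""
    levels = np.linspace(image.min(), image.max(), q)
    labels = np.abs(image[..., None] - levels).argmin(-1)  # init: nearest level
    h, w = image.shape
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                nbrs = [labels[x, y] for x, y in
                        ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= x < h and 0 <= y < w]
                cost = [(image[i, j] - levels[k]) ** 2
                        + beta * sum(n != k for n in nbrs) for k in range(q)]
                labels[i, j] = int(np.argmin(cost))
    return labels

# Four-quadrant test image with a little noise.
img = np.kron(np.array([[0.0, 0.33], [0.66, 1.0]]), np.ones((8, 8)))
seg = potts_segment(img + 0.02 * np.random.default_rng(2).normal(size=img.shape))
print(np.unique(seg).size)   # 4
```

    The smoothness term plays the role of the ferromagnetic coupling: each region of constant label corresponds to one "class of magnetism."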

  9. Energy-efficient rings mechanism for greening multisegment fiber-wireless access networks

    NASA Astrophysics Data System (ADS)

    Gong, Xiaoxue; Guo, Lei; Hou, Weigang; Zhang, Lincong

    2013-07-01

    By integrating the advantages of optical and wireless communications, Fiber-Wireless (FiWi) has become a promising solution for "last-mile" broadband access. In particular, greening FiWi has attracted extensive attention, because the access network is a main energy contributor in the whole infrastructure. However, prior solutions for greening FiWi shut down or sleep unused/minimally used optical network units within a single segment, where only one optical line terminal is deployed. We propose a green mechanism referred to as the energy-efficient ring (EER) for multisegment FiWi access networks. We utilize an integer linear programming model and a genetic algorithm to generate clusters, each fully connecting its own segments over the shortest distance. Leveraging a backtracking method for each cluster, we then connect segments through fiber links, constructing the shortest-distance fiber ring. Finally, our sleeping scheme puts low-load segments to sleep and forwards the affected traffic to other active segments on the same fiber ring. Experimental results show that our EER mechanism significantly reduces energy consumption at the slight additional cost of deploying fiber links.

  10. Segmented Gamma Scanner for Small Containers of Uranium Processing Waste- 12295

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, K.E.; Smith, S.K.; Gailey, S.

    2012-07-01

    The Segmented Gamma Scanner (SGS) is commonly utilized in the assay of 55-gallon drums containing radioactive waste. Successfully deployed calibration methods include measurement of vertical line source standards in representative matrices and mathematical efficiency calibrations. The SGS technique can also be utilized to assay smaller containers, such as those used for criticality safety in uranium processing facilities. For such an application, a Can SGS System is aptly suited for the identification and quantification of radionuclides present in fuel processing wastes. Additionally, since the significant presence of uranium lumping can confound even a simple 'pass/fail' measurement regimen, the high-resolution gamma spectroscopy allows for the use of lump-detection techniques. In this application a lump correction is not required, but the application of a differential peak approach is used to simply identify the presence of U-235 lumps. The Can SGS is similar to current drum SGSs, but differs in the methodology for vertical segmentation. In the current drum SGS, the drum is placed on a rotator at a fixed vertical position while the detector, collimator, and transmission source are moved vertically to effect vertical segmentation. For the Can SGS, segmentation is more efficiently done by raising and lowering the rotator platform upon which the small container is positioned. This also reduces the complexity of the system mechanism. The application of the Can SGS introduces new challenges to traditional calibration and verification approaches. In this paper, we revisit SGS calibration methodology in the context of smaller waste containers, and as applied to fuel processing wastes. Specifically, we discuss solutions to the challenges introduced by requiring source standards to fit within the confines of the small containers and the unavailability of high-enriched uranium source standards. 
    We also discuss the implementation of a previously used technique for identifying the presence of uranium lumping. The SGS technique is a well-accepted NDA technique applicable to containers of almost any size. It assumes a homogeneous matrix and activity distribution throughout the entire container, an assumption that is at odds with the detection of lumps within the assay item typical of uranium-processing waste. This fact, in addition to the difficulty in constructing small reference standards of uranium-bearing materials, required the methodology used for performing an efficiency curve calibration to be altered. The solution discussed in this paper is demonstrated to provide good results for both the segment activity and full container activity when measuring heterogeneous source distributions. The application of this approach will need to be based on process knowledge of the assay items, as biases can be introduced if used with homogeneous, or nearly homogeneous, activity distributions. The bias will need to be quantified for each combination of container geometry and SGS scanning settings. One recommended approach for using the heterogeneous calibration discussed here is to assay each item using a homogeneous calibration initially. Review of the segment activities compared to the full container activity will signal the presence of a non-uniform activity distribution, as the segment activity will be grossly disproportionate to the full container activity. Upon seeing this result, the assay should either be reanalyzed or repeated using the heterogeneous calibration. (authors)

  11. An Introduction to System-Level, Steady-State and Transient Modeling and Optimization of High-Power-Density Thermoelectric Generator Devices Made of Segmented Thermoelectric Elements

    NASA Astrophysics Data System (ADS)

    Crane, D. T.

    2011-05-01

    High-power-density, segmented, thermoelectric (TE) elements have been intimately integrated into heat exchangers, eliminating many of the loss mechanisms of conventional TE assemblies, including the ceramic electrical isolation layer. Numerical models comprising simultaneously solved, nonlinear, energy balance equations have been created to simulate these novel architectures. Both steady-state and transient models have been created in a MATLAB/Simulink environment. The models predict data from experiments in various configurations and applications over a broad range of temperature, flow, and current conditions for power produced, efficiency, and a variety of other important outputs. Using the validated models, devices and systems are optimized using advanced multiparameter optimization techniques. Devices optimized for particular steady-state operating conditions can then be dynamically simulated in a transient operating model. The transient model can simulate a variety of operating conditions including automotive and truck drive cycles.

  12. An Efficient Pipeline for Abdomen Segmentation in CT Images.

    PubMed

    Koyuncu, Hasan; Ceylan, Rahime; Sivri, Mesut; Erdogan, Hasan

    2018-04-01

    Computed tomography (CT) scans usually include some disadvantages due to the nature of the imaging procedure, and these handicaps prevent accurate abdomen segmentation. Discontinuous abdomen edges, the bed section of the CT scanner, patient information, closeness between the edges of the abdomen and the CT boundary, poor contrast, and a narrow histogram can be regarded as the most important handicaps that occur in abdominal CT scans. Currently, one or more handicaps can arise and prevent technicians from obtaining abdomen images through simple segmentation techniques. In other words, CT scans can include the bed section of the CT scanner, a patient's diagnostic information, low-quality abdomen edges, low-level contrast, and a narrow histogram, all in one scan. These phenomena constitute a challenge, and an efficient pipeline that is unaffected by handicaps is required. In addition, analyses such as segmentation, feature selection, and classification are meaningful for a real-time diagnosis system only when the abdomen section is used directly at a specific size. A statistical pipeline is designed in this study that is unaffected by the handicaps mentioned above. Intensity-based approaches, morphological processes, and histogram-based procedures are utilized to design an efficient structure. Performance evaluation is realized in experiments on 58 CT images (16 training, 16 test, and 26 validation) that include the abdomen and one or more disadvantage(s). The first part of the data (16 training images) is used to detect the pipeline's optimum parameters, while the second and third parts are utilized to evaluate and to confirm the segmentation performance. The segmentation results are presented as the means of six performance metrics. 
    Thus, the proposed method achieves remarkable average rates for training/test/validation of 98.95/99.36/99.57% (Jaccard), 99.47/99.67/99.79% (Dice), 100/99.91/99.91% (sensitivity), 98.47/99.23/99.85% (specificity), 99.38/99.63/99.87% (classification accuracy), and 98.98/99.45/99.66% (precision). In summary, a statistical pipeline performing the task of abdomen segmentation is achieved that is not affected by these disadvantages; this is the most detailed abdomen segmentation study performed for use before organ and tumor segmentation, feature extraction, and classification.
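
    The intensity + morphology + largest-component idea behind such a pipeline can be sketched with scipy.ndimage. The threshold, structuring element, and synthetic slice below are illustrative assumptions, not the study's tuned parameters: threshold out the air, open the mask to detach thin artifacts such as the bed, then keep the largest connected blob.

```python
import numpy as np
from scipy import ndimage

def extract_body(ct_slice, threshold=-500):
    """Toy intensity + morphology pipeline for isolating the body region."""
    mask = ct_slice > threshold                        # body is denser than air
    mask = ndimage.binary_opening(mask, np.ones((3, 3)))
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))       # largest component only

# Synthetic slice: air background (-1000 HU), a body disk (~40 HU),
# and a thin "bed" line that the opening step removes.
yy, xx = np.mgrid[:128, :128]
ct = np.full((128, 128), -1000.0)
ct[(yy - 60) ** 2 + (xx - 64) ** 2 < 40 ** 2] = 40.0
ct[120, :] = 100.0                                     # bed artifact
body = extract_body(ct)
print(body[60, 64], body[120, 64])                     # True False
```

    Histogram-based steps (e.g. contrast normalization before thresholding) would slot in ahead of the intensity test in the same pipeline.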

  13. Evaluation of the predictive capacity of vertical segmental tetrapolar bioimpedance for excess weight detection in adolescents.

    PubMed

    Neves, Felipe Silva; Leandro, Danielle Aparecida Barbosa; Silva, Fabiana Almeida da; Netto, Michele Pereira; Oliveira, Renata Maria Souza; Cândido, Ana Paula Carlos

    2015-01-01

    To analyze the predictive capacity of the vertical segmental tetrapolar bioimpedance apparatus in the detection of excess weight in adolescents, using tetrapolar bioelectrical impedance as a reference. This was a cross-sectional study conducted with 411 students aged between 10 and 14 years, of both genders, enrolled in public and private schools, selected by a simple and stratified random sampling process according to the gender, age, and proportion in each institution. The sample was evaluated by the anthropometric method and underwent a body composition analysis using vertical bipolar, horizontal tetrapolar, and vertical segmental tetrapolar assessment. The ROC curve was constructed based on calculations of sensitivity and specificity for each point of the different possible measurements of body fat. The statistical analysis used Student's t-test, Pearson's correlation coefficient, and McNemar's chi-squared test. Subsequently, the variables were interpreted using SPSS software, version 17.0. Of the total sample, 53.7% were girls and 46.3%, boys. Of the total, 20% and 12.5% had overweight and obesity, respectively. The body segment measurement charts showed high values of sensitivity and specificity and high areas under the ROC curve, ranging from 0.83 to 0.95 for girls and 0.92 to 0.98 for boys, suggesting a slightly higher performance for the male gender. Body fat percentage was the most efficient criterion to detect overweight, while the trunk segmental fat was the least accurate indicator. The apparatus demonstrated good performance to predict excess weight. Copyright © 2015 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
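
    The sensitivity/specificity points and areas under the ROC curve reported here can be computed directly; the sketch below uses hypothetical body-fat scores and excess-weight labels (not the study's data), with AUC obtained via the rank-sum (Mann-Whitney) identity.

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return float((ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity at a single decision cutoff."""
    labels = np.asarray(labels, bool)
    pred = np.asarray(scores) >= cutoff
    sens = (pred & labels).sum() / labels.sum()
    spec = (~pred & ~labels).sum() / (~labels).sum()
    return float(sens), float(spec)

# Hypothetical %body-fat scores; True = excess weight by the reference method.
fat = np.array([12, 15, 18, 22, 25, 28, 31, 35], float)
excess = np.array([0, 0, 0, 0, 1, 1, 1, 1], bool)
print(roc_auc(fat, excess))            # 1.0 (perfect separation)
print(sens_spec(fat, excess, 24.0))    # (1.0, 1.0)
```

    Sweeping the cutoff over all observed scores traces out the full ROC curve from which the reported 0.83-0.98 areas are read.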

  14. Lengths of nephron tubule segments and collecting ducts in the CD-1 mouse kidney: an ontogeny study.

    PubMed

    Walton, Sarah L; Moritz, Karen M; Bertram, John F; Singh, Reetu R

    2016-11-01

    The kidney continues to mature postnatally, with significant elongation of nephron tubules and collecting ducts to maintain fluid/electrolyte homeostasis. The aim of this project was to develop methodology to estimate lengths of specific segments of nephron tubules and collecting ducts in the CD-1 mouse kidney using a combination of immunohistochemistry and design-based stereology (vertical uniform random sections with cycloid arc test system). Lengths of tubules were determined at postnatal day 21 (P21) and 2 and 12 mo of age and also in mice fed a high-salt diet throughout adulthood. Immunohistochemistry was performed to identify individual tubule segments [aquaporin-1, proximal tubules (PT) and thin descending limbs of Henle (TDLH); uromodulin, distal tubules (DT); aquaporin-2, collecting ducts (CD)]. All tubular segments increased significantly in length between P21 and 2 mo of age (PT, 602% increase; DT, 200% increase; TDLH, 35% increase; CD, 53% increase). However, between 2 and 12 mo, a significant increase in length was only observed for PT (76% increase in length). At 12 mo of age, kidneys of mice on a high-salt diet demonstrated a 27% greater length of the TDLH, but no significant change in length was detected for PT, DT, and CD compared with the normal-salt group. Our study demonstrates an efficient method of estimating lengths of specific segments of the renal tubular system. This technique can be applied to examine structure of the renal tubules in combination with the number of glomeruli in the kidney in models of altered renal phenotype. Copyright © 2016 the American Physiological Society.

  15. International Space Station (ISS) Bacterial Filter Elements (BFEs): Filter Efficiency and Pressure Drop Testing of Returned Units

    NASA Technical Reports Server (NTRS)

    Green, Robert D.; Agui, Juan H.; Vijayakumar, R.; Berger, Gordon M.; Perry, Jay L.

    2017-01-01

    The air quality control equipment aboard the International Space Station (ISS) and future deep space exploration vehicles provides the vital function of maintaining a clean cabin environment for the crew and the hardware. This becomes a serious challenge in pressurized space compartments since no outside air ventilation is possible, and a larger particulate load is imposed on the filtration system due to lack of sedimentation. The ISS Environmental Control and Life Support (ECLS) system architecture in the U.S. Segment uses a distributed particulate filtration approach consisting of traditional High-Efficiency Particulate Air (HEPA) filters deployed at multiple locations in each U.S. Segment module; these filters are referred to as Bacterial Filter Elements, or BFEs. In our previous work, we presented results of efficiency and pressure drop measurements for a sample set of two returned BFEs with a service life of 2.5 years. In this follow-on work, we present similar efficiency, pressure drop, and leak test results for a larger sample set of six returned BFEs. The results of this work can aid the ISS Program in managing BFE logistics inventory through the station's planned lifetime as well as provide insight for managing filter element logistics for future exploration missions. These results can also provide meaningful guidance for particulate filter designs under consideration for future deep space exploration missions.

  16. Filter Efficiency and Pressure Testing of Returned ISS Bacterial Filter Elements (BFEs)

    NASA Technical Reports Server (NTRS)

    Green, Robert D.; Agui, Juan H.; Berger, Gordon M.; Vijayakumar, R.; Perry, Jay L.

    2017-01-01

    The air quality control equipment aboard the International Space Station (ISS) and future deep space exploration vehicles provides the vital function of maintaining a clean cabin environment for the crew and the hardware. This becomes a serious challenge in pressurized space compartments since no outside air ventilation is possible, and a larger particulate load is imposed on the filtration system due to lack of sedimentation. The ISS Environmental Control and Life Support (ECLS) system architecture in the U.S. Segment uses a distributed particulate filtration approach consisting of traditional High-Efficiency Particulate Air (HEPA) filters deployed at multiple locations in each U.S. Segment module; these filters are referred to as Bacterial Filter Elements, or BFEs. In our previous work, we presented results of efficiency and pressure drop measurements for a sample set of two returned BFEs with a service life of 2.5 years. In this follow-on work, we present similar efficiency, pressure drop, and leak test results for a larger sample set of six returned BFEs. The results of this work can aid the ISS Program in managing BFE logistics inventory through the station's planned lifetime as well as provide insight for managing filter element logistics for future exploration missions. These results can also provide meaningful guidance for particulate filter designs under consideration for future deep space exploration missions.

  17. Automated interpretation of 3D laserscanned point clouds for plant organ segmentation.

    PubMed

    Wahabzada, Mirwaes; Paulus, Stefan; Kersting, Kristian; Mahlein, Anne-Katrin

    2015-08-08

    Plant organ segmentation from 3D point clouds is a relevant task for plant phenotyping and plant growth observation. Automated solutions are required to increase the efficiency of recent high-throughput plant phenotyping pipelines. However, plant geometrical properties vary with time, among observation scales and different plant types. The main objective of the present research is to develop a fully automated, fast and reliable data-driven approach for plant organ segmentation. The automated segmentation of plant organs using unsupervised clustering methods is crucial in cases where the goal is to get fast insights into the data, or where labeled data are unavailable or costly to obtain. For this we propose and compare data-driven approaches that are easy to realize and make the use of standard algorithms possible. Since normalized histograms, acquired from 3D point clouds, can be seen as samples from a probability simplex, we propose to map the data from the simplex space into Euclidean space using Aitchison's log-ratio transformation, or into the positive quadrant of the unit sphere using the square-root transformation. This, in turn, paves the way to a wide range of commonly used analysis techniques that are based on measuring the similarities between data points using Euclidean distance. We investigate the performance of the resulting approaches in the practical context of grouping 3D point clouds and demonstrate empirically that they lead to clustering results with high accuracy for monocotyledonous and dicotyledonous plant species with diverse shoot architecture. An automated segmentation of 3D point clouds is demonstrated in the present work. Within seconds, first insights into plant data can be derived - even from non-labelled data. This approach is applicable to different plant species with high accuracy. 
The analysis cascade can be implemented in future high-throughput phenotyping scenarios and will support the evaluation of the performance of different plant genotypes exposed to stress or in different environmental scenarios.
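
    Both transformations mentioned - Aitchison's centered log-ratio into Euclidean space and the square-root map onto the positive part of the unit sphere - take only a few lines of numpy. The toy histograms below are illustrative (e.g. curvature-bin histograms of leaf versus stem points), not data from the study.

```python
import numpy as np

def clr(hist, eps=1e-9):
    """Aitchison's centered log-ratio: maps a normalized histogram from the
    probability simplex into Euclidean space (components sum to zero)."""
    h = np.asarray(hist, float) + eps
    h = h / h.sum()
    logh = np.log(h)
    return logh - logh.mean(axis=-1, keepdims=True)

def sqrt_map(hist):
    """Square-root map: sends the simplex onto the positive quadrant of the
    unit sphere, so Euclidean distance approximates Hellinger geometry."""
    h = np.asarray(hist, float)
    return np.sqrt(h / h.sum(axis=-1, keepdims=True))

# Two toy point-feature histograms.
leaf = np.array([0.7, 0.2, 0.1])
stem = np.array([0.1, 0.3, 0.6])
print(round(float(np.linalg.norm(sqrt_map(leaf))), 6))           # 1.0
print(round(float(np.linalg.norm(clr(leaf) - clr(stem))), 3))    # 2.672
```

    After either map, standard Euclidean tools (k-means, hierarchical clustering, nearest neighbors) apply directly to the transformed histograms.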

  18. Comparison of high speed DI-LIGBT structures

    NASA Astrophysics Data System (ADS)

    Sunkavalli, Ravishankar; Baliga, B. Jayant

    1997-12-01

    The performance of the DI segmented collector (SC)-LIGBT is compared to the collector shorted (CS)-LIGBT. The SC-LIGBT allows for adjusting the tradeoff between switching speed and on-state voltage drop by simply changing the P+ collector segment width during device layout. In contrast to previously reported junction isolated (JI) devices, the DI SC-LIGBT was observed to have a turnoff speed similar to the CS-LIGBT with a higher forward drop than the conventional LIGBT. The on-state performance of the integral diodes of the SC-LIGBTs was found to be superior to that of the integral diode of the CS-LIGBT. The integral diodes of both the CS- and SC-LIGBTs were found to have much superior switching characteristics compared to a lateral PiN diode, at the expense of a higher on-state voltage drop. Thus, the superior switching characteristics of the integral diode in the SC-LIGBT complement its fast switching behavior, making this device attractive for compact, high-frequency, high-efficiency power ICs.

  19. A practical, cost-effective method for recruiting people into healthy eating behavior programs.

    PubMed

    McDonald, Paul W

    2007-04-01

    The population impact of programs designed to develop healthy eating behaviors is limited by the number of people who use them. Most public health providers and researchers rely on purchased mass media, which can be expensive, on public service announcements, or clinic-based recruitment, which can have limited reach. Few studies offer assistance for selecting high-outreach and low-cost strategies to promote healthy eating programs. The purpose of this study was 1) to determine whether classified newspaper advertising is an effective and efficient method of recruiting participants into a healthy eating program and 2) to determine whether segmenting messages by transtheoretical stage of change would help engage individuals at all levels of motivation to change their eating behavior. For 5 days in 1997, three advertisements corresponding to different stages of change were placed in a Canadian newspaper with a daily circulation of 75,000. There were 282 eligible people who responded to newspaper advertisements, and the cost was Can $1.11 (U.S. $0.72) per recruit. This cost compares favorably with the cost efficiency of mass media, direct mail, and other common promotional methods. Message type was correlated with respondent's stage of change, and this correlation suggested that attempts to send different messages to different audience segments were successful. Classified advertisements appear to be a highly cost-efficient method for recruiting a diverse range of participants into healthy eating programs and research about healthy eating.

  20. Wide Linear Corticotomy and Anterior Segmental Osteotomy Under Local Anesthesia Combined Corticision for Correcting Severe Anterior Protrusion With Insufficient Alveolar Housing.

    PubMed

    Noh, Min-Ki; Lee, Baek-Soo; Kim, Shin-Yeop; Jeon, Hyeran Helen; Kim, Seong-Hun; Nelson, Gerald

    2017-11-01

    This article presents an alternate surgical treatment method to correct a severe anterior protrusion in an adult patient with an extremely thin alveolus. To accomplish an effective and efficient anterior segmental retraction without periodontal complications, the authors performed, under local anesthesia, a wide linear corticotomy and corticision in the maxilla and an anterior segmental osteotomy in the mandible. In the maxilla, a wide linear corticotomy was performed under local anesthesia. In the maxillary first premolar area, a wide section of cortical bone was removed. Retraction forces were applied buccolingually with the aid of temporary skeletal anchorage devices. Corticision was later performed to close the residual extraction space. In the mandible, an anterior segmental osteotomy was performed and the first premolars were extracted under local anesthesia. In the maxilla, the wide linear corticotomy facilitated a bony block movement with temporary skeletal anchorage devices, without complications. The remaining extraction space after the bony block movement was closed effectively, accelerated by corticision. In the mandible, anterior segmental retraction was facilitated by the anterior segmental osteotomy performed under local anesthesia. Corticision was later employed to accelerate individual tooth movements. A wide linear corticotomy and an anterior segmental osteotomy combined with corticision can be an effective and efficient alternative to conventional orthodontic treatment in the bialveolar protrusion patient with extremely thin alveolar housing.

  1. Automatic 3D kidney segmentation based on shape constrained GC-OAAM

    NASA Astrophysics Data System (ADS)

    Chen, Xinjian; Summers, Ronald M.; Yao, Jianhua

    2011-03-01

    The kidney can be classified into three main tissue types: renal cortex, renal medulla and renal pelvis (or collecting system). Dysfunction of different renal tissue types may cause different kidney diseases. Therefore, accurate and efficient segmentation of the kidney into different tissue types plays a very important role in clinical research. In this paper, we propose an automatic 3D kidney segmentation method which segments the kidney into the three different tissue types: renal cortex, medulla and pelvis. The proposed method synergistically combines the active appearance model (AAM), live wire (LW) and graph cut (GC) methods, GC-OAAM for short. Our method consists of two main steps. First, a pseudo-3D segmentation method is employed for kidney initialization, in which the segmentation is performed slice-by-slice via a multi-object oriented active appearance model (OAAM) method. An improved iterative model refinement algorithm is proposed for the AAM optimization, which synergistically combines the AAM and LW methods. A multi-object strategy is applied to aid object initialization, and 3D model constraints are applied to the initialization result. Second, the object shape information generated from the initialization step is integrated into the GC cost computation. A multi-label GC method is used to segment the kidney into cortex, medulla and pelvis. The proposed method was tested on 19 clinical arterial-phase CT data sets. The preliminary results showed the feasibility and efficiency of the proposed method.

  2. Techniques for efficient, real-time, 3D visualization of multi-modality cardiac data using consumer graphics hardware.

    PubMed

    Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr

    2005-09-01

    We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.

  3. Multistage Coupling of Laser-Wakefield Accelerators with Curved Plasma Channels.

    PubMed

    Luo, J; Chen, M; Wu, W Y; Weng, S M; Sheng, Z M; Schroeder, C B; Jaroszynski, D A; Esarey, E; Leemans, W P; Mori, W B; Zhang, J

    2018-04-13

    Multistage coupling of laser-wakefield accelerators is essential to overcome laser energy depletion for high-energy applications such as TeV-level electron-positron colliders. Current staging schemes feed subsequent laser pulses into stages using plasma mirrors while controlling electron beam focusing with plasma lenses. Here a more compact and efficient scheme is proposed to realize the simultaneous coupling of the electron beam and the laser pulse into a second stage. A partly curved channel, integrating a straight acceleration stage with a curved transition segment, is used to guide a fresh laser pulse into a subsequent straight channel, while the electrons continue straight. This scheme benefits from a shorter coupling distance and continuous guiding of the electrons in plasma while suppressing transverse beam dispersion. Particle-in-cell simulations demonstrate that the electron beam from a previous stage can be efficiently injected into a subsequent stage for further acceleration while maintaining high capture efficiency, stability, and beam quality.

  4. Multistage Coupling of Laser-Wakefield Accelerators with Curved Plasma Channels

    NASA Astrophysics Data System (ADS)

    Luo, J.; Chen, M.; Wu, W. Y.; Weng, S. M.; Sheng, Z. M.; Schroeder, C. B.; Jaroszynski, D. A.; Esarey, E.; Leemans, W. P.; Mori, W. B.; Zhang, J.

    2018-04-01

    Multistage coupling of laser-wakefield accelerators is essential to overcome laser energy depletion for high-energy applications such as TeV-level electron-positron colliders. Current staging schemes feed subsequent laser pulses into stages using plasma mirrors while controlling electron beam focusing with plasma lenses. Here a more compact and efficient scheme is proposed to realize the simultaneous coupling of the electron beam and the laser pulse into a second stage. A partly curved channel, integrating a straight acceleration stage with a curved transition segment, is used to guide a fresh laser pulse into a subsequent straight channel, while the electrons continue straight. This scheme benefits from a shorter coupling distance and continuous guiding of the electrons in plasma while suppressing transverse beam dispersion. Particle-in-cell simulations demonstrate that the electron beam from a previous stage can be efficiently injected into a subsequent stage for further acceleration while maintaining high capture efficiency, stability, and beam quality.

  5. Fabrication, testing and modeling of a new flexible armor inspired from natural fish scales and osteoderms.

    PubMed

    Chintapalli, Ravi Kiran; Mirkhalaf, Mohammad; Dastjerdi, Ahmad Khayer; Barthelat, Francois

    2014-09-01

    Crocodiles, armadillos, turtles, fish and many other animal species have evolved flexible armored skins in the form of hard scales or osteoderms, which can be described as hard plates of finite size embedded in softer tissues. The individual hard segments provide protection from predators, while the relative motion of these segments provides the flexibility required for efficient locomotion. In this work, we duplicated these broad concepts in a bio-inspired segmented armor. Hexagonal segments of well-defined size and shape were carved within a thin glass plate using laser engraving. The engraved plate was then placed on a soft substrate which simulated soft tissues, and then punctured with a sharp needle mounted on a miniature loading stage. The resistance of our segmented armor was significantly higher when smaller hexagons were used, and our bio-inspired segmented glass displayed an increase in puncture resistance of up to 70% compared to a continuous plate of glass of the same thickness. Detailed structural analyses aided by finite elements revealed that this extraordinary improvement is due to the reduced span of individual segments, which decreases flexural stresses and delays fracture. This effect can, however, only be achieved if the plates are at least 1,000 times stiffer than the underlying substrate, which is the case for natural armor systems. Our bio-inspired system also displayed many of the attributes of natural armors: flexibility and robustness, with 'multi-hit' capability. This new segmented glass therefore suggests interesting bio-inspired strategies and mechanisms which could be systematically exploited in high-performance flexible armors. This study also provides new insights and a better understanding of the mechanics of natural armors such as scales and osteoderms.

  6. Maximum efficiency of state-space models of nanoscale energy conversion devices

    NASA Astrophysics Data System (ADS)

    Einax, Mario; Nitzan, Abraham

    2016-07-01

    The performance of nanoscale energy conversion devices is studied in the framework of state-space models where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate this general result with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.
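
    The state-space picture in this abstract can be made concrete with a small script: build the rate matrix of a toy 3-state cycle, solve the master equation for the steady state, and read off a link flux, whose input/output ratio defines the efficiency in such models. The rates below are arbitrary illustration values, not taken from the paper.

```python
# Hedged sketch of a master-equation state-space model: states are nodes,
# transitions are links with rates; the steady state gives the link fluxes.
# Pure-Python Gaussian elimination; rates are invented.

def steady_state(rates):
    """Solve W p = 0 with sum(p) = 1 for rates given as {(i, j): rate j->i}."""
    n = max(max(i, j) for i, j in rates) + 1
    W = [[0.0] * n for _ in range(n)]
    for (i, j), k in rates.items():
        W[i][j] += k   # gain of state i from state j
        W[j][j] -= k   # loss of state j
    A = [row[:] for row in W]
    A[-1] = [1.0] * n               # replace one redundant row by normalisation
    b = [0.0] * (n - 1) + [1.0]
    for col in range(n):            # forward elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    p = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        p[r] = (b[r] - sum(A[r][c] * p[c] for c in range(r + 1, n))) / A[r][r]
    return p

# 3-state cycle 0 -> 1 -> 2 -> 0, forward rate 2, backward rate 1
rates = {(1, 0): 2.0, (0, 1): 1.0,
         (2, 1): 2.0, (1, 2): 1.0,
         (0, 2): 2.0, (2, 0): 1.0}
p = steady_state(rates)
flux_01 = p[0] * rates[(1, 0)] - p[1] * rates[(0, 1)]  # net flux on link 0->1
print(p, flux_01)
```

For this uniform cycle the stationary distribution is uniform and the net cycle flux is positive; opening an intersecting link, as the paper proves, would divert part of this flux and reduce the efficiency.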

  7. Maximum efficiency of state-space models of nanoscale energy conversion devices.

    PubMed

    Einax, Mario; Nitzan, Abraham

    2016-07-07

    The performance of nanoscale energy conversion devices is studied in the framework of state-space models where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate this general result with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.

  8. Localized-atlas-based segmentation of breast MRI in a decision-making framework.

    PubMed

    Fooladivanda, Aida; Shokouhi, Shahriar B; Ahmadinejad, Nasrin

    2017-03-01

    Breast-region segmentation is an important step for density estimation and computer-aided diagnosis (CAD) systems in magnetic resonance imaging (MRI). Detection of the breast-chest wall boundary is often a difficult task due to the similarity between the gray-level values of fibroglandular tissue and pectoral muscle. This paper proposes a robust breast-region segmentation method which is applicable both to complex cases, with fibroglandular tissue connected to the pectoral muscle, and to simple cases with high-contrast boundaries. We present a decision-making framework based on geometric features and a support vector machine (SVM) to classify breasts into two main groups, complex and simple. For complex cases, breast segmentation is done using a combination of intensity-based and atlas-based techniques; for simple cases, only the intensity-based operation is employed. A novel atlas-based method, called localized-atlas, accomplishes the processes of atlas construction and registration based on the region of interest (ROI). Atlas-based segmentation is performed by relying on the chest wall template. Our approach is validated using a dataset of 210 cases. Based on the similarity between the automatic and manual segmentation results, the proposed method achieves Dice similarity coefficient, Jaccard coefficient, total overlap, false negative, and false positive values of 96.3%, 92.9%, 97.4%, 2.61%, and 4.77%, respectively. The localization error of the breast-chest wall boundary is 1.97 mm, in terms of averaged deviation distance. The achieved results prove that the suggested framework performs breast segmentation with negligible errors and efficient computational time for breasts of different sizes, shapes, and density patterns.
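
    The Dice and Jaccard overlap measures quoted above are straightforward to compute; a minimal sketch on binary masks represented as sets of pixel coordinates (real pipelines operate on image arrays, but the formulas are identical):

```python
# Overlap metrics between an automatic and a manual segmentation mask,
# each given as a set of (row, col) pixel coordinates. Toy masks below.

def dice(a, b):
    """Dice similarity coefficient: 2|A n B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def jaccard(a, b):
    """Jaccard index: |A n B| / |A u B|."""
    return len(a & b) / len(a | b)

auto = {(0, 0), (0, 1), (1, 0), (1, 1)}
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice(auto, manual), jaccard(auto, manual))  # -> 0.75 0.6
```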

  9. Assessing hippocampal development and language in early childhood: Evidence from a new application of the Automatic Segmentation Adapter Tool.

    PubMed

    Lee, Joshua K; Nordahl, Christine W; Amaral, David G; Lee, Aaron; Solomon, Marjorie; Ghetti, Simona

    2015-11-01

    Volumetric assessments of the hippocampus and other brain structures during childhood provide useful indices of brain development and correlates of cognitive functioning in typically and atypically developing children. Automated methods such as FreeSurfer promise efficient and replicable segmentation, but may include errors that are avoided by trained manual tracers. A recently devised automated correction tool that uses a machine learning algorithm to remove systematic errors, the Automatic Segmentation Adapter Tool (ASAT), was capable of substantially improving the accuracy of FreeSurfer segmentations in an adult sample [Wang et al., 2011], but the utility of ASAT has not been examined in pediatric samples. In Study 1, the validity of FreeSurfer and ASAT-corrected hippocampal segmentations was examined in 20 typically developing children and 20 children with autism spectrum disorder aged 2 and 3 years. We showed that while neither FreeSurfer nor ASAT accuracy differed by disorder or age, the accuracy of ASAT-corrected segmentations was substantially better than that of FreeSurfer segmentations in every case, using as few as 10 training examples. In Study 2, we applied ASAT to 89 typically developing children aged 2 to 4 years to examine relations between hippocampal volume, age, sex, and expressive language. Girls had smaller hippocampi overall, and in the left hippocampus this difference was larger in older than younger girls. Expressive language ability was greater in older children, and this difference was larger in those with larger hippocampi, bilaterally. Overall, this research shows that ASAT is highly reliable and useful for examinations relating behavior to hippocampal structure. © 2015 Wiley Periodicals, Inc.

  10. An Efficient, Hierarchical Viewpoint Planning Strategy for Terrestrial Laser Scanner Networks

    NASA Astrophysics Data System (ADS)

    Jia, F.; Lichti, D. D.

    2018-05-01

    Terrestrial laser scanner (TLS) techniques have been widely adopted in a variety of applications. However, unlike in geodesy or photogrammetry, insufficient attention has been paid to optimal TLS network design. It would be valuable to develop a complete design system that can automatically provide an optimal plan, especially for high-accuracy, large-volume scanning networks. To achieve this goal, one should look at the "optimality" of the solution as well as the computational complexity of reaching it. In this paper, a hierarchical TLS viewpoint planning strategy is developed to solve the optimal scanner placement problem. If the target object to be scanned is simplified into discretized wall segments, any possible viewpoint can be evaluated by a score table representing its visible segments under certain scanning geometry constraints. Thus, the design goal is to find a minimum number of viewpoints that achieves complete coverage of all wall segments. Efficiency is improved by densifying viewpoints hierarchically, instead of a "brute force" search within the entire workspace. The experimental environments in this paper were simulated from two buildings located on the University of Calgary campus. Compared with the "brute force" strategy in terms of the quality of the solutions and the runtime, it is shown that the proposed strategy can provide a scanning network of comparable quality with more than a 70% time saving.
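
    Choosing a minimum set of viewpoints that covers all wall segments is an instance of set cover, which is NP-hard; a standard greedy approximation (not the paper's hierarchical densification strategy) is sketched below with hypothetical visibility sets.

```python
# Greedy set cover as a stand-in for minimum-viewpoint planning: each
# candidate viewpoint has a set of wall segments it can see; repeatedly
# pick the viewpoint covering the most still-uncovered segments.
# Visibility sets are invented, not derived from real scanner geometry.

def greedy_viewpoints(visible, all_segments):
    """Return a small (not necessarily minimal) covering list of viewpoints."""
    uncovered = set(all_segments)
    chosen = []
    while uncovered:
        best = max(visible, key=lambda v: len(visible[v] & uncovered))
        gained = visible[best] & uncovered
        if not gained:
            raise ValueError("some segments are not visible from any viewpoint")
        chosen.append(best)
        uncovered -= gained
    return chosen

visible = {
    "v1": {1, 2, 3},
    "v2": {3, 4},
    "v3": {4, 5, 6},
    "v4": {1, 6},
}
print(greedy_viewpoints(visible, {1, 2, 3, 4, 5, 6}))  # -> ['v1', 'v3']
```

The greedy heuristic is a ln(n)-factor approximation; the paper's hierarchical densification attacks the same objective while also pruning the candidate viewpoint space.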

  11. Design oriented structural analysis

    NASA Technical Reports Server (NTRS)

    Giles, Gary L.

    1994-01-01

    Desirable characteristics and benefits of design oriented analysis methods are described and illustrated by presenting a synoptic description of the development and uses of the Equivalent Laminated Plate Solution (ELAPS) computer code. ELAPS is a design oriented structural analysis method which is intended for use in the early design of aircraft wing structures. Model preparation is minimized by using a few large plate segments to model the wing box structure. Computational efficiency is achieved by using a limited number of global displacement functions that encompass all segments over the wing planform. Coupling with other codes is facilitated since the output quantities such as deflections and stresses are calculated as continuous functions over the plate segments. Various aspects of the ELAPS development are discussed including the analytical formulation, verification of results by comparison with finite element analysis results, coupling with other codes, and calculation of sensitivity derivatives. The effectiveness of ELAPS for multidisciplinary design application is illustrated by describing its use in design studies of high speed civil transport wing structures.

  12. Monolithic stationary phases with a longitudinal gradient of porosity.

    PubMed

    Urban, Jiří; Hájek, Tomáš; Svec, Frantisek

    2017-04-01

    The duration of the hypercrosslinking reaction has been used to control the extent of small-pore formation in polymer-based monolithic stationary phases. Segments of five columns hypercrosslinked for 30-360 min were coupled via zero-volume unions to prepare columns with segmented porosity gradients. The steepness of the porosity gradient affected column efficiency, mass transfer resistance, and the separation of both small-molecule alkylbenzenes and high-molar-mass polystyrene standards. In addition, the segmented column with the steepest porosity gradient was prepared as a single column with a continuous porosity gradient, and the steepness of the porosity gradient in this type of column was tuned. Compared to a completely hypercrosslinked column, the column with the shallower gradient produced comparable size-exclusion separation of polystyrene standards but allowed higher column permeability. The completely hypercrosslinked column and the column with the porosity gradient were successfully coupled in online two-dimensional liquid chromatography of polymers. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Color image segmentation with support vector machines: applications to road signs detection.

    PubMed

    Cyganek, Bogusław

    2008-08-01

    In this paper we propose an efficient color segmentation method based on the Support Vector Machine classifier operating in a one-class mode. The method has been developed especially for a road-sign recognition system, although it can be used in other applications. The main advantage of the proposed method comes from the fact that the segmentation of characteristic colors is performed not in the original but in a higher-dimensional feature space, where better data encapsulation with a linear hypersphere can usually be achieved. Moreover, the classifier does not try to capture the whole distribution of the input data, which is often difficult to achieve. Instead, characteristic data samples, called support vectors, are selected that allow construction of the tightest hypersphere enclosing the majority of the input data. Classification of a test sample then simply consists of measuring its distance to the centre of the found hypersphere. The experimental results show high accuracy and speed of the proposed method.
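
    A heavily simplified stand-in for this one-class idea can be sketched as follows. The real method (an SVDD-style one-class SVM) finds the tightest hypersphere in a kernel feature space via support vectors; the toy version below just uses a centroid and a quantile radius in the input colour space, with invented sample values.

```python
# Simplified "hypersphere" colour model: enclose training colour samples in
# a sphere and classify a pixel by its distance to the centre. NOT the SVDD
# optimisation itself, only an illustration of the enclose-and-threshold idea.
import math

def fit_hypersphere(samples, quantile=0.95):
    """Centroid plus a quantile radius over the training distances."""
    dim = len(samples[0])
    centre = [sum(s[d] for s in samples) / len(samples) for d in range(dim)]
    dists = sorted(math.dist(s, centre) for s in samples)
    radius = dists[min(len(dists) - 1, int(quantile * len(dists)))]
    return centre, radius

def is_characteristic_colour(pixel, centre, radius):
    return math.dist(pixel, centre) <= radius

# invented RGB samples of a "road-sign red"
reds = [(200, 20, 25), (210, 30, 20), (190, 25, 35), (205, 15, 30)]
centre, radius = fit_hypersphere(reds)
print(is_characteristic_colour((198, 22, 28), centre, radius))  # reddish pixel
print(is_characteristic_colour((40, 180, 60), centre, radius))  # greenish pixel
```

The kernel trick in the actual method effectively makes this sphere nonlinear in the original colour space, which is why the paper works in a higher-dimensional feature space.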

  14. Speech Rhythms and Multiplexed Oscillatory Sensory Coding in the Human Brain

    PubMed Central

    Gross, Joachim; Hoogenboom, Nienke; Thut, Gregor; Schyns, Philippe; Panzeri, Stefano; Belin, Pascal; Garrod, Simon

    2013-01-01

    Cortical oscillations are likely candidates for segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right and amplitude entrainment is stronger in the left auditory cortex. Furthermore, edges in the speech envelope phase reset auditory cortex oscillations thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., brain-speech and also within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that segmentation and coding of speech relies on a nested hierarchy of entrained cortical oscillations. PMID:24391472

  15. Proton exchange membrane materials for the advancement of direct methanol fuel-cell technology

    DOEpatents

    Cornelius, Christopher J [Albuquerque, NM

    2006-04-04

    A new class of hybrid organic-inorganic materials, and methods of synthesis, that can be used as a proton exchange membrane in a direct methanol fuel cell. In contrast with Nafion® PEM materials, which have random sulfonation, the new class of materials has ordered sulfonation achieved through self-assembly of alternating polyimide segments of different molecular weights comprising, for example, a highly sulfonated hydrophilic PDA-DASA polyimide segment alternating with an unsulfonated hydrophobic 6FDA-DAS polyimide segment. An inorganic phase, e.g., 0.5-5 wt% TEOS, can be incorporated in the sulfonated polyimide copolymer to further improve its properties. The new materials exhibit reduced swelling when exposed to water, increased thermal stability, and decreased O2 and H2 gas permeability, while retaining proton conductivities similar to Nafion®. These improved properties may allow direct methanol fuel cells to operate at higher temperatures and with higher efficiencies due to reduced methanol crossover.

  16. Learning from Demonstration: Generalization via Task Segmentation

    NASA Astrophysics Data System (ADS)

    Ettehadi, N.; Manaffam, S.; Behal, A.

    2017-10-01

    In this paper, a motion segmentation algorithm design is presented with the goal of segmenting a learned trajectory from demonstration such that each segment is locally maximally different from its neighbors. This segmentation is then exploited to appropriately scale (dilate/squeeze and/or rotate) a nominal trajectory learned from a few demonstrations on a fixed experimental setup such that it is applicable to different experimental settings without expanding the dataset and/or retraining the robot. The algorithm is computationally efficient in the sense that it allows facile transition between different environments. Experimental results using the Baxter robotic platform showcase the ability of the algorithm to accurately transfer a feeding task.

  17. A segmentation algorithm based on image projection for complex text layout

    NASA Astrophysics Data System (ADS)

    Zhu, Wangsheng; Chen, Qin; Wei, Chuanyi; Li, Ziyang

    2017-10-01

    Segmentation is an important part of layout analysis. Considering the efficiency advantage of the top-down approach and the particularity of the object, a projection-based layout segmentation algorithm is proposed. First, the algorithm partitions the text image into several columns; then, for each column, a scanning projection is computed, and the text image is divided into several sub-regions through multiple projections. The experimental results show that this method inherits the rapid calculation speed of the projection approach while avoiding the effect of arc image information on page segmentation, and it can accurately segment text images with complex layouts.
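
    The projection step the abstract relies on is easy to sketch: summing a binary text image along one axis gives a profile whose zero runs mark the gaps between columns (and, applied per column, between rows). The tiny binary image below is made up for illustration.

```python
# Projection-profile splitting: column sums of a binary image locate blank
# gaps; contiguous non-zero runs are the column regions. Toy 4x5 image.

def split_on_gaps(profile):
    """Return (start, end) index ranges of contiguous non-zero runs."""
    runs, start = [], None
    for i, v in enumerate(profile):
        if v and start is None:
            start = i
        elif not v and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(profile)))
    return runs

image = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
    [1, 1, 0, 0, 1],
]
# vertical projection: sum each column; zero columns separate text columns
v_proj = [sum(row[c] for row in image) for c in range(len(image[0]))]
print(split_on_gaps(v_proj))  # -> [(0, 2), (4, 5)]
```

Repeating the same run-finding on the horizontal projection of each detected column yields the sub-regions the abstract describes.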

  18. Sarment: Python modules for HMM analysis and partitioning of sequences.

    PubMed

    Guéguen, Laurent

    2005-08-15

    Sarment is a package of Python modules for easy building and manipulation of sequence segmentations. It provides efficient implementations of the usual algorithms for hidden Markov model computation, as well as for maximal predictive partitioning. Owing to its very large variety of criteria for computing segmentations, Sarment can handle many kinds of models. Thanks to object-oriented programming, the results of the segmentation are very easy to manipulate.
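
    At the core of HMM-based sequence segmentation of the kind Sarment supports is the Viterbi algorithm; a plain log-space Python version follows, with a made-up two-state GC-content-style model. This is a generic sketch, not Sarment's API.

```python
# Viterbi decoding in log space: find the most probable hidden-state path,
# which induces a segmentation of the sequence. Model parameters invented.
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable state path for an observation sequence."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prev, score = max(
                ((p, V[-2][p] + math.log(trans_p[p][s])) for p in states),
                key=lambda t: t[1])
            V[-1][s] = score + math.log(emit_p[s][o])
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

states = ("AT-rich", "GC-rich")
start_p = {"AT-rich": 0.5, "GC-rich": 0.5}
trans_p = {"AT-rich": {"AT-rich": 0.9, "GC-rich": 0.1},
           "GC-rich": {"AT-rich": 0.1, "GC-rich": 0.9}}
emit_p = {"AT-rich": {"A": 0.35, "T": 0.35, "G": 0.15, "C": 0.15},
          "GC-rich": {"A": 0.15, "T": 0.15, "G": 0.35, "C": 0.35}}
print(viterbi("AATTGCGCGC", states, start_p, trans_p, emit_p))
```

Each run of identical states in the returned path is one segment; the sticky self-transition probabilities (0.9) discourage spurious short segments.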

  19. A promising limited angular computed tomography reconstruction via segmentation based regional enhancement and total variation minimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Wenkun; Zhang, Hanming; Li, Lei

    2016-08-15

    X-ray computed tomography (CT) is a powerful and common inspection technique used for industrial non-destructive testing. However, large-sized and heavily absorbing objects cause the formation of artifacts because of either the lack of specimen penetration in specific directions or the acquisition of data from only a limited angular range of views. Although sparse optimization-based methods, such as the total variation (TV) minimization method, can suppress artifacts to some extent, reconstructing the images such that they converge to accurate values remains difficult because of the deficiency in continuous angular data and inconsistency in the projections. To address this problem, we use the idea of regional enhancement of the true values and suppression of the illusory artifacts outside the region to develop an efficient iterative algorithm. This algorithm is based on the combination of regional enhancement of the true values and TV minimization for the limited angular reconstruction. In this algorithm, a segmentation approach is introduced to distinguish the regions of different image knowledge and generate the support mask of the image. A new regularization term, which contains the support knowledge to enhance the true values of the image, is incorporated into the objective function. Then, the proposed optimization model is solved efficiently by variable splitting and the alternating direction method. A compensation approach is also designed to extract useful information from the initial projections and thus reduce false segmentation results and correct the segmentation support and the segmented image. The results obtained from comparing both simulation studies and real CT data set reconstructions indicate that the proposed algorithm generates a more accurate image than do the other reconstruction methods. The experimental results show that this algorithm can produce high-quality reconstructed images for the limited angular reconstruction and suppress the illusory artifacts caused by the deficiency in valid data.
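
    The TV term that this method minimises alongside its support-mask regulariser is simple to state in code; a minimal anisotropic TV of a toy image (values invented) is shown below.

```python
# Anisotropic total variation: sum of absolute horizontal and vertical
# pixel differences. TV is small for piecewise-constant images, which is
# why minimising it suppresses streak-like limited-angle artifacts.

def total_variation(img):
    """Sum of |horizontal differences| + |vertical differences|."""
    h, w = len(img), len(img[0])
    tv = 0.0
    for r in range(h):
        for c in range(w):
            if c + 1 < w:
                tv += abs(img[r][c + 1] - img[r][c])
            if r + 1 < h:
                tv += abs(img[r + 1][c] - img[r][c])
    return tv

flat = [[1, 1], [1, 1]]   # constant image: TV = 0
edge = [[0, 1], [0, 1]]   # one clean edge: small TV
print(total_variation(flat), total_variation(edge))  # -> 0.0 2.0
```

In the paper's objective this term is combined with a data-fidelity term and the support-mask regulariser, and the sum is minimised by variable splitting and the alternating direction method.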

  20. A promising limited angular computed tomography reconstruction via segmentation based regional enhancement and total variation minimization

    NASA Astrophysics Data System (ADS)

    Zhang, Wenkun; Zhang, Hanming; Li, Lei; Wang, Linyuan; Cai, Ailong; Li, Zhongguo; Yan, Bin

    2016-08-01

    X-ray computed tomography (CT) is a powerful and common inspection technique used for industrial non-destructive testing. However, large-sized and heavily absorbing objects cause the formation of artifacts because of either the lack of specimen penetration in specific directions or the acquisition of data from only a limited angular range of views. Although sparse optimization-based methods, such as the total variation (TV) minimization method, can suppress artifacts to some extent, reconstructing the images such that they converge to accurate values remains difficult because of the deficiency in continuous angular data and inconsistency in the projections. To address this problem, we use the idea of regional enhancement of the true values and suppression of the illusory artifacts outside the region to develop an efficient iterative algorithm. This algorithm is based on the combination of regional enhancement of the true values and TV minimization for the limited angular reconstruction. In this algorithm, a segmentation approach is introduced to distinguish the regions of different image knowledge and generate the support mask of the image. A new regularization term, which contains the support knowledge to enhance the true values of the image, is incorporated into the objective function. Then, the proposed optimization model is solved efficiently by variable splitting and the alternating direction method. A compensation approach is also designed to extract useful information from the initial projections and thus reduce false segmentation results and correct the segmentation support and the segmented image. The results obtained from comparing both simulation studies and real CT data set reconstructions indicate that the proposed algorithm generates a more accurate image than do the other reconstruction methods. The experimental results show that this algorithm can produce high-quality reconstructed images for the limited angular reconstruction and suppress the illusory artifacts caused by the deficiency in valid data.

  1. A comprehensive segmentation analysis of crude oil market based on time irreversibility

    NASA Astrophysics Data System (ADS)

    Xia, Jianan; Shang, Pengjian; Lu, Dan; Yin, Yi

    2016-05-01

    In this paper, we perform a comprehensive entropic segmentation analysis of crude oil futures prices from 1983 to 2014, using the Jensen-Shannon divergence as the statistical distance between segments, and analyze the results from the original series S and the series beginning in 1986 (marked as S∗) to find common segments that share the same boundaries. We then apply time-irreversibility analysis to each segment to divide all segments into two groups according to their degree of asymmetry. Based on the temporal distribution of the common segments and the high-asymmetry segments, we find that these two types of segments appear alternately and essentially do not overlap in the daily group, whereas the common portions are also high-asymmetry segments in the weekly group. In addition, the temporal distribution of the common segments is fairly close to the timing of crises, wars, and other events, because the shock from severe events to the oil price makes these common segments quite different from their adjacent segments. The common segments can be confirmed in the daily group series or the weekly group series owing to the large divergence between common segments and their neighbors, while the identification of high-asymmetry segments helps to identify the segments that are not badly affected by events and can recover to steady states automatically. Finally, we rearrange the segments by merging connected common segments or high-asymmetry segments into a single segment, and conjoin the connected segments that are neither common nor highly asymmetric.
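
    The Jensen-Shannon divergence used here as the distance between segments can be computed directly from discretised distributions; the toy histograms below are illustrative only.

```python
# Jensen-Shannon divergence between two discrete distributions:
# JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with m = (p + q) / 2.
# It is symmetric and bounded (by ln 2 in natural-log units).
import math

def jensen_shannon(p, q):
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.5, 0.5, 0.0]   # toy return histogram of one segment
q = [0.0, 0.5, 0.5]   # toy return histogram of a neighbouring segment
print(jensen_shannon(p, p), jensen_shannon(p, q))
```

In the entropic segmentation procedure, a boundary is placed where the JS divergence between the two candidate halves of a window is maximal and statistically significant.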

  2. A region-based segmentation method for ultrasound images in HIFU therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Dong, E-mail: dongz@whu.edu.cn; Liu, Yu; Yang, Yan

    Purpose: Precisely and efficiently locating a tumor with less manual intervention in ultrasound-guided high-intensity focused ultrasound (HIFU) therapy is one of the keys to guaranteeing the therapeutic result and improving the efficiency of the treatment. The segmentation of ultrasound images has always been difficult due to the influences of speckle, acoustic shadows, and signal attenuation, as well as the variety of tumor appearance. The quality of HIFU guidance images is even poorer than that of conventional diagnostic ultrasound images because the ultrasonic probe used for HIFU guidance usually obtains images without making contact with the patient's body. Therefore, the segmentation becomes more difficult. To solve the segmentation problem of ultrasound guidance images in the treatment planning procedure for HIFU therapy, a novel region-based segmentation method for uterine fibroids in HIFU guidance images is proposed. Methods: Tumor partitioning in HIFU guidance images without manual intervention is achieved by a region-based split-and-merge framework. A new iterative multiple region growing algorithm is proposed to first split the image into homogeneous regions (superpixels). The features extracted within these homogeneous regions will be more stable than those extracted within the conventional neighborhood of a pixel. The split regions are then merged by a superpixel-based adaptive spectral clustering algorithm. To ensure that the superpixels belonging to the same tumor can be clustered together in the merging process, a particular construction strategy for the similarity matrix is adopted for the spectral clustering: the similarity matrix is constructed by taking advantage of a combination of specifically selected first-order and second-order texture features computed from the gray levels and the gray level co-occurrence matrices, respectively. The tumor region is picked out automatically from the background regions by an algorithm according to a priori information about the tumor position, shape, and size. Additionally, an appropriate cluster number for spectral clustering can be determined by the same algorithm; thus, the automatic segmentation of the tumor region is achieved. Results: To evaluate the performance of the proposed method, 50 uterine fibroid ultrasound images from different patients receiving HIFU therapy were segmented, and the obtained tumor contours were compared with those delineated by an experienced radiologist. For area-based evaluation results, the mean values of the true positive ratio, the false positive ratio, and the similarity were 94.42%, 4.71%, and 90.21%, respectively, and the corresponding standard deviations were 2.54%, 3.12%, and 3.50%, respectively. For distance-based evaluation results, the mean values of the normalized Hausdorff distance and the normalized mean absolute distance were 4.93% and 0.90%, respectively, and the corresponding standard deviations were 2.22% and 0.34%, respectively. The running time of the segmentation process was 12.9 s for a 318 × 333 (pixels) image. Conclusions: Experiments show that the proposed method can segment the tumor region accurately and efficiently with less manual intervention, which provides for the possibility of automatic segmentation and real-time guidance in HIFU therapy.
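
    The split step (iterative multiple region growing into superpixels) can be illustrated with a minimal single-seed region grower: a BFS over 4-connected neighbours whose intensity stays within a fixed tolerance of the seed. The grid and tolerance are invented; the paper's algorithm grows multiple regions iteratively with adaptive criteria.

```python
# Minimal region growing on a toy intensity grid: collect 4-connected
# pixels within a tolerance of the seed intensity (BFS).
from collections import deque

def region_grow(img, seed, tol):
    """Return the set of (row, col) pixels connected to seed within tol."""
    h, w = len(img), len(img[0])
    seed_val = img[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(img[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

img = [
    [10, 11, 50, 52],
    [12, 10, 51, 50],
    [11, 12, 49, 53],
]
print(sorted(region_grow(img, (0, 0), tol=5)))  # left homogeneous block only
```

Features computed over such homogeneous regions (rather than per-pixel neighbourhoods) are what the merge step's spectral clustering operates on.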

  3. PaCeQuant: A Tool for High-Throughput Quantification of Pavement Cell Shape Characteristics

    PubMed Central

    Poeschl, Yvonne; Plötner, Romina

    2017-01-01

    Pavement cells (PCs) are the most frequently occurring cell type in the leaf epidermis and play important roles in leaf growth and function. In many plant species, PCs form highly complex jigsaw-puzzle-shaped cells with interlocking lobes. Understanding of their development is of high interest for plant science research because of their importance for leaf growth and hence for plant fitness and crop yield. Studies of PC development, however, are limited, because robust methods are lacking that enable automatic segmentation and quantification of PC shape parameters suitable to reflect their cellular complexity. Here, we present our new ImageJ-based tool, PaCeQuant, which provides a fully automatic image analysis workflow for PC shape quantification. PaCeQuant automatically detects cell boundaries of PCs from confocal input images and enables manual correction of automatic segmentation results or direct import of manually segmented cells. PaCeQuant simultaneously extracts 27 shape features that include global, contour-based, skeleton-based, and PC-specific object descriptors. In addition, we included a method for classification and analysis of lobes at two-cell junctions and three-cell junctions, respectively. We provide an R script for graphical visualization and statistical analysis. We validated PaCeQuant by extensive comparative analysis to manual segmentation and existing quantification tools and demonstrated its usability to analyze PC shape characteristics during development and between different genotypes. PaCeQuant thus provides a platform for robust, efficient, and reproducible quantitative analysis of PC shape characteristics that can easily be applied to study PC development in large data sets. PMID:28931626
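
    Among the contour-based descriptors PaCeQuant extracts, circularity (4·pi·area / perimeter²) is a classic example that decreases as cell lobing increases; a sketch on polygonal contours via the shoelace formula follows (the shapes are invented, and this is generic geometry, not PaCeQuant's own code).

```python
# Circularity of a polygonal contour: 1.0 for a circle, lower for lobed or
# elongated shapes. Area via the shoelace formula, perimeter via edge sums.
import math

def polygon_area(pts):
    """Shoelace formula for a simple polygon given as (x, y) vertices."""
    n = len(pts)
    return abs(sum(pts[i][0] * pts[(i + 1) % n][1]
                   - pts[(i + 1) % n][0] * pts[i][1] for i in range(n))) / 2

def perimeter(pts):
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

def circularity(pts):
    p = perimeter(pts)
    return 4 * math.pi * polygon_area(pts) / (p * p)

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
strip = [(0, 0), (8, 0), (8, 1), (0, 1)]   # elongated, lower circularity
print(round(circularity(square), 3), round(circularity(strip), 3))
```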

  4. Tone classification of syllable-segmented Thai speech based on multilayer perceptron

    NASA Astrophysics Data System (ADS)

    Satravaha, Nuttavudh; Klinkhachorn, Powsiri; Lass, Norman

    2002-05-01

    Thai is a monosyllabic tonal language that uses tone to convey lexical information about the meaning of a syllable. Thus to completely recognize a spoken Thai syllable, a speech recognition system not only has to recognize a base syllable but also must correctly identify a tone. Hence, tone classification of Thai speech is an essential part of a Thai speech recognition system. Thai has five distinctive tones ("mid," "low," "falling," "high," and "rising") and each tone is represented by a single fundamental frequency (F0) pattern. However, several factors, including tonal coarticulation, stress, intonation, and speaker variability, affect the F0 pattern of a syllable in continuous Thai speech. In this study, an efficient method for tone classification of syllable-segmented Thai speech, which incorporates the effects of tonal coarticulation, stress, and intonation, as well as a method to perform automatic syllable segmentation, were developed. Acoustic parameters were used as the main discriminating parameters. The F0 contour of a segmented syllable was normalized by using a z-score transformation before being presented to a tone classifier. The proposed system was evaluated on 920 test utterances spoken by 8 speakers. A recognition rate of 91.36% was achieved by the proposed system.
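The z-score normalization of an F0 contour described in this record is simple to sketch. A minimal illustration in Python/NumPy (variable names are my own, not from the paper):

```python
import numpy as np

def zscore_normalize(f0):
    """Normalize an F0 contour to zero mean and unit variance (z-score)."""
    f0 = np.asarray(f0, dtype=float)
    mu, sigma = f0.mean(), f0.std()
    if sigma == 0:
        return np.zeros_like(f0)  # flat contour: nothing to scale
    return (f0 - mu) / sigma

# Example: a rising-tone-like contour in Hz
contour = [110, 112, 118, 130, 150]
z = zscore_normalize(contour)
```

This removes per-speaker pitch range and register, so the classifier sees only contour shape.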

  5. Efficient detection of wound-bed and peripheral skin with statistical colour models.

    PubMed

    Veredas, Francisco J; Mesa, Héctor; Morente, Laura

    2015-04-01

    A pressure ulcer is a clinical pathology of localised damage to the skin and underlying tissue caused by pressure, shear or friction. Reliable diagnosis supported by precise wound evaluation is crucial for successful treatment decisions. This paper presents a computer-vision approach to wound-area detection based on statistical colour models. Starting with a training set consisting of 113 real wound images, colour histogram models are created for four different tissue types. Back-projections of colour pixels on those histogram models are used, from a Bayesian perspective, to obtain an estimate of the posterior probability that a pixel belongs to any of those tissue classes. Performance measures obtained from contingency tables based on a gold standard of segmented images supplied by experts were used for model selection. The resulting fitted model was validated on a set of 322 wound images manually segmented and labelled by expert clinicians. The final fitted segmentation model shows robustness and gives high mean performance rates [AUC: .9426 (SD .0563); accuracy: .8777 (SD .0799); F-score: .7389 (SD .1550); Cohen's kappa: .6585 (SD .1787)] when segmenting significant wound areas that include healing tissues.
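The Bayesian use of histogram back-projections amounts to Bayes' rule over tissue classes. A toy sketch, assuming made-up likelihoods and priors (the paper derives them from histograms of labelled pixels; the class names and numbers below are purely illustrative):

```python
import numpy as np

# Toy per-class likelihoods P(colour bin | tissue class); rows = classes.
# These numbers are invented for illustration only.
likelihood = np.array([
    [0.70, 0.20, 0.10],   # e.g. granulation
    [0.10, 0.60, 0.30],   # e.g. slough
    [0.05, 0.15, 0.80],   # e.g. necrotic
])
prior = np.array([0.5, 0.3, 0.2])  # P(class)

def posterior(colour_bin):
    """Bayes rule: P(class | colour) is proportional to P(colour | class) * P(class)."""
    unnorm = likelihood[:, colour_bin] * prior
    return unnorm / unnorm.sum()

p = posterior(0)  # posterior over the three classes for colour bin 0
```

Each pixel is then assigned to (or scored for) the class with the highest posterior.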

  6. High-fidelity and low-latency mobile fronthaul based on segment-wise TDM and MIMO-interleaved arraying.

    PubMed

    Li, Longsheng; Bi, Meihua; Miao, Xin; Fu, Yan; Hu, Weisheng

    2018-01-22

    In this paper, we demonstrate for the first time an advanced arraying scheme in the TDM-based analog mobile fronthaul system to enhance signal fidelity, in which a segment of the antenna carrier signal (AxC) with an appropriate length serves as the granularity for TDM aggregation. Without introducing extra processing, the entire system can be realized with simple DSP. A theoretical analysis is presented to verify the feasibility of this scheme, and to evaluate its effectiveness, an experiment with ~7-GHz bandwidth and twenty 8 × 8 MIMO group signals is conducted. Results show that the segment-wise TDM is fully compatible with the MIMO-interleaved arraying, which is employed in an existing TDM scheme to improve bandwidth efficiency. Moreover, compared to existing TDM schemes, our scheme not only satisfies the latency requirement of 5G but also significantly reduces the multiplexed signal bandwidth, hence providing higher signal fidelity in the bandwidth-limited fronthaul system. The experimental EVM results verify that 256-QAM is supportable using the segment-wise TDM arraying with only 250-ns latency, whereas with ordinary TDM arraying only 64-QAM is attainable.
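The core idea of segment-wise TDM aggregation, multiplexing fixed-length segments of each AxC stream in turn rather than whole streams or single samples, can be sketched with array reshapes. This is only a conceptual illustration of the aggregation granularity, not the paper's DSP chain:

```python
import numpy as np

def segment_tdm(axc, seg_len):
    """Time-multiplex AxC streams segment by segment.

    axc: (n_streams, n_samples) array, n_samples divisible by seg_len.
    Output order: seg0 of stream0, seg0 of stream1, ..., seg1 of stream0, ...
    """
    n, m = axc.shape
    assert m % seg_len == 0
    segs = axc.reshape(n, m // seg_len, seg_len)   # (stream, segment, sample)
    return segs.transpose(1, 0, 2).reshape(-1)     # interleave by segment

def segment_tdm_inverse(stream, n_streams, seg_len):
    """Recover the original per-stream samples from the multiplexed stream."""
    segs = stream.reshape(-1, n_streams, seg_len)
    return segs.transpose(1, 0, 2).reshape(n_streams, -1)

x = np.arange(16).reshape(2, 8)   # two toy AxC streams
mux = segment_tdm(x, seg_len=4)
```

Longer segments reduce switching overhead (bandwidth), while shorter segments reduce the per-stream buffering delay; the paper's "appropriate length" trades these off.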

  7. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation techniques cannot meet the processing and storage requirements of massive imagery. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process and builds a cheap and efficient computer cluster that parallelizes the mean shift segmentation algorithm under the MapReduce model. The approach not only preserves segmentation quality but also improves segmentation speed, better meeting real-time requirements. The MapReduce-based parallel mean shift segmentation algorithm is therefore of practical significance and realization value.
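The mean shift procedure at the heart of this record iteratively moves a point to the mean of its neighbours until it reaches a density mode. A minimal single-point, flat-kernel sketch (this is the serial core only, not the MapReduce parallelization):

```python
import numpy as np

def mean_shift_point(x, data, bandwidth, n_iter=50):
    """Shift one point toward the local density mode with a flat kernel."""
    for _ in range(n_iter):
        # neighbours within the bandwidth radius
        neighbours = data[np.linalg.norm(data - x, axis=1) <= bandwidth]
        new_x = neighbours.mean(axis=0)
        if np.linalg.norm(new_x - x) < 1e-6:   # converged to a mode
            break
        x = new_x
    return x

rng = np.random.default_rng(0)
# two well-separated toy clusters standing in for image feature vectors
data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
mode = mean_shift_point(data[0], data, bandwidth=1.0)
```

In a MapReduce setting, each mapper would run this per-pixel mode seeking on its data split, and reducers would merge pixels that converge to the same mode.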

  8. Segmentation of Planar Surfaces from Laser Scanning Data Using the Magnitude of Normal Position Vector for Adaptive Neighborhoods.

    PubMed

    Kim, Changjae; Habib, Ayman; Pyeon, Muwook; Kwon, Goo-rak; Jung, Jaehoon; Heo, Joon

    2016-01-22

    Diverse approaches to laser point segmentation have been proposed since the emergence of the laser scanning system. Most of these segmentation techniques, however, suffer from limitations such as sensitivity to the choice of seed points, lack of consideration of the spatial relationships among points, and inefficient performance. In an effort to overcome these drawbacks, this paper proposes a segmentation methodology that: (1) reduces the dimensions of the attribute space; (2) considers the attribute similarity and the proximity of the laser point simultaneously; and (3) works well with both airborne and terrestrial laser scanning data. A neighborhood definition based on the shape of the surface increases the homogeneity of the laser point attributes. The magnitude of the normal position vector is used as an attribute for reducing the dimension of the accumulator array. The experimental results demonstrate, through both qualitative and quantitative evaluations, the outcomes' high level of reliability. The proposed segmentation algorithm provided 96.89% overall correctness, 95.84% completeness, a 0.25 m overall mean value of centroid difference, and less than 1° of angle difference. The performance of the proposed approach was also verified with a large dataset and compared with other approaches. Additionally, the evaluation of the sensitivity of the thresholds was carried out. In summary, this paper proposes a robust and efficient segmentation methodology for abstraction of an enormous number of laser points into plane information.
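The one-dimensional attribute named in the title, the magnitude of the normal position vector, is the perpendicular distance from the origin to the locally fitted plane. A sketch under the assumption that the local plane is fitted by PCA over a point's neighbourhood (the neighbourhood construction itself is simplified away here):

```python
import numpy as np

def plane_normal_distance(points):
    """Fit a plane to a neighbourhood via PCA and return the magnitude of
    the normal position vector: the perpendicular distance from the origin
    to the plane, usable as a 1-D segmentation attribute."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                    # direction of least variance
    return abs(normal @ centroid)      # |origin-to-plane distance|

# Points sampled on the plane z = 2 should all map to attribute value 2.
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, (30, 2))
pts = np.column_stack([xy, np.full(30, 2.0)])
attr = plane_normal_distance(pts)
```

Because coplanar points share one attribute value, the accumulator array for grouping them becomes one-dimensional, which is exactly the dimension reduction the abstract describes.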

  10. Efficient graph-cut tattoo segmentation

    NASA Astrophysics Data System (ADS)

    Kim, Joonsoo; Parra, Albert; Li, He; Delp, Edward J.

    2015-03-01

    Law enforcement is interested in exploiting tattoos as an information source to identify, track and prevent gang-related crimes. Many tattoo image retrieval systems have been described. In a retrieval system tattoo segmentation is an important step for retrieval accuracy since segmentation removes background information in a tattoo image. Existing segmentation methods do not extract the tattoo very well when the background includes textures and color similar to skin tones. In this paper we describe a tattoo segmentation approach by determining skin pixels in regions near the tattoo. In these regions graph-cut segmentation using a skin color model and a visual saliency map is used to find skin pixels. After segmentation we determine which set of skin pixels are connected with each other that form a closed contour including a tattoo. The regions surrounded by the closed contours are considered tattoo regions. Our method segments tattoos well when the background includes textures and color similar to skin.
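The skin colour model used to find skin pixels around a tattoo can be as simple as a Gaussian in colour space. A hypothetical sketch, assuming a single-Gaussian RGB skin model with invented parameters (the paper's actual model and saliency map are not reproduced here):

```python
import numpy as np

# Hypothetical skin-colour model: mean and covariance in RGB, invented
# for illustration; a real model would be fitted to labelled skin samples.
skin_mean = np.array([180.0, 130.0, 110.0])
skin_cov = np.diag([400.0, 300.0, 300.0])
cov_inv = np.linalg.inv(skin_cov)

def is_skin(pixel, radius=3.0):
    """Label a pixel 'skin' if it lies within a Mahalanobis radius
    of the skin-colour Gaussian."""
    d = pixel - skin_mean
    return float(d @ cov_inv @ d) <= radius ** 2

skin_px = np.array([185.0, 128.0, 115.0])
blue_px = np.array([30.0, 60.0, 200.0])
```

In the paper's pipeline this per-pixel score would seed the graph-cut unary terms; the closed contour of connected skin pixels then delimits the tattoo region.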

  11. Segmentation editing improves efficiency while reducing inter-expert variation and maintaining accuracy for normal brain tissues in the presence of space-occupying lesions

    PubMed Central

    Deeley, MA; Chen, A; Datteri, R; Noble, J; Cmelak, A; Donnelly, EF; Malcolm, A; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Dawant, BM

    2013-01-01

    Image segmentation has become a vital and often rate-limiting step in modern radiotherapy treatment planning. In recent years the pace and scope of algorithm development, and even introduction into the clinic, have far exceeded evaluative studies. In this work we build upon our previous evaluation of a registration driven segmentation algorithm in the context of 8 expert raters and 20 patients who underwent radiotherapy for large space-occupying tumors in the brain. In this work we tested four hypotheses concerning the impact of manual segmentation editing in a randomized single-blinded study. We tested these hypotheses on the normal structures of the brainstem, optic chiasm, eyes and optic nerves using the Dice similarity coefficient, volume, and signed Euclidean distance error to evaluate the impact of editing on inter-rater variance and accuracy. Accuracy analyses relied on two simulated ground truth estimation methods: STAPLE and a novel implementation of probability maps. The experts were presented with automatic, their own, and their peers’ segmentations from our previous study to edit. We found, independent of source, editing reduced inter-rater variance while maintaining or improving accuracy and improving efficiency with at least 60% reduction in contouring time. In areas where raters performed poorly contouring from scratch, editing of the automatic segmentations reduced the prevalence of total anatomical miss from approximately 16% to 8% of the total slices contained within the ground truth estimations. These findings suggest that contour editing could be useful for consensus building such as in developing delineation standards, and that both automated methods and even perhaps less sophisticated atlases could improve efficiency and accuracy while reducing inter-rater variance. PMID:23685866
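The Dice similarity coefficient used as the primary overlap metric in this study is standard and easy to state: twice the intersection over the sum of the two mask sizes. A minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1 = identical)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy masks: a 6x6 square vs its lower 4 rows
m1 = np.zeros((10, 10), dtype=bool); m1[2:8, 2:8] = True   # 36 pixels
m2 = np.zeros((10, 10), dtype=bool); m2[4:8, 2:8] = True   # 24 pixels
d = dice(m1, m2)   # overlap = 24, so Dice = 48 / 60 = 0.8
```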

  12. Copy number aberrations landscape of a breast tumor, connection with the efficiency of neoadjuvant chemotherapy

    NASA Astrophysics Data System (ADS)

    Ibragimova, M. K.; Tsyganov, M. M.; Slonimskaya, E. M.; Litviakov, N. V.

    2017-09-01

    The research involved 80 patients diagnosed with breast cancer (BC). Each patient's tumor biopsy material was sampled before treatment. We studied the tumor tissue using the CytoScan HD Array (Affymetrix, USA) microarray to evaluate the CNA landscape. We studied the frequency of segmental and numerical CNA occurrence and their association with the efficiency of neoadjuvant chemotherapy (NAC). We found the largest number of amplifications (with frequency over 60%) in the following loci: 1q32.1, 1q32.3, 1q42.13, 1q42.2, 1q43. The highest frequency of deletions (in more than 58% of the patients) was found in these loci: 16q21, 16q23.2, 16q23.3, 17p12, 17p13.1. However, we also found loci with a complete absence of segmental chromosome anomalies. We observed trisomy most frequently in chromosomes 7, 8, 12, and 17, and monosomy in chromosomes 3, 4, 9, 11, 18, and X. We demonstrated a connection between a high frequency of cytobands with CNA in the patients' tumors and the efficiency of NAC. We also identified the cytobands whose CNAs are linked to the response to NAC.

  13. Development and validation of segmentation and interpolation techniques in sinograms for metal artifact suppression in CT.

    PubMed

    Veldkamp, Wouter J H; Joemai, Raoul M S; van der Molen, Aart J; Geleijns, Jacob

    2010-02-01

    Metal prostheses cause artifacts in computed tomography (CT) images. The purpose of this work was to design an efficient and accurate metal segmentation in raw data to achieve artifact suppression and to improve CT image quality for patients with metal hip or shoulder prostheses. The artifact suppression technique incorporates two steps: metal object segmentation in raw data and replacement of the segmented region by new values using an interpolation scheme, followed by addition of the scaled metal signal intensity. Segmentation of metal is performed directly in sinograms, making it efficient and different from current methods that perform segmentation in reconstructed images in combination with Radon transformations. Metal signal segmentation is achieved by using a Markov random field model (MRF). Three interpolation methods are applied and investigated. To provide a proof of concept, CT data of five patients with metal implants were included in the study, as well as CT data of a PMMA phantom with Teflon, PVC, and titanium inserts. Accuracy was determined quantitatively by comparing mean Hounsfield (HU) values and standard deviation (SD) as a measure of distortion in phantom images with titanium (original and suppressed) and without titanium insert. Qualitative improvement was assessed by comparing uncorrected clinical images with artifact suppressed images. Artifacts in CT data of a phantom and five patients were automatically suppressed. The general visibility of structures clearly improved. In phantom images, the technique showed reduced SD close to the SD for the case where titanium was not inserted, indicating improved image quality. HU values in corrected images were different from expected values for all interpolation methods. Subtle differences between interpolation methods were found. The new artifact suppression design is efficient, for instance, in terms of preserving spatial resolution, as it is applied directly to original raw data. It successfully reduced artifacts in CT images of five patients and in phantom images. Sophisticated interpolation methods are needed to obtain reliable HU values close to the prosthesis.
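The replacement step, filling the segmented metal trace in each projection with interpolated values, can be sketched with the simplest of the interpolation schemes the paper compares: linear interpolation across the masked detector bins of each projection row. This is an illustrative sketch only, not the paper's MRF segmentation or its more sophisticated interpolants:

```python
import numpy as np

def inpaint_sinogram(sino, metal_mask):
    """Replace metal-trace bins in each projection row by linear
    interpolation from the surrounding unaffected detector bins."""
    out = sino.copy()
    bins = np.arange(sino.shape[1])
    for row, mask in zip(out, metal_mask):
        if mask.any():
            row[mask] = np.interp(bins[mask], bins[~mask], row[~mask])
    return out

sino = np.tile(np.linspace(0.0, 1.0, 11), (4, 1))        # toy sinogram
mask = np.zeros_like(sino, dtype=bool); mask[:, 4:7] = True  # "metal" trace
fixed = inpaint_sinogram(sino, mask)
```

The corrected sinogram is then reconstructed as usual, with the scaled metal signal added back afterwards.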

  14. Improving Spleen Volume Estimation via Computer Assisted Segmentation on Clinically Acquired CT Scans

    PubMed Central

    Xu, Zhoubing; Gertz, Adam L.; Burke, Ryan P.; Bansal, Neil; Kang, Hakmook; Landman, Bennett A.; Abramson, Richard G.

    2016-01-01

    OBJECTIVES Multi-atlas fusion is a promising approach for computer-assisted segmentation of anatomical structures. The purpose of this study was to evaluate the accuracy and time efficiency of multi-atlas segmentation for estimating spleen volumes on clinically-acquired CT scans. MATERIALS AND METHODS Under IRB approval, we obtained 294 deidentified (HIPAA-compliant) abdominal CT scans on 78 subjects from a recent clinical trial. We compared five pipelines for obtaining splenic volumes: Pipeline 1–manual segmentation of all scans, Pipeline 2–automated segmentation of all scans, Pipeline 3–automated segmentation of all scans with manual segmentation for outliers on a rudimentary visual quality check, Pipelines 4 and 5–volumes derived from a unidimensional measurement of craniocaudal spleen length and three-dimensional splenic index measurements, respectively. Using Pipeline 1 results as ground truth, the accuracy of Pipelines 2–5 (Dice similarity coefficient [DSC], Pearson correlation, R-squared, and percent and absolute deviation of volume from ground truth) were compared for point estimates of splenic volume and for change in splenic volume over time. Time cost was also compared for Pipelines 1–5. RESULTS Pipeline 3 was dominant in terms of both accuracy and time cost. With a Pearson correlation coefficient of 0.99, average absolute volume deviation 23.7 cm3, and 1 minute per scan, Pipeline 3 yielded the best results. The second-best approach was Pipeline 5, with a Pearson correlation coefficient 0.98, absolute deviation 46.92 cm3, and 1 minute 30 seconds per scan. Manual segmentation (Pipeline 1) required 11 minutes per scan. CONCLUSION A computer-automated segmentation approach with manual correction of outliers generated accurate splenic volumes with reasonable time efficiency. PMID:27519156
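The agreement statistics reported for the volume pipelines (Pearson correlation and absolute volume deviation) are easy to reproduce. A sketch with invented volumes, purely to show the computation:

```python
import numpy as np

def volume_agreement(auto_vol, truth_vol):
    """Pearson correlation, mean absolute deviation (cm^3), and mean
    percent deviation between automated and ground-truth volumes."""
    auto_vol = np.asarray(auto_vol, dtype=float)
    truth_vol = np.asarray(truth_vol, dtype=float)
    r = np.corrcoef(auto_vol, truth_vol)[0, 1]
    abs_dev = np.abs(auto_vol - truth_vol).mean()
    pct_dev = (100.0 * np.abs(auto_vol - truth_vol) / truth_vol).mean()
    return r, abs_dev, pct_dev

# Made-up spleen volumes in cm^3, for illustration only
truth = [220.0, 480.0, 310.0, 150.0]
auto = [230.0, 470.0, 325.0, 140.0]
r, abs_dev, pct_dev = volume_agreement(auto, truth)
```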

  15. Sequence and structure determinants of Drosophila Hsp70 mRNA translation: 5'UTR secondary structure specifically inhibits heat shock protein mRNA translation.

    PubMed Central

    Hess, M A; Duncan, R F

    1996-01-01

    Preferential translation of Drosophila heat shock protein 70 (Hsp70) mRNA requires only the 5'-untranslated region (5'-UTR). The sequence of this region suggests that it has relatively little secondary structure, which may facilitate efficient protein synthesis initiation. To determine whether minimal 5'-UTR secondary structure is required for preferential translation during heat shock, the effect of introducing stem-loops into the Hsp70 mRNA 5'-UTR was measured. Stem-loops of -11 kcal/mol abolished translation during heat shock, but did not reduce translation in non-heat shocked cells. A -22 kcal/mol stem-loop was required to comparably inhibit translation during growth at normal temperatures. To investigate whether specific sequence elements are also required for efficient preferential translation, deletion and mutation analyses were conducted in a truncated Hsp70 5'-UTR containing only the cap-proximal and AUG-proximal segments. Linker-scanner mutations in the cap-proximal segment (+1 to +37) did not impair translation. Re-ordering the segments reduced mRNA translational efficiency by 50%. Deleting the AUG-proximal segment severely inhibited translation. A 5'-extension of the full-length leader specifically impaired heat shock translation. These results indicate that heat shock reduces the capacity to unwind 5'-UTR secondary structure, allowing only mRNAs with minimal 5'-UTR secondary structure to be efficiently translated. A function for specific sequences is also suggested. PMID:8710519

  16. Charge Collection Efficiency in a segmented semiconductor detector interstrip region

    NASA Astrophysics Data System (ADS)

    Alarcon-Diez, V.; Vickridge, I.; Jakšić, M.; Grilj, V.; Schmidt, B.; Lange, H.

    2017-09-01

    Charged particle semiconductor detectors have been used in Ion Beam Analysis (IBA) for over four decades without great changes in either design or fabrication. However, one area where improvement is desirable is to increase the detector solid angle so as to improve spectrum statistics for a given incident beam fluence. This would allow the use of very low fluences, opening the way, for example, to increased time resolution in real-time RBS or to analysis of materials that are highly sensitive to beam damage. In order to achieve this goal without incurring the costs of degraded resolution due to kinematic broadening or large detector capacitance, a single-chip segmented detector (SEGDET) was designed and built within the SPIRIT EU infrastructure project. In this work we present the Charge Collection Efficiency (CCE) in the region between two adjacent segments, focusing on the interstrip zone. Microbeam Ion Beam Induced Charge (IBIC) measurements with different ion masses and energies were used to perform X-Y mapping of CCE as a function of detector operating conditions (bias voltage, detector housing options, and guard ring configuration). We show the CCE in the edge region of the active area and have also mapped the charge from the interstrip region, shared between adjacent segments. The results indicate that the electrical extent of the interstrip region is very close to the physical extent of the interstrip and guard ring structure, with interstrip impacts contributing very little to the complete spectrum. The interstrip contributions to the spectra that do occur can be substantially reduced by an offline anti-coincidence criterion applied to list mode data, which should also be easy to implement directly in the data acquisition software.
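The offline anti-coincidence criterion mentioned at the end of this record can be sketched directly: reject any list-mode event that has a same-time hit in a different segment, since such pairs are charge-sharing candidates. The event-tuple layout below is hypothetical, chosen only for illustration:

```python
# Hypothetical list-mode data: (timestamp, segment_id, energy) tuples.

def anti_coincidence(events, window=1):
    """Keep only events with no coincident hit in another segment."""
    kept = []
    for i, (t, seg, e) in enumerate(events):
        shared = any(abs(t - t2) <= window and seg2 != seg
                     for j, (t2, seg2, e2) in enumerate(events) if j != i)
        if not shared:
            kept.append((t, seg, e))
    return kept

events = [(100, 0, 5.5), (100, 1, 0.4),   # coincident pair -> rejected
          (250, 0, 5.9), (400, 1, 5.8)]   # isolated hits   -> kept
clean = anti_coincidence(events)
```

A production implementation would sort by timestamp and scan a sliding window rather than use this quadratic loop, but the filtering criterion is the same.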

  17. Brain tumor segmentation using holistically nested neural networks in MRI images.

    PubMed

    Zhuge, Ying; Krauze, Andra V; Ning, Holly; Cheng, Jason Y; Arora, Barbara C; Camphausen, Kevin; Miller, Robert W

    2017-10-01

    Gliomas are rapidly progressive, neurologically devastating, largely fatal brain tumors. Magnetic resonance imaging (MRI) is a widely used technique employed in the diagnosis and management of gliomas in clinical practice. MRI is also the standard imaging modality used to delineate the brain tumor target as part of treatment planning for the administration of radiation therapy. Despite more than 20 yr of research and development, computational brain tumor segmentation in MRI images remains a challenging task. We are presenting a novel method of automatic image segmentation based on holistically nested neural networks that could be employed for brain tumor segmentation of MRI images. Two preprocessing techniques were applied to MRI images. The N4ITK method was employed for correction of bias field distortion. A novel landmark-based intensity normalization method was developed so that tissue types have a similar intensity scale in images of different subjects for the same MRI protocol. The holistically nested neural networks (HNN), which extend from the convolutional neural networks (CNN) with a deep supervision through an additional weighted-fusion output layer, was trained to learn the multiscale and multilevel hierarchical appearance representation of the brain tumor in MRI images and was subsequently applied to produce a prediction map of the brain tumor on test images. Finally, the brain tumor was obtained through an optimum thresholding on the prediction map. The proposed method was evaluated on both the Multimodal Brain Tumor Image Segmentation (BRATS) Benchmark 2013 training datasets, and clinical data from our institute. A dice similarity coefficient (DSC) and sensitivity of 0.78 and 0.81 were achieved on 20 BRATS 2013 training datasets with high-grade gliomas (HGG), based on a two-fold cross-validation. The HNN model built on the BRATS 2013 training data was applied to ten clinical datasets with HGG from a locally developed database. DSC and sensitivity of 0.83 and 0.85 were achieved. A quantitative comparison indicated that the proposed method outperforms the popular fully convolutional network (FCN) method. In terms of efficiency, the proposed method took around 10 h for training with 50,000 iterations, and approximately 30 s for testing of a typical MRI image in the BRATS 2013 dataset with a size of 160 × 216 × 176, using a DELL PRECISION workstation T7400, with an NVIDIA Tesla K20c GPU. An effective brain tumor segmentation method for MRI images based on a HNN has been developed. The high level of accuracy and efficiency make this method practical in brain tumor segmentation. It may play a crucial role in both brain tumor diagnostic analysis and in the treatment planning of radiation therapy. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
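Landmark-based intensity normalization, as in this record's preprocessing step, maps each image's intensity landmarks onto a common scale. The paper's landmark scheme is its own; the sketch below uses generic percentile landmarks with piecewise-linear mapping, which is the standard form of the idea:

```python
import numpy as np

def landmark_normalize(img, ref_landmarks, pcts=(1, 25, 50, 75, 99)):
    """Map an image's intensity percentiles onto reference landmarks by
    piecewise-linear interpolation, so that the same tissue types land on
    a similar intensity scale across subjects."""
    src = np.percentile(img, pcts)          # this image's landmarks
    return np.interp(img, src, ref_landmarks)

rng = np.random.default_rng(2)
img = rng.normal(600.0, 120.0, (64, 64))        # toy MR slice
ref = np.array([0.0, 25.0, 50.0, 75.0, 100.0])  # target scale
norm = landmark_normalize(img, ref)
```

Values below the lowest or above the highest landmark are clamped to the reference endpoints by `np.interp`.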

  18. 16,000-rpm Interior Permanent Magnet Reluctance Machine with Brushless Field Excitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, John S; Burress, Timothy A; Lee, Seong T

    2008-01-01

    This paper introduces a high-speed brushless field excitation (BFE) machine that offers high torque per ampere per core length at low speed and weakened flux at high speed. Lower core losses at high speeds are attained by reducing the field excitation. Safety and reliability are increased by weakening the field when a winding short-circuit fault occurs. For a high-speed motor, the bridges that link the rotor punching segments together must be thickened for mechanical integrity; BFE can ensure sufficient rotor flux when needed. The projected efficiency map, including losses of the excitation coils, confirms the advantage of this technology.

  19. Deep learning for medical image segmentation - using the IBM TrueNorth neurosynaptic system

    NASA Astrophysics Data System (ADS)

    Moran, Steven; Gaonkar, Bilwaj; Whitehead, William; Wolk, Aidan; Macyszyn, Luke; Iyer, Subramanian S.

    2018-03-01

    Deep convolutional neural networks have found success in semantic image segmentation tasks in computer vision and medical imaging. These algorithms are executed on conventional von Neumann processor architectures or GPUs. This is suboptimal. Neuromorphic processors that replicate the structure of the brain are better-suited to train and execute deep learning models for image segmentation by relying on massively-parallel processing. However, given that they closely emulate the human brain, on-chip hardware and digital memory limitations also constrain them. Adapting deep learning models to execute image segmentation tasks on such chips requires specialized training and validation. In this work, we demonstrate for the first time spinal image segmentation performed using a deep learning network implemented on the neuromorphic hardware of the IBM TrueNorth Neurosynaptic System, and we validate the performance of our network by comparing it to human-generated segmentations of spinal vertebrae and disks. To achieve this on neuromorphic hardware, the training model constrains the coefficients of individual neurons to {-1,0,1} using the Energy Efficient Deep Neuromorphic (EEDN) networks training algorithm. Given the 1 million neurons and 256 million synapses, the scale and size of the neural network implemented by the IBM TrueNorth allows us to execute the requisite mapping between segmented images and non-uniform intensity MR images >20 times faster than on a GPU-accelerated network and using <0.1 W. This speed and efficiency implies that a trained neuromorphic chip can be deployed in intra-operative environments where real-time medical image segmentation is necessary.
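The ternary weight constraint mentioned here, forcing coefficients into {-1, 0, 1}, is typically applied by thresholded sign quantization. A minimal sketch of that quantization step (the threshold value is an illustrative assumption, not taken from the EEDN algorithm):

```python
import numpy as np

def ternarize(w, threshold=0.05):
    """Constrain weights to {-1, 0, 1}: weights with magnitude below the
    threshold snap to 0, the rest to their sign."""
    q = np.sign(w)
    q[np.abs(w) < threshold] = 0
    return q.astype(int)

w = np.array([0.8, -0.3, 0.01, -0.02, 0.06])
q = ternarize(w)
```

Training schemes like EEDN keep full-precision shadow weights for gradient updates and apply a quantization of this kind in the forward pass.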

  20. User-guided segmentation for volumetric retinal optical coherence tomography images

    PubMed Central

    Yin, Xin; Chao, Jennifer R.; Wang, Ruikang K.

    2014-01-01

    Despite the existence of automatic segmentation techniques, trained graders still rely on manual segmentation to provide retinal layers and features from clinical optical coherence tomography (OCT) images for accurate measurements. To bridge the gap between this time-consuming need of manual segmentation and currently available automatic segmentation techniques, this paper proposes a user-guided segmentation method to perform the segmentation of retinal layers and features in OCT images. With this method, by interactively navigating three-dimensional (3-D) OCT images, the user first manually defines user-defined (or sketched) lines at regions where the retinal layers appear very irregular for which the automatic segmentation method often fails to provide satisfactory results. The algorithm is then guided by these sketched lines to trace the entire 3-D retinal layer and anatomical features by the use of novel layer and edge detectors that are based on robust likelihood estimation. The layer and edge boundaries are finally obtained to achieve segmentation. Segmentation of retinal layers in mouse and human OCT images demonstrates the reliability and efficiency of the proposed user-guided segmentation method. PMID:25147962

  2. 3D prostate TRUS segmentation using globally optimized volume-preserving prior.

    PubMed

    Qiu, Wu; Rajchl, Martin; Guo, Fumin; Sun, Yue; Ukwatta, Eranga; Fenster, Aaron; Yuan, Jing

    2014-01-01

    An efficient and accurate segmentation of 3D transrectal ultrasound (TRUS) images plays an important role in the planning and treatment of the practical 3D TRUS guided prostate biopsy. However, a meaningful segmentation of 3D TRUS images tends to suffer from US speckles, shadowing and missing edges, etc., which makes it a challenging task to delineate the correct prostate boundaries. In this paper, we propose a novel convex optimization based approach to extracting the prostate surface from the given 3D TRUS image, while preserving a new global volume-size prior. We, especially, study the proposed combinatorial optimization problem by convex relaxation and introduce its dual continuous max-flow formulation with the new bounded flow conservation constraint, which results in an efficient numerical solver implemented on GPUs. Experimental results using 12 patient 3D TRUS images show that the proposed approach, while preserving the volume-size prior, yielded a mean DSC of 89.5% +/- 2.4%, a MAD of 1.4 +/- 0.6 mm, a MAXD of 5.2 +/- 3.2 mm, and a VD of 7.5% +/- 6.2% in ~1 minute, demonstrating the advantages of both accuracy and efficiency. In addition, the low standard deviation of the segmentation accuracy shows a good reliability of the proposed approach.

  3. Ultra-short beam expander with segmented curvature control: the emergence of a semi-lens

    DOE PAGES

    Abbaslou, Siamak; Gatdula, Robert; Lu, Ming; ...

    2017-01-01

    We introduce direct curvature control in designing a segmented beam expander, and explore novel design possibilities for ultra-compact beam expanders. Assisted by the particle swarm optimization algorithm, we search for an optimal curvature-controlled multi-segment taper that maintains width continuity. Counterintuitively, the optimization yields a structure with abrupt width discontinuity and width compression features. Through spatial phase and parameterized analysis, a semi-lens feature is revealed that helps to flatten the wavefront at the output end for higher coupling efficiency. Such functionality cannot be achieved by normal tapers in a short distance. The structure is fabricated and characterized experimentally. By a figure of merit that accounts for expansion ratio, length, and efficiency, this structure outperforms an adiabatic taper by 9 times.

  4. An interactive medical image segmentation framework using iterative refinement.

    PubMed

    Kalshetti, Pratik; Bundele, Manas; Rahangdale, Parag; Jangra, Dinesh; Chattopadhyay, Chiranjoy; Harit, Gaurav; Elhence, Abhay

    2017-04-01

    Segmentation is often performed on medical images to identify diseases in clinical evaluation, and it has therefore become a major research area. Conventional image segmentation techniques are unable to provide satisfactory results for medical images because such images contain irregularities and must be pre-processed before segmentation. To obtain a method well suited for medical image segmentation, we propose MIST (Medical Image Segmentation Tool), a two-stage algorithm. The first stage automatically generates a binary marker image of the region of interest using mathematical morphology. This marker serves as the mask image for the second stage, which uses GrabCut to yield an efficient segmented result. The obtained result can be further refined through user interaction in the proposed Graphical User Interface (GUI). Experimental results show that the proposed method is accurate and provides satisfactory segmentation results with minimal user interaction on medical as well as natural images. Copyright © 2017 Elsevier Ltd. All rights reserved.
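
    A rough sketch of the first (marker-generation) stage described above, using thresholding plus mathematical morphology from SciPy. The threshold value and structuring element are illustrative assumptions; in the actual tool the resulting marker would then be handed to GrabCut in the second stage:

```python
import numpy as np
from scipy import ndimage

def make_marker(image, thresh):
    """Stage-1 sketch: threshold the region of interest, clean it with
    morphological opening, and fill holes to get a solid binary marker."""
    mask = image > thresh
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))  # drop specks
    return ndimage.binary_fill_holes(mask)                          # solid marker

# Toy image: a bright square "lesion" plus an isolated noise pixel
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
img[2, 2] = 1.0
marker = make_marker(img, 0.5)
```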

  5. A fully automatic three-step liver segmentation method on LDA-based probability maps for multiple contrast MR images.

    PubMed

    Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf

    2010-07-01

    Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. Numerous approaches exist for automatic 3D liver segmentation on computed tomography data sets and have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information of different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps that are then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based on a modified region-growing approach and an additional thresholding technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied to both normal and fat-accumulated liver tissue. Copyright 2010 Elsevier Inc. All rights reserved.
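
    The LDA probability-map idea can be sketched with scikit-learn on mock multi-channel intensities. The two channels, class means, and noise level below are invented for illustration; the paper applies multiclass LDA to real MR weightings:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Invented two-channel MR intensities for liver vs. background voxels
liver      = rng.normal([2.0, 1.0], 0.3, size=(200, 2))
background = rng.normal([0.0, 0.0], 0.3, size=(200, 2))
X = np.vstack([liver, background])
y = np.array([1] * 200 + [0] * 200)

lda = LinearDiscriminantAnalysis().fit(X, y)
prob_liver = lda.predict_proba(X)[:, 1]   # per-voxel liver probability "map"
```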

  6. Efficiency of geometric designs of flexible solar panels: mathematical simulation

    NASA Astrophysics Data System (ADS)

    Marciniak, Malgorzata; Hassebo, Yasser; Enriquez-Torres, Delfino; Serey-Roman, Maria Ignacia

    2017-09-01

    The purpose of this study is to analyze various surfaces of flexible solar panels and compare them mathematically to traditional flat panels. We evaluated the efficiency based on integral formulas that involve flux. We performed calculations for flat panels in different positions, a cylindrical panel, conical panels with various opening angles, and segments of a spherical panel. Our results indicate that the best efficiency per unit area belongs to particular segments of spherically-shaped panels. In addition, we calculated the optimal opening angle of a cone-shaped panel that maximizes the annual accumulation of solar radiation per unit area. The considered shapes are presented below with a suggestion for connections of the cells.
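
    The flux comparison can be reproduced numerically for the simplest case. A sketch under simplified assumptions (a half-day circular sun arc in a single plane, no atmospheric attenuation; all quantities in arbitrary units):

```python
import numpy as np

def daily_flux_flat(tilt, n_steps=2000):
    """Integrate the cosine-law flux density max(0, n·s) over a half-day
    sun arc in the x-z plane, for a flat panel tilted by `tilt` radians
    toward the arc (per unit area, arbitrary units)."""
    t = np.linspace(0.0, np.pi, n_steps)             # sun sweeps horizon to horizon
    sun = np.stack([np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)
    n = np.array([np.sin(tilt), 0.0, np.cos(tilt)])  # panel normal
    f = np.maximum(0.0, sun @ n)                     # cosine-law flux density
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))  # trapezoid rule

horizontal = daily_flux_flat(0.0)
tilted = daily_flux_flat(np.pi / 4)
```

In this simplified geometry a horizontal panel accumulates ∫₀^π sin t dt = 2 per unit area, while a panel tilted by τ toward the arc accumulates 1 + cos τ, so the tilted panel collects less over the full day.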

  7. Retinal photoreceptors and visual pigments in Boa constrictor imperator.

    PubMed

    Sillman, A J; Johnson, J L; Loew, E R

    2001-09-01

    The photoreceptors of Boa constrictor, a boid snake of the subfamily Boinae, were examined with scanning electron microscopy and microspectrophotometry. The retina of B. constrictor is duplex but highly dominated by rods, with cones comprising 11% of the photoreceptor population. The rather tightly packed rods have relatively long outer segments with proximal ends that are somewhat tapered. There are two morphologically distinct single cones. The most common cone by far has a large inner segment and a relatively stout outer segment. The second cone, seen only infrequently, has a substantially smaller inner segment and a finer outer segment. The visual pigments of B. constrictor are virtually identical to those of the pythonine boid, Python regius. Three different visual pigments are present, all based on vitamin A(1). The visual pigment of the rods has a wavelength of peak absorbance (lambda(max)) at 495 +/- 2 nm. The visual pigment of the more common, large cone has a lambda(max) at 549 +/- 1 nm. The small, rare cone contains a visual pigment with lambda(max) at 357 +/- 2 nm, providing the snake with sensitivity in the ultraviolet. We suggest that B. constrictor might employ UV sensitivity to locate conspecifics and/or to improve hunting efficiency. The data indicate that wavelength discrimination above 430 nm would not be possible without some input from the rods. Copyright 2001 Wiley-Liss, Inc.

  8. FISH Finder: a high-throughput tool for analyzing FISH images

    PubMed Central

    Shirley, James W.; Ty, Sereyvathana; Takebayashi, Shin-ichiro; Liu, Xiuwen; Gilbert, David M.

    2011-01-01

    Motivation: Fluorescence in situ hybridization (FISH) is used to study the organization and the positioning of specific DNA sequences within the cell nucleus. Analyzing the data from FISH images is a tedious process that invokes an element of subjectivity. Automated FISH image analysis offers savings in time as well as gaining the benefit of objective data analysis. While several FISH image analysis software tools have been developed, they often use a threshold-based segmentation algorithm for nucleus segmentation. As fluorescence signal intensities can vary significantly from experiment to experiment, from cell to cell, and within a cell, threshold-based segmentation is inflexible and often insufficient for automatic image analysis, leading to additional manual segmentation and potential subjective bias. To overcome these problems, we developed a graphical software tool called FISH Finder to automatically analyze FISH images that vary significantly. By posing nucleus segmentation as a classification problem, a compound Bayesian classifier is employed so that contextual information is utilized, resulting in reliable classification and boundary extraction. This makes it possible to analyze FISH images efficiently and objectively without adjustment of input parameters. Additionally, FISH Finder was designed to analyze the distances between differentially stained FISH probes. Availability: FISH Finder is a standalone, platform-independent MATLAB application. The program is freely available from: http://code.google.com/p/fishfinder/downloads/list Contact: gilbert@bio.fsu.edu PMID:21310746
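
    Posing segmentation as per-pixel classification can be illustrated with a simple Gaussian naive Bayes stand-in. The paper's compound Bayesian classifier additionally exploits contextual information, which this sketch omits; the features and class means below are invented:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(5)
# Invented per-pixel features: [normalized intensity, local contrast]
nucleus    = rng.normal([0.8, 0.6], 0.1, size=(300, 2))
background = rng.normal([0.2, 0.2], 0.1, size=(300, 2))
X = np.vstack([nucleus, background])
y = np.repeat([1, 0], 300)

nb = GaussianNB().fit(X, y)
pred = nb.predict(X)   # 1 = nucleus pixel, 0 = background pixel
```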

  9. Precise segmentation of multiple organs in CT volumes using learning-based approach and information theory.

    PubMed

    Lu, Chao; Zheng, Yefeng; Birkbeck, Neil; Zhang, Jingdan; Kohlberger, Timo; Tietjen, Christian; Boettger, Thomas; Duncan, James S; Zhou, S Kevin

    2012-01-01

    In this paper, we present a novel method that incorporates information theory into a learning-based approach for automatic and accurate pelvic organ segmentation (including the prostate, bladder and rectum). We target 3D CT volumes that are generated using different scanning protocols (e.g., contrast and non-contrast, with and without implant in the prostate, various resolutions and positions), and the volumes come from largely diverse sources (e.g., disease in different organs). Three key ingredients are combined to solve this challenging segmentation problem. First, marginal space learning (MSL) is applied to efficiently and effectively localize the multiple organs in the largely diverse CT volumes. Second, learning-based techniques with steerable features are applied for robust boundary detection, enabling the handling of highly heterogeneous texture patterns. Third, a novel information theoretic scheme is incorporated into the boundary inference process: the Jensen-Shannon divergence further drives the mesh to the best fit of the image, thus improving segmentation performance. The proposed approach is tested on a challenging dataset containing 188 volumes from diverse sources. Our approach not only produces excellent segmentation accuracy, but also runs about eighty times faster than previous state-of-the-art solutions. The proposed method can be applied to CT images to provide visual guidance to physicians during computer-aided diagnosis, treatment planning and image-guided radiotherapy to treat cancers in the pelvic region.
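
    The Jensen-Shannon divergence used to drive the boundary inference is the symmetrized KL divergence to the mixture distribution. A minimal sketch (the eps smoothing is an implementation detail of this sketch, not from the paper):

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (in nats) between two discrete
    distributions: the symmetrized KL divergence to their mixture m."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log((a + eps) / (b + eps))))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

d_same     = js_divergence([0.5, 0.5], [0.5, 0.5])  # identical histograms
d_disjoint = js_divergence([1.0, 0.0], [0.0, 1.0])  # maximally different
```

Unlike the plain KL divergence, the JS divergence is symmetric and bounded (by ln 2 in nats), which makes it convenient as a fitting score.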

  10. An Automatic Method for Geometric Segmentation of Masonry Arch Bridges for Structural Engineering Purposes

    NASA Astrophysics Data System (ADS)

    Riveiro, B.; DeJong, M.; Conde, B.

    2016-06-01

    Despite the tremendous advantages of laser scanning technology for the geometric characterization of built constructions, there are important limitations preventing more widespread implementation in the structural engineering domain. Even though the technology provides extensive and accurate information to perform structural assessment and health monitoring, many practitioners are resistant to it due to the processing times involved. Thus, new methods that can automatically process LiDAR data and subsequently provide an automatic, organized interpretation are required. This paper presents a new method for fully automated point cloud segmentation of masonry arch bridges. The method efficiently creates segmented, spatially related and organized point clouds, each containing the relevant geometric data for a particular component (pier, arch, spandrel wall, etc.) of the structure. The segmentation procedure comprises a heuristic approach for the separation of different vertical walls, after which image processing tools adapted to voxel structures allow the efficient segmentation of the main structural elements of the bridge. The proposed methodology provides the essential processed data required for structural assessment of masonry arch bridges based on geometric anomalies. The method is validated using a representative sample of masonry arch bridges in Spain.

  11. Rapid Maturation of Edge Sensor Technology and Potential Application in Large Space Telescopes with Segmented Primary Mirrors

    NASA Technical Reports Server (NTRS)

    Montgomery, Edward E., IV; Smith, W. Scott (Technical Monitor)

    2002-01-01

    This paper explores the history and results of the last two years' efforts to transition inductive edge sensor technology from Technology Readiness Level 2 to Technology Readiness Level 6. Both technical and programmatic challenges were overcome in the design, fabrication, test, and installation of over a thousand sensors making up the Segment Alignment Maintenance System (SAMS) for the 91-segment, 9.2-meter Hobby-Eberly Telescope (HET). The integration of these sensors with the control system is discussed, along with the serendipitous leverage they provided for both initial alignment and operational maintenance. The experience yielded important insights into the fundamental motion mechanics of large segmented mirrors, the relative importance of the various sources of misalignment errors, and the efficient conduct of a program to mature the technology to higher readiness levels. Unanticipated factors required the team to develop new implementation strategies for the edge sensor information, which enabled major simplifications of the segmented mirror controller design. The resulting increase in the science efficiency of HET is shown. Finally, the on-going effort to complete the maturation of inductive edge sensors by delivering space-qualified versions for future infrared (IR) space telescopes is described.

  12. Random forest classification of large volume structures for visuo-haptic rendering in CT images

    NASA Astrophysics Data System (ADS)

    Mastmeyer, Andre; Fortmeier, Dirk; Handels, Heinz

    2016-03-01

    For patient-specific voxel-based visuo-haptic rendering of CT scans of the liver area, the fully automatic segmentation of large volume structures such as skin, soft tissue, lungs and intestine (risk structures) is important. Using a machine learning based approach, several existing segmentations from 10 segmented gold-standard patients are learned by random decision forests, individually and collectively. The core of this paper is feature selection and the application of the learned classifiers to a new patient data set. In a leave-some-out cross-validation, the obtained full volume segmentations are compared to the gold-standard segmentations of the untrained patients. The proposed classifiers use a multi-dimensional feature space to estimate the hidden truth, instead of relying on clinical standard threshold and connectivity based methods. The results of our efficient whole-body section classification are multi-label maps of the considered tissues. For visuo-haptic simulation, other small volume structures would have to be segmented additionally; we also take a first look at such structures (liver vessels). In an experimental leave-some-out study of 10 patients, the proposed method performs much more efficiently than state-of-the-art methods. In two variants of leave-some-out experiments we obtain best mean Dice ratios of 0.79, 0.97, 0.63 and 0.83 for skin, soft tissue, hard bone and risk structures. Liver structures are segmented with Dice 0.93 for the liver, 0.43 for blood vessels and 0.39 for bile vessels.
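
    The per-voxel random-forest labeling can be sketched with scikit-learn. The three-dimensional feature vectors and tissue labels below are invented stand-ins for the paper's feature space:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Invented per-voxel features (e.g., intensity, local mean, gradient magnitude)
# for three stand-in tissue classes: 0 = skin, 1 = soft tissue, 2 = lung
means = ([0.0, 0.0, 0.0], [2.0, 1.0, 0.0], [4.0, 0.0, 1.0])
X = np.vstack([rng.normal(m, 0.4, size=(150, 3)) for m in means])
y = np.repeat([0, 1, 2], 150)

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
label_map = rf.predict(X)   # flattened multi-label map over the voxels
```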

  13. Automation-assisted cervical cancer screening in manual liquid-based cytology with hematoxylin and eosin staining.

    PubMed

    Zhang, Ling; Kong, Hui; Ting Chin, Chien; Liu, Shaoxiong; Fan, Xinmin; Wang, Tianfu; Chen, Siping

    2014-03-01

    Current automation-assisted technologies for screening cervical cancer mainly rely on automated liquid-based cytology slides with proprietary stains, which is not a cost-efficient approach for developing countries. In this article, we propose the first automation-assisted system to screen cervical cancer in manual liquid-based cytology (MLBC) slides with hematoxylin and eosin (H&E) stain, which is inexpensive and more applicable in developing countries. This system consists of three main modules: image acquisition, cell segmentation, and cell classification. First, an autofocusing scheme is proposed to find the global maximum of the focus curve by iteratively comparing image qualities at specific locations. On the autofocused images, multiway graph cut (GC) is performed globally on the a* channel enhanced image to obtain cytoplasm segmentation. The nuclei, especially abnormal nuclei, are robustly segmented by using GC adaptively and locally. Two concave-based approaches are integrated to split touching nuclei. To classify the segmented cells, features are selected and preprocessed to improve sensitivity, and contextual and cytoplasm information is introduced to improve specificity. Experiments on 26 consecutive image stacks demonstrated a dynamic autofocusing accuracy of 2.06 μm. On 21 cervical cell images with nonideal imaging conditions and pathology, our segmentation method achieved 93% accuracy for cytoplasm and an 87.3% F-measure for nuclei, both outperforming state-of-the-art methods in accuracy. Additional clinical trials showed that both the sensitivity (88.1%) and the specificity (100%) of our system are satisfactorily high. These results prove the feasibility of automation-assisted cervical cancer screening in MLBC slides with H&E stain, which is highly desirable in community health centers and small hospitals. © 2013 International Society for Advancement of Cytometry.
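
    The autofocusing idea of locating the focus-curve maximum by iteratively comparing image qualities can be sketched as a golden-section search over a unimodal focus measure. This is a simplified stand-in for the paper's scheme; the focus curve and peak position below are synthetic:

```python
def autofocus(focus_measure, lo, hi, tol=1e-3):
    """Golden-section search for the peak of a unimodal focus curve:
    iteratively compare image quality at two probe positions and shrink
    the bracket toward the better one."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if focus_measure(c) < focus_measure(d):
            a, c = c, d                      # peak lies in [c, b]
            d = a + phi * (b - a)
        else:
            b, d = d, c                      # peak lies in [a, d]
            c = b - phi * (b - a)
    return 0.5 * (a + b)

# Synthetic unimodal focus curve peaking at z = 2.5 (arbitrary units)
best_z = autofocus(lambda z: -(z - 2.5) ** 2, 0.0, 5.0)
```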

  14. Segmentation, Splitting, and Classification of Overlapping Bacteria in Microscope Images for Automatic Bacterial Vaginosis Diagnosis.

    PubMed

    Song, Youyi; He, Liang; Zhou, Feng; Chen, Siping; Ni, Dong; Lei, Baiying; Wang, Tianfu

    2017-07-01

    Quantitative analysis of bacterial morphotypes in microscope images plays a vital role in the diagnosis of bacterial vaginosis (BV) based on the Nugent score criterion. However, there are two main challenges for this task: 1) it is quite difficult to identify the bacterial regions due to their varied appearance, faint boundaries, heterogeneous shapes, low contrast with the background, and the small size of bacteria relative to the image; 2) numerous bacteria overlap each other, which hinders accurate analysis of individual bacteria. To overcome these challenges, we propose an automatic method to diagnose BV by quantitative analysis of bacterial morphotypes, consisting of a three-step approach: bacterial region segmentation, overlapping bacteria splitting, and bacterial morphotype classification. Specifically, we first segment the bacterial regions via saliency cut, which simultaneously evaluates global contrast and spatially weighted coherence, and a Markov random field model is then applied for high-quality unsupervised segmentation of small objects. We then decompose overlapping bacterial clumps into markers and associate pixels with markers to identify evidence for eventual splitting into individual bacteria. Next, we extract morphotype features from each bacterium to learn descriptors and characterize bacterial types using an Adaptive Boosting machine learning framework. Finally, BV diagnosis is performed based on the Nugent score criterion. Experiments demonstrate that our proposed method achieves high accuracy and computational efficiency in BV diagnosis.
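
    The decomposition of overlapping clumps into markers can be sketched with a distance transform, whose interior peaks give one marker per object. This is a common marker-extraction idea used here for illustration; the paper's actual splitting procedure is more elaborate, and the blob geometry and peak-window parameters below are invented:

```python
import numpy as np
from scipy import ndimage

# Two overlapping disk-shaped "bacteria" in a binary segmentation mask
yy, xx = np.mgrid[0:40, 0:60]
mask = ((yy - 20) ** 2 + (xx - 20) ** 2 <= 100) | \
       ((yy - 20) ** 2 + (xx - 36) ** 2 <= 100)

# Interior peaks of the distance transform act as one marker per bacterium
dist = ndimage.distance_transform_edt(mask)
peaks = (dist == ndimage.maximum_filter(dist, size=9)) & (dist > 5)
markers, n_markers = ndimage.label(peaks)
```

A watershed (or nearest-marker assignment) seeded with these markers would then complete the split of the clump into individual bacteria.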

  15. Quantification of regional fat volume in rat MRI

    NASA Astrophysics Data System (ADS)

    Sacha, Jaroslaw P.; Cockman, Michael D.; Dufresne, Thomas E.; Trokhan, Darren

    2003-05-01

    Multiple initiatives in the pharmaceutical and beauty care industries are directed at identifying therapies for weight management. Body composition measurements are critical for such initiatives. Imaging technologies that can be used to measure body composition noninvasively include DXA (dual energy x-ray absorptiometry) and MRI (magnetic resonance imaging). Unlike other approaches, MRI provides the ability to perform localized measurements of fat distribution. Several factors complicate the automatic delineation of fat regions and quantification of fat volumes. These include motion artifacts, field non-uniformity, brightness and contrast variations, chemical shift misregistration, and ambiguity in delineating anatomical structures. We have developed an approach to deal practically with those challenges. The approach is implemented in a package, the Fat Volume Tool, for automatic detection of fat tissue in MR images of the rat abdomen, including automatic discrimination between abdominal and subcutaneous regions. We suppress motion artifacts using masking based on detection of implicit landmarks in the images. Adaptive object extraction is used to compensate for intensity variations. This approach enables us to perform fat tissue detection and quantification in a fully automated manner. The package can also operate in manual mode, which can be used for verification of the automatic analysis or for performing supervised segmentation. In supervised segmentation, the operator has the ability to interact with the automatic segmentation procedures to touch-up or completely overwrite intermediate segmentation steps. The operator's interventions steer the automatic segmentation steps that follow. This improves the efficiency and quality of the final segmentation. Semi-automatic segmentation tools (interactive region growing, live-wire, etc.) improve both the accuracy and throughput of the operator when working in manual mode. 
The quality of automatic segmentation has been evaluated by comparing the results of fully automated analysis to manual analysis of the same images. The comparison shows a high degree of correlation that validates the quality of the automatic segmentation approach.
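
    A toy sketch of threshold-based fat quantification with connected-component noise removal. The threshold, voxel volume, and minimum component size are invented; the Fat Volume Tool itself uses adaptive object extraction and landmark-based masking rather than a fixed threshold:

```python
import numpy as np
from scipy import ndimage

def fat_volume_ml(image, thresh, voxel_ml, min_voxels=10):
    """Threshold fat-bright voxels, drop tiny connected components as
    noise, and report the remaining volume in ml."""
    mask = image > thresh
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_voxels))
    return float(keep.sum()) * voxel_ml

img = np.zeros((20, 20, 20))
img[5:15, 5:15, 5:15] = 1.0   # a 1000-voxel fat depot
img[0, 0, 0] = 1.0            # an isolated noise voxel
vol = fat_volume_ml(img, thresh=0.5, voxel_ml=0.01)
```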

  16. A web-based procedure for liver segmentation in CT images

    NASA Astrophysics Data System (ADS)

    Yuan, Rong; Luo, Ming; Wang, Luyao; Xie, Qingguo

    2015-03-01

    Liver segmentation in CT images has been acknowledged as a basic and indispensable part of computer-aided liver surgery systems for operation design and risk evaluation. In this paper, we introduce and implement a web-based procedure for liver segmentation to help radiologists and surgeons obtain an accurate result efficiently and expediently. Several clinical datasets are used to evaluate its accessibility and accuracy. The procedure appears to be a promising approach for extracting liver volumes of various shapes. Moreover, users can access the segmentation wherever the Internet is available, without any specific machine.

  17. Development of a passive micromixer based on repeated fluid twisting and flattening, and its application to DNA purification.

    PubMed

    Lee, Nae Yoon; Yamada, Masumi; Seki, Minoru

    2005-11-01

    We have developed a three-dimensional passive micromixer based on new mixing principles, fluid twisting and flattening. This micromixer is constructed by repeating two microchannel segments, a "main channel" and a "flattened channel", which are very different in size and are arranged perpendicularly. At the intersection of these segments the fluid inside the micromixer is twisted and then, in the flattened channel, the diffusion length is greatly reduced, achieving high mixing efficiency. Several types of micromixer were fabricated and the effect of microchannel geometry on mixing performance was evaluated. We also integrated this micromixer with a miniaturized DNA purification device, in which the concentration of the buffer solution could be rapidly changed, to perform DNA purification based on solid-phase extraction.

  18. A model to identify high crash road segments with the dynamic segmentation method.

    PubMed

    Boroujerdian, Amin Mirza; Saffarzadeh, Mahmoud; Yousefi, Hassan; Ghassemian, Hassan

    2014-12-01

    Currently, high social and economic costs, in addition to physical and mental consequences, put road safety among the most important societal issues. This paper presents a novel approach capable of identifying both the location and the length of high crash road segments. It focuses on the locations of accidents along the road and their effective regions. Due to applicability and budget limitations in improving the safety of road segments, it is not possible to treat all high crash road segments; it is therefore of utmost importance to identify high crash road segments and their real lengths so that safety improvements can be prioritized. After evaluating the deficiencies of current road segmentation models, the different kinds of errors caused by these methods are addressed. One of their main deficiencies is that they cannot identify the length of high crash road segments. In this paper, the length of high crash road segments (corresponding to the arrangement of accidents along the road) is identified by converting accident data into the road response signal of through traffic with a dynamic model based on wavelet theory. A significant advantage of the presented method is multi-scale segmentation: the model identifies high crash road segments of different lengths and can recognize small segments within long segments. Applying the model to a real case for identifying 10-20 percent of high crash road segments showed an improvement of 25-38 percent relative to existing methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
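
    The idea of converting accident locations into a road response signal analyzed at multiple scales can be caricatured with moving-average smoothing standing in for the authors' wavelet model. All parameters and data below are invented:

```python
import numpy as np

def high_crash_segments(accident_km, road_len_km, scales_km=(1.0, 5.0),
                        bin_km=0.1, q=0.9):
    """Multiscale screening sketch: bin accident locations into a spatial
    signal, smooth it with moving averages of several widths, and flag
    bins whose smoothed density exceeds the q-quantile at any scale."""
    n = round(road_len_km / bin_km)
    signal, _ = np.histogram(accident_km, bins=n, range=(0.0, road_len_km))
    flagged = np.zeros(n, dtype=bool)
    for s in scales_km:
        w = max(1, round(s / bin_km))
        smooth = np.convolve(signal, np.ones(w) / w, mode="same")
        flagged |= smooth > np.quantile(smooth, q)
    return flagged

rng = np.random.default_rng(2)
# 30 background accidents over a 20 km road plus a 40-accident hot spot at km 12
acc = np.concatenate([rng.uniform(0, 20, 30), rng.normal(12.0, 0.2, 40)])
flags = high_crash_segments(np.clip(acc, 0, 20), 20.0)
```

Because each scale has its own smoothing window, a short dense cluster and a long diffuse stretch can both be flagged, loosely mirroring the multi-scale property described above.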

  19. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, T; Ruan, D

    2015-06-15

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computational burden of extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme that achieves computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is then refined into the desired fusion set size using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model relating the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved segmentation accuracy comparable to the conventional one-stage method with full-fledged registration, but reduced computation time to one third (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance, with mean and median DSC of (0.83, 0.85) versus (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme that achieves significant cost reduction while guaranteeing high segmentation accuracy. The benefit in both complexity and performance is expected to be most pronounced with large-scale heterogeneous data.
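
    The two-step fusion-set selection can be sketched abstractly: rank by a cheap preliminary metric, keep an augmented subset, then re-rank that subset with the expensive metric. The scoring functions below are mock stand-ins for rough and full-fledged registration:

```python
import numpy as np

def two_stage_select(cheap_scores, expensive_fn, m_aug, k):
    """Stage 1: trim the full collection to an augmented subset of size
    m_aug using the cheap preliminary relevance metric.
    Stage 2: refine to the final fusion set of size k using the expensive
    (full-registration) relevance metric."""
    order = np.argsort(cheap_scores)[::-1]      # best-first by cheap metric
    augmented = order[:m_aug]                   # preliminary selection
    refined = sorted(augmented, key=expensive_fn, reverse=True)[:k]
    return [int(i) for i in refined]

rng = np.random.default_rng(3)
true_relevance = rng.uniform(0, 1, 30)               # mock full-registration metric
cheap = true_relevance + rng.normal(0, 0.05, 30)     # noisy rough-registration proxy
fusion = two_stage_select(cheap, lambda i: true_relevance[i], m_aug=10, k=3)
```

Only the m_aug atlases in the augmented subset require the expensive metric, which is where the reported factor-of-three time saving comes from; the paper's inference model sizes m_aug so the best atlases survive stage 1 with high probability.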

  20. Missing observations in multiyear rotation sampling designs

    NASA Technical Reports Server (NTRS)

    Gbur, E. E.; Sielken, R. L., Jr. (Principal Investigator)

    1982-01-01

    Because multiyear estimation of at-harvest stratum crop proportions is more efficient than single-year estimation, the behavior of multiyear estimators in the presence of missing acquisitions was studied. Only the (worst) case in which a segment proportion cannot be estimated for the entire year is considered. The effect of these missing segments on the variance of the at-harvest stratum crop proportion estimator is considered both when missing segments are not replaced and when they are replaced by segments not sampled in previous years. The principal recommendations are to replace missing segments according to some specified strategy, and to use a sequential procedure for selecting a sampling design; i.e., choose an optimal two-year design and then, based on the observed two-year design after segment losses have been taken into account, choose the best possible three-year design having the observed two-year parent design.

  1. Joint segmentation of lumen and outer wall from femoral artery MR images: Towards 3D imaging measurements of peripheral arterial disease.

    PubMed

    Ukwatta, Eranga; Yuan, Jing; Qiu, Wu; Rajchl, Martin; Chiu, Bernard; Fenster, Aaron

    2015-12-01

    Three-dimensional (3D) measurements of peripheral arterial disease (PAD) plaque burden extracted from fast black-blood magnetic resonance (MR) images have been shown to be more predictive of clinical outcomes than PAD stenosis measurements. To this end, accurate segmentation of the femoral artery lumen and outer wall is required for generating volumetric measurements of PAD plaque burden. Here, we propose a semi-automated algorithm to jointly segment the femoral artery lumen and outer wall surfaces from 3D black-blood MR images, which are reoriented and reconstructed along the medial axis of the femoral artery to obtain improved spatial coherence between slices of the long, thin femoral artery and to reduce computation time. The developed segmentation algorithm enforces two priors in a globally optimal manner: the spatial consistency between adjacent 2D slices and the anatomical region order between the femoral artery lumen and outer wall surfaces. The formulated combinatorial optimization problem for segmentation is solved globally and exactly by means of convex relaxation using a coupled continuous max-flow (CCMF) model, which is a dual formulation of the convex relaxed optimization problem. In addition, the CCMF model directly yields an efficient duality-based algorithm based on a modern augmented multiplier optimization scheme, which has been implemented on a GPU for fast computation. The computed segmentations from the developed algorithm were compared to manual delineations from experts using 20 black-blood MR images. The developed algorithm yielded both high accuracy (Dice similarity coefficients ≥ 87% for both the lumen and outer wall surfaces) and high reproducibility (intra-class correlation coefficient of 0.95 for generating vessel wall area), while outperforming the state-of-the-art method in computational time by a factor of ≈ 20. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Tailored Educational Approaches for Consumer Health: A Model to Address Health Promotion in an Era of Personalized Medicine.

    PubMed

    Cohn, Wendy F; Lyman, Jason; Broshek, Donna K; Guterbock, Thomas M; Hartman, David; Kinzie, Mable; Mick, David; Pannone, Aaron; Sturz, Vanessa; Schubart, Jane; Garson, Arthur T

    2018-01-01

    Objective: to develop a model, based on market segmentation, to improve the quality and efficiency of health promotion materials and programs. Market segmentation was used to create segments (groups) from a cross-sectional questionnaire measuring individual characteristics and preferences for health information, and educational and delivery recommendations were developed for each group. The setting was the general population of adults in Virginia, with a random sample of 1201 Virginia residents; respondents are representative of the general population with the exception of older age. Measures covered multiple factors known to impact health promotion, including health status, health system utilization, health literacy, Internet use, learning styles, and preferences. Cluster analysis and discriminant analysis were used to create and validate the segments, and common-sized means to compare factors across them. Educational and delivery recommendations were matched to the 8 distinct segments. For example, the "health challenged and hard to reach" are older, lower literacy, and not likely to seek out health information; their recommendations include a sixth-grade reading level, delivery through a provider, and use of a "push" strategy. This model addresses a need to improve the efficiency and quality of health promotion efforts in an era of personalized medicine. It demonstrates that there are distinct groups with clearly defined educational and delivery recommendations. Health promotion professionals can use Tailored Educational Approaches for Consumer Health to develop and deliver tailored materials to encourage behavior change.
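
    The segment-creation step (cluster analysis on questionnaire features) can be sketched with k-means on standardized mock data. The feature names and values are invented, and only 2 segments are used here for brevity where the study derives 8:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Invented questionnaire features: [health literacy, internet use, age]
hard_to_reach = rng.normal([0.2, 0.1, 70.0], [0.1, 0.1, 5.0], size=(100, 3))
online_savvy  = rng.normal([0.8, 0.9, 35.0], [0.1, 0.1, 5.0], size=(100, 3))
X = np.vstack([hard_to_reach, online_savvy])

Z = StandardScaler().fit_transform(X)            # put features on a common scale
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
```

Standardizing first keeps the age scale (years) from dominating the 0-1 survey scales, which parallels the common-sized means used in the study to compare factors across segments.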

  3. Position-dependent effects of polylysine on Sec protein transport.

    PubMed

    Liang, Fu-Cheng; Bageshwar, Umesh K; Musser, Siegfried M

    2012-04-13

    The bacterial Sec protein translocation system catalyzes the transport of unfolded precursor proteins across the cytoplasmic membrane. Using a recently developed real time fluorescence-based transport assay, the effects of the number and distribution of positive charges on the transport time and transport efficiency of proOmpA were examined. As expected, an increase in the number of lysine residues generally increased transport time and decreased transport efficiency. However, the observed effects were highly dependent on the polylysine position in the mature domain. In addition, a string of consecutive positive charges generally had a more significant effect on transport time and efficiency than separating the charges into two or more charged segments. Thirty positive charges distributed throughout the mature domain resulted in effects similar to 10 consecutive charges near the N terminus of the mature domain. These data support a model in which the local effects of positive charge on the translocation kinetics dominate over total thermodynamic constraints. The rapid translocation kinetics of some highly charged proOmpA mutants suggest that the charge is partially shielded from the electric field gradient during transport, possibly by the co-migration of counter ions. The transport times of precursors with multiple positively charged sequences, or "pause sites," were fairly well predicted by a local effect model. However, the kinetic profile predicted by this local effect model was not observed. Instead, the transport kinetics observed for precursors with multiple polylysine segments support a model in which translocation through the SecYEG pore is not the rate-limiting step of transport.

  4. Position-dependent Effects of Polylysine on Sec Protein Transport*

    PubMed Central

    Liang, Fu-Cheng; Bageshwar, Umesh K.; Musser, Siegfried M.

    2012-01-01

    The bacterial Sec protein translocation system catalyzes the transport of unfolded precursor proteins across the cytoplasmic membrane. Using a recently developed real time fluorescence-based transport assay, the effects of the number and distribution of positive charges on the transport time and transport efficiency of proOmpA were examined. As expected, an increase in the number of lysine residues generally increased transport time and decreased transport efficiency. However, the observed effects were highly dependent on the polylysine position in the mature domain. In addition, a string of consecutive positive charges generally had a more significant effect on transport time and efficiency than separating the charges into two or more charged segments. Thirty positive charges distributed throughout the mature domain resulted in effects similar to 10 consecutive charges near the N terminus of the mature domain. These data support a model in which the local effects of positive charge on the translocation kinetics dominate over total thermodynamic constraints. The rapid translocation kinetics of some highly charged proOmpA mutants suggest that the charge is partially shielded from the electric field gradient during transport, possibly by the co-migration of counter ions. The transport times of precursors with multiple positively charged sequences, or “pause sites,” were fairly well predicted by a local effect model. However, the kinetic profile predicted by this local effect model was not observed. Instead, the transport kinetics observed for precursors with multiple polylysine segments support a model in which translocation through the SecYEG pore is not the rate-limiting step of transport. PMID:22367204

  5. Comparative Performance Analysis of Intel Xeon Phi, GPU, and CPU: A Case Study from Microscopy Image Analysis

    PubMed Central

    Teodoro, George; Kurc, Tahsin; Kong, Jun; Cooper, Lee; Saltz, Joel

    2014-01-01

    We study and characterize the performance of operations in an important class of applications on GPUs and Many Integrated Core (MIC) architectures. Our work is motivated by applications that analyze low-dimensional spatial datasets captured by high resolution sensors, such as image datasets obtained from whole slide tissue specimens using microscopy scanners. Common operations in these applications involve the detection and extraction of objects (object segmentation), the computation of features of each extracted object (feature computation), and characterization of objects based on these features (object classification). In this work, we identify the data access and computation patterns of operations in the object segmentation and feature computation categories. We systematically implement and evaluate the performance of these operations on modern CPUs, GPUs, and MIC systems for a microscopy image analysis application. Our results show that the performance on a MIC of operations that perform regular data access is comparable to, and sometimes better than, that on a GPU. On the other hand, GPUs are significantly more efficient than MICs for operations that access data irregularly. This is a result of the low performance of MICs for random data access. We have also examined the coordinated use of MICs and CPUs. Our experiments show that using a performance-aware task scheduling strategy for application operations improves performance by about 1.29× over a first-come-first-served strategy. This allows applications to obtain high performance efficiency on CPU-MIC systems; the example application attained an efficiency of 84% on 192 nodes (3072 CPU cores and 192 MICs). PMID:25419088
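    The gain from performance-aware scheduling over first-come-first-served can be sketched with a toy two-device model. The policy and the task runtimes below are invented for illustration; the paper's scheduler is more sophisticated than this greedy device-affinity rule.

```python
def makespan(tasks, policy):
    # tasks: (cpu_time, mic_time) pairs; two devices, index 0 = CPU, 1 = MIC
    free = [0.0, 0.0]  # next-free time per device
    for cpu_t, mic_t in tasks:
        if policy == "fcfs":
            dev = 0 if free[0] <= free[1] else 1   # earliest-available device
        else:  # "aware": run each task on the device where it is fastest
            dev = 0 if cpu_t <= mic_t else 1
        free[dev] += (cpu_t, mic_t)[dev]
    return max(free)

# invented workload: irregular ops fast on the CPU, regular ops fast on the MIC
tasks = [(1, 10), (1, 10), (10, 1), (10, 1)]
```

    On this workload FCFS gives a makespan of 11 while the performance-aware policy gives 2, because FCFS keeps placing tasks on whichever device frees up first, even when that device runs them 10× slower.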

  6. Accuracy and efficiency of computer-aided anatomical analysis using 3D visualization software based on semi-automated and automated segmentations.

    PubMed

    An, Gao; Hong, Li; Zhou, Xiao-Bing; Yang, Qiong; Li, Mei-Qing; Tang, Xiang-Yang

    2017-03-01

    We investigated and compared the functionality of two 3D visualization software packages, provided by a CT vendor and a third-party vendor, respectively. Using surgical anatomical measurement as a baseline, we evaluated the accuracy of 3D visualization and verified their utility in computer-aided anatomical analysis. The study cohort consisted of 50 adult cadavers fixed with the classical formaldehyde method. The computer-aided anatomical analysis was based on CT images (in DICOM format) acquired by helical scan with contrast enhancement, using a CT-vendor-provided 3D visualization workstation (Syngo) and a third-party 3D visualization software package (Mimics) installed on a PC. Automated and semi-automated segmentations were utilized in the 3D visualization workstation and software, respectively. The functionality and efficiency of the automated and semi-automated segmentation methods were compared. Using surgical anatomical measurement as a baseline, the accuracy of 3D visualization based on automated and semi-automated segmentations was quantitatively compared. In semi-automated segmentation, the Mimics 3D visualization software outperformed the Syngo 3D visualization workstation. No significant difference was observed in anatomical data measurement between the Syngo 3D visualization workstation and the Mimics 3D visualization software (P>0.05). Both the Syngo 3D visualization workstation provided by a CT vendor and the Mimics 3D visualization software provided by a third-party vendor possessed the functionality, efficiency and accuracy needed for computer-aided anatomical analysis. Copyright © 2016 Elsevier GmbH. All rights reserved.

  7. Current density distributions, field distributions and impedance analysis of segmented deep brain stimulation electrodes

    NASA Astrophysics Data System (ADS)

    Wei, Xuefeng F.; Grill, Warren M.

    2005-12-01

    Deep brain stimulation (DBS) electrodes are designed to stimulate specific areas of the brain. The most widely used DBS electrode has a linear array of 4 cylindrical contacts that can be selectively turned on depending on the placement of the electrode and the specific area of the brain to be stimulated. The efficacy of DBS therapy can be improved by localizing the current delivery into specific populations of neurons and by increasing the power efficiency through a suitable choice of electrode geometrical characteristics. We investigated segmented electrode designs created by sectioning each cylindrical contact into multiple rings. Prototypes of these designs, made with different materials and larger dimensions than those of clinical DBS electrodes, were evaluated in vitro and in simulation. A finite element model was developed to study the effects of varying the electrode characteristics on the current density and field distributions in an idealized electrolytic medium and in vitro experiments were conducted to measure the electrode impedance. The current density over the electrode surface increased towards the edges of the electrode, and multiple edges increased the non-uniformity of the current density profile. The edge effects were more pronounced over the end segments than over the central segments. Segmented electrodes generated larger magnitudes of the second spatial difference of the extracellular potentials, and thus required lower stimulation intensities to achieve the same level of neuronal activation as solid electrodes. For a fixed electrode conductive area, increasing the number of segments (edges) decreased the impedance compared to a single solid electrode, because the average current density over the segments increased. 
Edge effects played a critical role in determining the current density distributions, neuronal excitation patterns, and impedance of cylindrical electrodes, and segmented electrodes provide a means to increase the efficiency of DBS.

  8. Towards Automatic Image Segmentation Using Optimised Region Growing Technique

    NASA Astrophysics Data System (ADS)

    Alazab, Mamoun; Islam, Mofakharul; Venkatraman, Sitalakshmi

    Image analysis is being adopted extensively in many applications such as digital forensics, medical treatment and industrial inspection, primarily for diagnostic purposes. Hence, there is a growing interest among researchers in developing new segmentation techniques to aid the diagnosis process. Manual segmentation of images is labour intensive, extremely time consuming and prone to human error, and hence an automated real-time technique is warranted in such applications. There is no universally applicable automated segmentation technique that will work for all images, as image segmentation is quite complex and unique to each application domain. Hence, to fill the gap, this paper presents an efficient segmentation algorithm that can segment a digital image of interest into a more meaningful arrangement of regions and objects. Our algorithm combines a region growing approach with optimised elimination of false boundaries to arrive at more meaningful segments automatically. We demonstrate this using X-ray teeth images that were taken for real-life dental diagnosis.

  9. Multi-tissue and multi-scale approach for nuclei segmentation in H&E stained images.

    PubMed

    Salvi, Massimo; Molinari, Filippo

    2018-06-20

    Accurate nuclei detection and segmentation in histological images is essential for many clinical purposes. While manual annotations are time-consuming and operator-dependent, fully automated segmentation remains a challenging task due to the high variability of cell intensity, size and morphology. Most of the proposed algorithms for the automated segmentation of nuclei were designed for a specific organ or tissue. The aim of this study was to develop and validate a fully automated multiscale method, named MANA (Multiscale Adaptive Nuclei Analysis), for nuclei segmentation in different tissues and magnifications. MANA was tested on a dataset of H&E stained tissue images with more than 59,000 annotated nuclei, taken from six organs (colon, liver, bone, prostate, adrenal gland and thyroid) and three magnifications (10×, 20×, 40×). Automatic results were compared with manual segmentations and with three open-source software tools designed for nuclei detection. For each organ, MANA always obtained an F1-score higher than 0.91, with an average F1 of 0.9305 ± 0.0161. The average computational time was about 20 s independently of the number of nuclei to be detected (always higher than 1000), indicating the efficiency of the proposed technique. To the best of our knowledge, MANA is the first fully automated multi-scale and multi-tissue algorithm for nuclei detection. Overall, the robustness and versatility of MANA allowed it to achieve, on different organs and magnifications, performance in line with or better than that of state-of-the-art algorithms optimized for single tissues.
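    The F1-score used to validate detection results is the harmonic mean of precision and recall; a minimal computation from true-positive, false-positive and false-negative counts (the example counts are invented):

```python
def f1_score(tp, fp, fn):
    # precision: fraction of detected nuclei that are real
    precision = tp / (tp + fp)
    # recall: fraction of real nuclei that were detected
    recall = tp / (tp + fn)
    # F1 is the harmonic mean of the two
    return 2 * precision * recall / (precision + recall)
```

    For example, 90 correctly detected nuclei with 10 spurious detections and 10 missed nuclei give precision = recall = 0.9, hence F1 = 0.9.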

  10. Automatic 3D liver location and segmentation via convolutional neural network and graph cut.

    PubMed

    Lu, Fang; Wu, Fa; Hu, Peijun; Peng, Zhiyi; Kong, Dexing

    2017-02-01

    Segmentation of the liver from abdominal computed tomography (CT) images is an essential step in some computer-assisted clinical interventions, such as surgery planning for living donor liver transplant, radiotherapy and volume measurement. In this work, we develop a deep learning algorithm with graph cut refinement to automatically segment the liver in CT scans. The proposed method consists of two main steps: (i) simultaneous liver detection and probabilistic segmentation using a 3D convolutional neural network; (ii) accuracy refinement of the initial segmentation with graph cut and the previously learned probability map. The proposed approach was validated on forty CT volumes taken from two public databases, MICCAI-Sliver07 and 3Dircadb1. For the MICCAI-Sliver07 test dataset, the calculated mean volumetric overlap error (VOE), relative volume difference (RVD), average symmetric surface distance (ASD), root-mean-square symmetric surface distance (RMSD) and maximum symmetric surface distance (MSD) are 5.9%, 2.7%, 0.91 mm, 1.88 mm and 18.94 mm, respectively. For the 3Dircadb1 dataset, the calculated mean VOE, RVD, ASD, RMSD and MSD are 9.36%, 0.97%, 1.89 mm, 4.15 mm and 33.14 mm, respectively. The proposed method is fully automatic, without any user interaction. Quantitative results reveal that the proposed approach is efficient and accurate for hepatic volume estimation in a clinical setup. The high correlation between the automatic and manual references shows that the proposed method may be good enough to replace the time-consuming and nonreproducible manual segmentation method.
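    The two volume-based metrics, VOE and RVD, have simple set-based definitions; a sketch over toy voxel-index sets (the 1-D "volumes" below are invented for illustration):

```python
def voe(seg, ref):
    # volumetric overlap error (%): 100 * (1 - |A ∩ B| / |A ∪ B|)
    return 100.0 * (1 - len(seg & ref) / len(seg | ref))

def rvd(seg, ref):
    # relative volume difference (%): signed volume mismatch vs. the reference
    return 100.0 * (len(seg) - len(ref)) / len(ref)

# toy "volumes" as sets of voxel indices
seg = {1, 2, 3, 4}
ref = {2, 3, 4, 5}
```

    Note that RVD can be zero even for a poor segmentation (equal volumes, wrong location), which is why it is reported alongside overlap and surface-distance metrics.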

  11. PaCeQuant: A Tool for High-Throughput Quantification of Pavement Cell Shape Characteristics.

    PubMed

    Möller, Birgit; Poeschl, Yvonne; Plötner, Romina; Bürstenbinder, Katharina

    2017-11-01

    Pavement cells (PCs) are the most frequently occurring cell type in the leaf epidermis and play important roles in leaf growth and function. In many plant species, PCs form highly complex jigsaw-puzzle-shaped cells with interlocking lobes. Understanding their development is of high interest for plant science research because of their importance for leaf growth and hence for plant fitness and crop yield. Studies of PC development, however, are limited, because robust methods are lacking that enable automatic segmentation and quantification of PC shape parameters suitable to reflect their cellular complexity. Here, we present our new ImageJ-based tool, PaCeQuant, which provides a fully automatic image analysis workflow for PC shape quantification. PaCeQuant automatically detects cell boundaries of PCs from confocal input images and enables manual correction of automatic segmentation results or direct import of manually segmented cells. PaCeQuant simultaneously extracts 27 shape features that include global, contour-based, skeleton-based, and PC-specific object descriptors. In addition, we included a method for classification and analysis of lobes at two-cell junctions and at three-cell junctions. We provide an R script for graphical visualization and statistical analysis. We validated PaCeQuant by extensive comparative analysis against manual segmentation and existing quantification tools and demonstrated its usability for analyzing PC shape characteristics during development and between different genotypes. PaCeQuant thus provides a platform for robust, efficient, and reproducible quantitative analysis of PC shape characteristics that can easily be applied to study PC development in large data sets. © 2017 American Society of Plant Biologists. All Rights Reserved.

  12. Factorization-based texture segmentation

    DOE PAGES

    Yuan, Jiangye; Wang, Deliang; Cheriyadat, Anil M.

    2015-06-17

    This study introduces a factorization-based approach that efficiently segments textured images. We use local spectral histograms as features, and construct an M × N feature matrix using M-dimensional feature vectors in an N-pixel image. Based on the observation that each feature can be approximated by a linear combination of several representative features, we factor the feature matrix into two matrices: one consisting of the representative features and the other containing the weights of the representative features at each pixel used for the linear combination. The factorization method is based on singular value decomposition and nonnegative matrix factorization. The method uses local spectral histograms to discriminate region appearances in a computationally efficient way and at the same time accurately localizes region boundaries. Finally, the experiments conducted on public segmentation data sets show the promise of this simple yet powerful approach.
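    The core factorization Y ≈ Z·β can be sketched in a few lines. The toy feature matrix below is invented, and the representative features Z are picked by hand for clarity; the actual method estimates Z via SVD and nonnegative matrix factorization.

```python
import numpy as np

# toy M x N feature matrix: M = 2 spectral-histogram bins, N = 6 pixels
Y = np.array([[1.0, 0.9, 1.0, 0.1, 0.0, 0.1],
              [0.0, 0.1, 0.0, 0.9, 1.0, 0.9]])
# representative features Z, one column per texture region (hand-picked here)
Z = Y[:, [0, 4]]
# weights beta solve Y ≈ Z @ beta in the least-squares sense
beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
# each pixel is labelled by its dominant representative feature
labels = beta.argmax(axis=0)
```

    The first three pixels are assigned to one texture and the last three to the other, recovering the two regions from the weight matrix alone.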

  13. Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction.

    PubMed

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2018-06-01

    To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
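    A minimal illustration of least-square prediction from causal (west and north) neighbours; the per-region adaptive training and the entropy coder of the actual algorithm are omitted, and the toy ramp image is invented:

```python
import numpy as np

def ls_residuals(img):
    # fit one least-squares predictor: pixel ≈ a*west + b*north + c,
    # then return the prediction residuals an entropy coder would compress
    H, W = img.shape
    A, b = [], []
    for y in range(1, H):
        for x in range(1, W):
            A.append([img[y, x - 1], img[y - 1, x], 1.0])  # causal context
            b.append(img[y, x])
    A, b = np.array(A), np.array(b)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return b - A @ w

# a linear ramp is perfectly predictable, so residuals are (numerically) zero
ramp = np.add.outer(np.arange(4.0), np.arange(4.0))
```

    Small residuals are the point: the more accurate the per-region predictor, the more skewed the residual distribution and the fewer bits the entropy coder needs.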

  14. Use of boron cluster-containing redox nanoparticles with ROS scavenging ability in boron neutron capture therapy to achieve high therapeutic efficiency and low adverse effects.

    PubMed

    Gao, Zhenyu; Horiguchi, Yukichi; Nakai, Kei; Matsumura, Akira; Suzuki, Minoru; Ono, Koji; Nagasaki, Yukio

    2016-10-01

    A boron delivery system with high therapeutic efficiency and low adverse effects is crucial for successful boron neutron capture therapy (BNCT). In this study, we developed boron cluster-containing redox nanoparticles (BNPs) via polyion complex (PIC) formation, using a newly synthesized poly(ethylene glycol)-polyanion (PEG-polyanion, possessing a ¹⁰B-enriched boron cluster as a side chain of one of its segments) and PEG-polycation (possessing a reactive oxygen species (ROS) scavenger as a side chain of one of its segments). The BNPs exhibited high colloidal stability, selective uptake in tumor cells, specific accumulation and long retention in tumor tissue, and ROS scavenging ability. After thermal neutron irradiation, significant suppression of tumor growth was observed in the BNP-treated group with only 5 ppm ¹⁰B in tumor tissues, whereas at least 20 ppm ¹⁰B is generally required for low molecular weight (LMW) ¹⁰B agents. In addition, increased leukocyte levels were observed after thermal neutron irradiation in the LMW ¹⁰B agent-treated group but not in the BNP-treated group, which might be attributed to the ROS scavenging ability of the BNPs. No visual metastasis of tumor cells to other organs was observed 1 month after irradiation in the BNP-treated group. These results suggest that BNPs are promising for enhancing BNCT performance. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Improved and Robust Detection of Cell Nuclei from Four Dimensional Fluorescence Images

    PubMed Central

    Bashar, Md. Khayrul; Yamagata, Kazuo; Kobayashi, Tetsuya J.

    2014-01-01

    Segmentation-free direct methods are quite efficient for automated nuclei extraction from high dimensional images. A few such methods do exist, but most of them do not ensure algorithmic robustness to parameter and noise variations. In this research, we propose a method based on multiscale adaptive filtering for efficient and robust detection of nuclei centroids from four dimensional (4D) fluorescence images. A temporal feedback mechanism is employed between the enhancement and the initial detection steps of a typical direct method. We estimate the minimum and maximum nuclei diameters from the previous frame and feed them back as filter lengths for multiscale enhancement of the current frame. A radial intensity-gradient function is optimized at positions of initial centroids to estimate all nuclei diameters. This procedure continues for processing subsequent images in the sequence. The above mechanism thus ensures proper enhancement by automated estimation of major parameters. This brings robustness and safeguards the system against additive noise and the effects of wrong parameters. Later, the method and its single-scale variant are simplified for further reduction of parameters. The proposed method is then extended for nuclei volume segmentation. The same optimization technique is applied to the final centroid positions of the enhanced image, and the estimated diameters are projected onto the binary candidate regions to segment nuclei volumes. Our method is finally integrated with a simple sequential tracking approach to establish nuclear trajectories in the 4D space. Experimental evaluations with five image sequences (each having 271 3D sequential images) corresponding to five different mouse embryos show promising performances of our methods in terms of nuclear detection, segmentation, and tracking. A detailed analysis with a sub-sequence of 101 3D images from an embryo reveals that the proposed method can improve the nuclei detection accuracy by 9% over the previous methods, which used inappropriately large-valued parameters. Results also confirm that the proposed method and its variants achieve high detection accuracies (98% mean F-measure) irrespective of large variations of filter parameters and noise levels. PMID:25020042

  16. SLIC superpixels compared to state-of-the-art superpixel methods.

    PubMed

    Achanta, Radhakrishna; Shaji, Appu; Smith, Kevin; Lucchi, Aurelien; Fua, Pascal; Süsstrunk, Sabine

    2012-11-01

    Computer vision applications have come to rely increasingly on superpixels in recent years, but it is not always clear what constitutes a good superpixel algorithm. In an effort to understand the benefits and drawbacks of existing methods, we empirically compare five state-of-the-art superpixel algorithms for their ability to adhere to image boundaries, speed, memory efficiency, and their impact on segmentation performance. We then introduce a new superpixel algorithm, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels. Despite its simplicity, SLIC adheres to boundaries as well as or better than previous methods. At the same time, it is faster and more memory efficient, improves segmentation performance, and is straightforward to extend to supervoxel generation.
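    A much-simplified single-channel sketch of the SLIC idea: k-means in a joint (intensity, x, y) space with the spatial term scaled by m/S. Real SLIC works in CIELAB colour space and restricts each center's search to a 2S × 2S window; the toy image below is invented.

```python
import numpy as np

def slic_like(img, k=2, m=1.0, iters=5):
    H, W = img.shape
    S = np.sqrt(H * W / k)                    # expected superpixel spacing
    ys, xs = np.mgrid[0:H, 0:W]
    # joint feature: intensity plus spatial coordinates scaled by m/S
    F = np.stack([img.ravel(), ys.ravel() * m / S, xs.ravel() * m / S], axis=1)
    # seed centers at evenly spaced pixels
    centers = F[np.linspace(0, H * W - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = np.linalg.norm(F[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = F[labels == j].mean(axis=0)
    return labels.reshape(H, W)

img = np.zeros((4, 4)); img[:, 2:] = 10.0     # two flat "textures"
seg = slic_like(img)
```

    The compactness weight m trades boundary adherence against regular superpixel shape: with a small m the two flat regions are recovered exactly, while a large m would force near-square superpixels regardless of the intensity edge.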

  17. Laser micromachining of cadmium tungstate scintillator for high energy X-ray imaging

    NASA Astrophysics Data System (ADS)

    Richards, Sion Andreas

    Pulsed laser ablation has been investigated as a method for the creation of thick segmented scintillator arrays for high-energy X-ray radiography. Thick scintillators are needed to improve the X-ray absorption at high energies, while segmentation is required for spatial resolution. Monte-Carlo simulations predicted that reflections at the inter-segment walls were the greatest source of loss of scintillation photons. As a result of this, fine pitched arrays would be inefficient as the number of reflections would be significantly higher than in large pitch arrays. Nanosecond and femtosecond pulsed laser ablation were investigated as methods to segment cadmium tungstate (CdWO₄). The effects of laser parameters on the ablation mechanisms, laser-induced material changes and debris produced were investigated using optical and electron microscopy, energy dispersive X-ray spectroscopy and X-ray photoelectron spectroscopy for both types of lasers. It was determined that nanosecond ablation was unsuitable due to the large amount of cracking and a heat affected zone created during the ablation process. Femtosecond pulsed laser ablation was found to induce less damage. The optimised parameters for a 1028 nm femtosecond laser were found to be a pulse energy of 54 μJ corresponding to a fluence of 5.3 J cm⁻², a pulse duration of 190 fs, a repetition rate of 78.3 kHz and a laser scan speed of 707 mm s⁻¹, achieving a normalised pulse overlap of 0.8. A serpentine scan pattern was found to minimise damage caused by anisotropic thermal expansion. Femtosecond pulsed ablation was also found to create a layer of tungsten and cadmium sub-oxides on the surface of the crystals. The CdWO₄ could be cleaned by immersing it in ammonium hydroxide at 45°C for 15 minutes. However, XPS indicated that the ammonium hydroxide formed a thin layer of CdCO₃ and Cd(OH)₂ on the surface. Prototype arrays were shown to be able to resolve features as small as 0.5 mm using keV energy X-rays. The most efficient prototype showed a low detective quantum efficiency of 0.08±0.01 at 0 lp/mm using a tube voltage of 160 kVp.

  18. Efficient 3D multi-region prostate MRI segmentation using dual optimization.

    PubMed

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2013-01-01

    Efficient and accurate extraction of the prostate, and in particular its clinically meaningful sub-regions, from 3D MR images is of great interest in image-guided prostate interventions and diagnosis of prostate cancer. In this work, we propose a novel multi-region segmentation approach to simultaneously locating the boundaries of the prostate and its two major sub-regions: the central gland and the peripheral zone. The proposed method utilizes prior knowledge of the spatial region consistency and employs a customized prostate appearance model to simultaneously segment multiple clinically meaningful regions. We solve the resulting challenging combinatorial optimization problem by means of convex relaxation, for which we introduce a novel spatially continuous flow-maximization model and demonstrate its duality to the investigated convex relaxed optimization problem with the region consistency constraint. Moreover, the proposed continuous max-flow model naturally leads to a new and efficient continuous max-flow based algorithm, which enjoys great numerical advantages and can be readily implemented on GPUs. Experiments using 15 T2-weighted 3D prostate MR images, with inter- and intra-operator variability analysis, demonstrate the promising performance of the proposed approach.

  19. Automated grain extraction and classification by combining improved region growing segmentation and shape descriptors in electromagnetic mill classification system

    NASA Astrophysics Data System (ADS)

    Budzan, Sebastian

    2018-04-01

    In this paper, an automatic method for grain detection and classification is presented. As input, it uses a single digital image of milled copper ore obtained with a high-quality digital camera. Grinding is an extremely energy- and cost-intensive process, so the granularity evaluation should be performed with high efficiency and low time consumption. The method proposed in this paper is based on three-stage image processing. First, all grains are detected using Seeded Region Growing (SRG) segmentation with the proposed adaptive thresholding based on the calculation of the Relative Standard Deviation (RSD). In the next step, the detection results are improved using information about the shape of the detected grains, obtained from a distance map. Finally, each grain in the sample is classified into one of the predefined granularity classes. The quality of the proposed method was evaluated using samples of nominal granularity and compared with other methods.
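    A minimal seeded-region-growing sketch with an RSD-derived tolerance. The adaptive rule and the toy grain image below are invented stand-ins for the paper's thresholding; they only illustrate how a region grows while candidate pixels stay statistically close to the region mean.

```python
import numpy as np
from collections import deque

def region_grow(img, seed):
    H, W = img.shape
    region, queue, vals = {seed}, deque([seed]), [img[seed]]
    while queue:
        y, x = queue.popleft()
        mean = float(np.mean(vals))
        rsd = np.std(vals) / mean if mean else 0.0   # relative std deviation
        tol = max(2.0, 3.0 * rsd * mean)             # invented adaptive rule
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < H and 0 <= nx < W and (ny, nx) not in region \
                    and abs(img[ny, nx] - mean) <= tol:
                region.add((ny, nx))
                queue.append((ny, nx))
                vals.append(img[ny, nx])
    return region

grain = np.full((5, 5), 10.0); grain[1:4, 1:4] = 100.0  # one bright grain
pixels = region_grow(grain, (2, 2))
```

    Starting from a seed inside the bright 3 × 3 grain, growth stops exactly at the grain boundary because the dark background falls far outside the region's tolerance.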

  20. Lens-free all-fiber probe with an optimized output beam for optical coherence tomography.

    PubMed

    Ding, Zhihua; Qiu, Jianrong; Shen, Yi; Chen, Zhiyan; Bao, Wen

    2017-07-15

    A high-efficiency lensless all-fiber probe for optical coherence tomography (OCT) is presented. The probe is composed of a segment of large-core multimode fiber (MMF), a segment of tapered MMF, and a length of single-mode fiber (SMF). A controllable output beam can be designed by a simple adjustment of its probe structure parameters (PSPs), instead of the selection of fibers with different optical parameters. A side-view probe with a diameter of 340 μm and a rigid length of 6.37 mm was fabricated, which provides an effective imaging range of ∼0.6 mm with a full width at half-maximum beam diameter of less than 30 μm. The insertion loss of the probe was measured to be 0.81 dB, ensuring a high sensitivity of 102.25 dB. Satisfactory images were obtained by the probe-based OCT system, demonstrating the feasibility of the probe for endoscopic OCT applications.

  1. Introgression of leaf rust and stripe rust resistance from Sharon goatgrass (Aegilops sharonensis Eig) into bread wheat (Triticum aestivum L.).

    PubMed

    Millet, E; Manisterski, J; Ben-Yehuda, P; Distelfeld, A; Deek, J; Wan, A; Chen, X; Steffenson, B J

    2014-06-01

    Leaf rust and stripe rust are devastating wheat diseases, causing significant yield losses in many regions of the world. The use of resistant varieties is the most efficient way to protect wheat crops from these diseases. Sharon goatgrass (Aegilops sharonensis or AES), which is a diploid wild relative of wheat, exhibits a high frequency of leaf and stripe rust resistance. We used the resistant AES accession TH548 and induced homoeologous recombination by the ph1b allele to obtain resistant wheat recombinant lines carrying AES chromosome segments in the genetic background of the spring wheat cultivar Galil. The gametocidal effect from AES was overcome by using an "anti-gametocidal" wheat mutant. These recombinant lines were found resistant to highly virulent races of the leaf and stripe rust pathogens in Israel and the United States. Molecular DArT analysis of the different recombinant lines revealed different lengths of AES segments on wheat chromosome 6B, which indicates the location of both resistance genes.

  2. Document segmentation for high-quality printing

    NASA Astrophysics Data System (ADS)

    Ancin, Hakan

    1997-04-01

    A technique to segment dark text on the light background of mixed-mode color documents is presented. This process does not perceptually change graphics and photo regions. Color documents are scanned and printed from various media which usually do not have a clean background. This is especially the case for printouts generated from thin magazine samples; these printouts usually include text and figures from the back of the page, which is called bleeding. Removal of bleeding artifacts improves the perceptual quality of the printed document and reduces color ink usage. By detecting the light background of the document, these artifacts are removed from background regions. Detection of dark text regions also enables the halftoning algorithms to use true black ink for black text pixels instead of composite black. The processed document contains sharp black text on a white background, resulting in improved perceptual quality and better ink utilization. The described method is memory efficient and requires a small number of scan lines of high-resolution color documents during processing.
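    The background/text separation can be sketched as two-sided thresholding of a grayscale scan line. The fixed thresholds and toy pixel values below are invented; the paper detects the light background adaptively rather than with constants.

```python
import numpy as np

def clean_page(gray, bg_thresh=200, text_thresh=60):
    # push light background (including faint bleed-through) to pure white
    # and dark text to true black; mid-tones (graphics/photos) are untouched
    out = gray.copy()
    out[gray >= bg_thresh] = 255
    out[gray <= text_thresh] = 0
    return out

# one toy scan line: background, bleed-through, photo pixel, text pixel
page = np.array([245, 230, 150, 40], dtype=np.uint8)
```

    The bleed-through pixel (230) is absorbed into the white background, the photo pixel (150) is left alone, and the text pixel (40) becomes true black, which lets the halftoner use black ink instead of composite black.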

  3. Reactor coolant pump flywheel

    DOEpatents

    Finegan, John Raymond; Kreke, Francis Joseph; Casamassa, John Joseph

    2013-11-26

    A flywheel for a pump, and in particular a flywheel having a number of high density segments for use in a nuclear reactor coolant pump. The flywheel includes an inner member and an outer member. A number of high density segments are provided between the inner and outer members. The high density segments may be formed from a tungsten based alloy. A preselected gap is provided between each of the number of high density segments. The gap accommodates thermal expansion of each of the number of segments and resists the hoop stress effect/keystoning of the segments.
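    The sizing of the preselected gap can be illustrated with the standard linear thermal expansion relation ΔL = αLΔT; the numbers below are illustrative assumptions, not values from the patent:

```python
def required_gap(segment_length_m, alpha_per_k, delta_t_k):
    """Minimum circumferential gap to absorb free thermal expansion of one
    flywheel segment (linear-expansion approximation)."""
    return alpha_per_k * segment_length_m * delta_t_k

# Illustrative numbers only (not from the patent): a 0.30 m tungsten-alloy
# segment (alpha ~ 4.5e-6 /K) heated by 250 K needs roughly a 0.34 mm gap.
gap = required_gap(0.30, 4.5e-6, 250.0)
```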

  4. TernaryNet: faster deep model inference without GPUs for medical 3D segmentation using sparse and binary convolutions.

    PubMed

    Heinrich, Mattias P; Blendowski, Max; Oktay, Ozan

    2018-05-30

    Deep convolutional neural networks (DCNN) are currently ubiquitous in medical imaging. While their versatility and high-quality results for common image analysis tasks including segmentation, localisation and prediction are astonishing, the large representational power comes at the cost of highly demanding computational effort. This limits their practical applications for image-guided interventions and diagnostic (point-of-care) support using mobile devices without graphics processing units (GPU). We propose a new scheme that approximates both trainable weights and neural activations in deep networks by ternary values and tackles the open question of backpropagation when dealing with non-differentiable functions. Our solution enables the removal of the expensive floating-point matrix multiplications throughout any convolutional neural network and replaces them with energy- and time-preserving binary operators and population counts. We evaluate our approach for the segmentation of the pancreas in CT. Here, our ternary approximation within a fully convolutional network leads to more than 90% memory reductions and high accuracy (without any post-processing) with a Dice overlap of 71.0% that comes close to the one obtained when using networks with high-precision weights and activations. We further provide a concept for sub-second inference without GPUs and demonstrate significant improvements in comparison with binary quantisation and without our proposed ternary hyperbolic tangent continuation. We present a key enabling technique for highly efficient DCNN inference without GPUs that will help to bring the advances of deep learning to practical clinical applications. It also holds great promise for improving accuracies in large-scale medical data retrieval.
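    The core ternarization idea, mapping each weight to {-1, 0, +1} around a symmetric threshold, can be sketched as follows; the threshold heuristic is an assumption for illustration, and the paper's exact scheme may differ:

```python
import numpy as np

def ternarize(w, delta_scale=0.7):
    """Quantize weights to {-1, 0, +1} with a symmetric threshold.
    delta = delta_scale * mean(|w|) is one common heuristic, used here as
    an assumption; TernaryNet's exact scheme may differ."""
    delta = delta_scale * np.mean(np.abs(w))
    q = np.zeros_like(w, dtype=np.int8)
    q[w > delta] = 1     # strongly positive weights -> +1
    q[w < -delta] = -1   # strongly negative weights -> -1
    return q             # everything near zero stays 0 (sparse)

w = np.array([0.9, -0.8, 0.05, -0.02, 0.4])
q = ternarize(w)
```

    With weights and activations both in {-1, 0, +1}, multiply-accumulate reduces to sign flips and population counts, which is the source of the claimed speed and energy savings.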

  5. Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor.

    PubMed

    Huang, Lvwen; Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing

    2017-08-23

    Object tracking is a crucial research subfield in computer vision, with wide applications in navigation, robotics, military systems and beyond. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved. Building on preprocessing, fast ground segmentation, Euclidean clustering of outliers, View Feature Histogram (VFH) feature extraction, object model construction, and search-and-match of a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results, tested and validated on three kinds of scenes with partial target occlusion and interference, different moving speeds and different trajectories, show that the Kalman filter offers high efficiency while the adaptive particle filter offers high robustness and high precision. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields.
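    The Kalman filtering step for position estimation can be sketched with a textbook constant-velocity filter for a single coordinate; the noise parameters and the simplification to 1-D are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def kalman_step(x, P, z, dt=0.1, q=1e-3, r=1e-2):
    """One predict/update cycle of a constant-velocity Kalman filter for a
    single coordinate (state = [position, velocity]).
    Noise parameters q, r are illustrative assumptions."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with measurement z
    y = np.array([z]) - H @ x                  # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for z in [0.1, 0.2, 0.3, 0.4]:                 # target moving at ~1 m/s
    x, P = kalman_step(x, P, z)
```

    A full 3-D tracker runs one such filter per axis (or a 6-state joint filter); the particle filter variant trades this closed-form update for sampling, gaining robustness at higher cost.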

  6. Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor

    PubMed Central

    Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing

    2017-01-01

    Object tracking is a crucial research subfield in computer vision, with wide applications in navigation, robotics, military systems and beyond. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved. Building on preprocessing, fast ground segmentation, Euclidean clustering of outliers, View Feature Histogram (VFH) feature extraction, object model construction, and search-and-match of a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results, tested and validated on three kinds of scenes with partial target occlusion and interference, different moving speeds and different trajectories, show that the Kalman filter offers high efficiency while the adaptive particle filter offers high robustness and high precision. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields. PMID:28832520

  7. Operating Room of the Future: Advanced Technologies in Safe and Efficient Operating Rooms

    DTIC Science & Technology

    2010-10-01

    research, and treatment purposes. A laser optical mouse and a graphics tablet were used by radiologists to segment 12 simulated reference lesions per...radiologists segmented a total of 132 simulated lesions. Overall error in contour segmentation was less with the graphics tablet than with the mouse...(P<0.0001). Error in area of segmentation was not significantly different between the tablet and the mouse (P=0.62). Time for segmentation was less with

  8. Preparation of uniform-sized PELA microspheres with high encapsulation efficiency of antigen by premix membrane emulsification.

    PubMed

    Wei, Qiang; Wei, Wei; Tian, Rui; Wang, Lian-Yan; Su, Zhi-Guo; Ma, Guang-Hui

    2008-07-15

    Relatively uniform-sized poly(lactide-co-ethylene glycol) (PELA) microspheres with high encapsulation efficiency were prepared rapidly by a novel method combining emulsion-solvent extraction and premix membrane emulsification. Briefly, preparation of coarse double emulsions was followed by additional premix membrane emulsification, and antigen-loaded microspheres were obtained by further solidification. Under the optimum conditions, the particle size was about 1 μm and the coefficient of variation (CV) value was 18.9%. Confocal laser scanning microscopy and flow cytometry analysis showed that the inner droplets were small and evenly dispersed and the antigen was loaded uniformly in each microsphere when a sonication technique was employed to prepare the primary emulsion. The distribution pattern of the PEG segment played an important role in the properties of the microspheres. Compared with the triblock copolymer PLA-PEG-PLA, the diblock copolymer PLA-mPEG yielded a more stable interfacial layer at the oil-water interface, and thus was more suitable for stabilizing the primary emulsion and preventing coalescence of the inner droplets with the external water phase, resulting in high encapsulation efficiency (90.4%). On the other hand, the solidification rate determined the time available for coalescence during microsphere fabrication, and thus affected encapsulation efficiency. Taken together, improving the polymer properties and the solidification rate are considered two effective strategies for achieving high encapsulation efficiency.

  9. Use of graph algorithms in the processing and analysis of images with focus on the biomedical data.

    PubMed

    Zdimalova, M; Roznovjak, R; Weismann, P; El Falougy, H; Kubikova, E

    2017-01-01

    Image segmentation is a well-known problem in the field of image processing. A great number of methods based on different approaches to this issue have been created. One of these approaches utilizes findings from graph theory. Our work focuses on segmentation using shortest paths in a graph. Specifically, we deal with the "Intelligent Scissors" method, which uses Dijkstra's algorithm to find the shortest paths. We created new software in the Microsoft Visual Studio 2013 integrated development environment, in Visual C++ using the language C++/CLI: a forms application with a graphical user interface for Windows, built on the .NET platform (version 4.5). The program was used for handling and processing the original medical data. The major disadvantage of the "Intelligent Scissors" method is the computation time of Dijkstra's algorithm. However, after the implementation of a more efficient priority queue, this problem could be alleviated. The main advantage of this method, as we see it, is its trainability, which enables it to adapt to the particular kind of edge we need to segment. User involvement has a significant influence on the segmentation process, which greatly helps to achieve high-quality results (Fig. 7, Ref. 13).
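    The more efficient priority queue mentioned above is typically a binary heap. A minimal sketch of Dijkstra's algorithm with a heap-based queue (not the authors' C++/CLI code) is:

```python
import heapq

def dijkstra(graph, start):
    """Shortest-path distances from `start` using a binary-heap priority
    queue, the standard efficiency fix for the Intelligent Scissors step.
    graph: {node: [(neighbour, weight), ...]}"""
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip it
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
d = dijkstra(g, "a")   # the shortest a-to-c path goes through b
```

    In Intelligent Scissors the graph nodes are pixels and edge weights come from gradient-based costs, so the shortest path snaps to object boundaries.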

  10. A multi-segment soft actuator for biomedical applications based on IPMCs

    NASA Astrophysics Data System (ADS)

    Zhao, Dongxu; Wang, Yanjie; Liu, Jiayu; Luo, Meng; Li, Dichen; Chen, Hualing

    2015-04-01

    With the rapid progress of biomedical devices towards miniaturization, flexibility, multifunction and low cost, the restrictions of traditional mechanical structures become particularly apparent, while soft materials have become a research focus in broad fields. As one of the most attractive soft materials, Ionic Polymer-Metal Composite (IPMC) is widely used in artificial muscles and actuators, with the advantages of low driving voltage, high efficiency of electromechanical transduction and functional stability. In this paper, a new intuitive control method is presented to achieve omnidirectional bending movements, applied to a representative actuation structure: a multi-degree-of-freedom soft actuator composed of two bar-shaped IPMC segments with a square cross section. Firstly, the bar-shaped IPMCs were fabricated by the solution casting method, reduction plating and autocatalytic plating, and cut into shape successively. The connectors of the multi-segment IPMC actuator were fabricated by 3D printing. Then, a new control method was introduced to realize an intuitive mapping relationship between the actuator and a joystick manipulator. The control circuit was designed and tested. Finally, the multi-degree-of-freedom actuator of two bar-shaped IPMC segments was implemented and omnidirectional bending movements were achieved, making it a promising actuator for biomedical applications such as endoscopy, catheterization, laparoscopy and the surgical resection of tumors.

  11. Modeling oxygen consumption in the proximal tubule: effects of NHE and SGLT2 inhibition

    PubMed Central

    Vallon, Volker; Edwards, Aurélie

    2015-01-01

    The objective of this study was to investigate how physiological, pharmacological, and pathological conditions that alter sodium reabsorption (TNa) in the proximal tubule affect oxygen consumption (QO2) and Na+ transport efficiency (TNa/QO2). To do so, we expanded a mathematical model of solute transport in the proximal tubule of the rat kidney. The model represents compliant S1, S2, and S3 segments and accounts for their specific apical and basolateral transporters. Sodium is reabsorbed transcellularly, via apical Na+/H+ exchangers (NHE) and Na+-glucose (SGLT) cotransporters, and paracellularly. Our results suggest that TNa/QO2 is 80% higher in S3 than in S1–S2 segments, due to the greater contribution of the passive paracellular pathway to TNa in the former segment. Inhibition of NHE or Na-K-ATPase reduced TNa and QO2, as well as Na+ transport efficiency. SGLT2 inhibition also reduced proximal tubular TNa but increased QO2; these effects were relatively more pronounced in the S3 vs. the S1–S2 segments. Diabetes increased TNa and QO2 and reduced TNa/QO2, owing mostly to hyperfiltration. Since SGLT2 inhibition lowers diabetic hyperfiltration, the net effect on TNa, QO2, and Na+ transport efficiency in the proximal tubule will largely depend on the individual extent to which glomerular filtration rate is lowered. PMID:25855513

  12. Level set method with automatic selective local statistics for brain tumor segmentation in MR images.

    PubMed

    Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak

    2013-01-01

    The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumor images from MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images were calculated. Using local statistics, the tumor objects were identified among different objects. In this level set method, the calculation of the parameters is a challenging task. The calculations of different parameters for different types of images were automatic. The basic thresholding value was updated and adjusted automatically for different MR images. This thresholding value was used to calculate the different parameters in the proposed algorithm. The proposed algorithm was tested on the magnetic resonance images of the brain for tumor segmentation and its performance was evaluated visually and quantitatively. Numerical experiments on some brain tumor images highlighted the efficiency and robustness of this method. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  13. EMSAR: estimation of transcript abundance from RNA-seq data by mappability-based segmentation and reclustering.

    PubMed

    Lee, Soohyun; Seo, Chae Hwa; Alver, Burak Han; Lee, Sanghyuk; Park, Peter J

    2015-09-03

    RNA-seq has been widely used for genome-wide expression profiling. RNA-seq data typically consists of tens of millions of short sequenced reads from different transcripts. However, due to sequence similarity among genes and among isoforms, the source of a given read is often ambiguous. Existing approaches for estimating expression levels from RNA-seq reads tend to compromise between accuracy and computational cost. We introduce a new approach for quantifying transcript abundance from RNA-seq data. EMSAR (Estimation by Mappability-based Segmentation And Reclustering) groups reads according to the set of transcripts to which they are mapped and finds maximum likelihood estimates using a joint Poisson model for each optimal set of segments of transcripts. The method uses nearly all mapped reads, including those mapped to multiple genes. With an efficient transcriptome indexing based on modified suffix arrays, EMSAR minimizes the use of CPU time and memory while achieving accuracy comparable to the best existing methods. EMSAR is a method for quantifying transcripts from RNA-seq data with high accuracy and low computational cost. EMSAR is available at https://github.com/parklab/emsar.
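    The grouping of reads by the set of transcripts to which they map can be sketched as building equivalence classes; this is a simplified illustration of the idea, not EMSAR's implementation:

```python
from collections import Counter

def equivalence_classes(read_maps):
    """Group reads by the exact set of transcripts they map to (the
    grouping idea EMSAR builds on; a simplified sketch).
    read_maps: iterable of iterables of transcript ids, one per read."""
    return Counter(frozenset(m) for m in read_maps)

# Five reads; two map ambiguously to both t1 and t2.
reads = [("t1",), ("t1", "t2"), ("t2",), ("t1", "t2"), ("t1",)]
classes = equivalence_classes(reads)
```

    Each class count then feeds a likelihood model (EMSAR uses a joint Poisson model over segments), so multi-mapped reads contribute instead of being discarded.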

  14. Generation, recognition, and consistent fusion of partial boundary representations from range images

    NASA Astrophysics Data System (ADS)

    Kohlhepp, Peter; Hanczak, Andrzej M.; Li, Gang

    1994-10-01

    This paper presents SOMBRERO, a new system for recognizing and locating 3D, rigid, non-moving objects from range data. The objects may be polyhedral or curved, partially occluding, touching or lying flush with each other. For data collection, we employ 2D time-of-flight laser scanners mounted on a moving gantry robot. By combining sensor and robot coordinates, we obtain 3D cartesian coordinates. Boundary representations (Brep's) provide view-independent geometry models that are both efficiently recognizable and derivable automatically from sensor data. SOMBRERO's methods for generating, matching and fusing Brep's are highly synergetic. A split-and-merge segmentation algorithm with dynamic triangulation builds a partial (2.5D) Brep from scattered data. The recognition module matches this scene description with a model database and outputs recognized objects, their positions and orientations, and possibly surfaces corresponding to unknown objects. We present preliminary results in scene segmentation and recognition. Partial Brep's corresponding to different range sensors or viewpoints can be merged into a consistent, complete and irredundant 3D object or scene model. This fusion algorithm itself uses the recognition and segmentation methods.

  15. Electrical, Thermal, and Mechanical Characterization of Novel Segmented-Leg Thermoelectric Modules

    NASA Astrophysics Data System (ADS)

    D'Angelo, Jonathan; Case, Eldon D.; Matchanov, Nuraddin; Wu, Chun-I.; Hogan, Timothy P.; Barnard, James; Cauchy, Charles; Hendricks, Terry; Kanatzidis, Mercouri G.

    2011-10-01

    In this paper we report on the electrical, thermal, and mechanical characterization of segmented-leg PbTe-based thermoelectric modules. This work featured a thermoelectric module measurement system that was constructed and used to measure 47-couple segmented thermoelectric power generation modules fabricated by Tellurex Corporation using n-type Bi2Te3-xSex to Ag0.86Pb19+xSbTe20 legs and p-type BixSb2-xTe3 to Ag0.9Pb9Sn9Sb0.6Te20 legs. The modules were measured under vacuum with hot-side and cold-side temperatures of approximately 670 K and 312 K, respectively. In addition, the measurements on the PbTe-based materials are compared with measurements performed on Bi2Te3 reference modules. Efficiency values as high as 6.56% were measured on these modules. In addition to the measurement system description and the measurement results on these modules, infrared images of the modules that were used to help identify nonuniformities are also presented.

  16. A moving hum filter to suppress rotor noise in high-resolution airborne magnetic data

    USGS Publications Warehouse

    Xia, J.; Doll, W.E.; Miller, R.D.; Gamey, T.J.; Emond, A.M.

    2005-01-01

    A unique filtering approach is developed to eliminate helicopter rotor noise. It is designed to suppress harmonic noise from a rotor that varies slightly in amplitude, phase, and frequency and that contaminates aeromagnetic data. The filter provides a powerful harmonic noise-suppression tool for data acquired with modern large-dynamic-range recording systems. This three-step approach - polynomial fitting, bandpass filtering, and rotor-noise synthesis - significantly reduces rotor noise without altering the spectra of signals of interest. The two steps before hum filtering - polynomial fitting and bandpass filtering - are critical to accurately model the weak rotor noise. During rotor-noise synthesis, amplitude, phase, and frequency are determined. Data are processed segment by segment so that there is no limit on the length of data. The segment length changes dynamically along a line based on modeling results. Modeling the rotor noise is stable and efficient. Real-world data examples demonstrate that this method can suppress rotor noise by more than 95% when implemented in an aeromagnetic data-processing flow. © 2005 Society of Exploration Geophysicists. All rights reserved.
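    The rotor-noise synthesis step, estimating the amplitude and phase of a harmonic at a known frequency and subtracting it, can be sketched with a least-squares fit; this is a simplified single-harmonic stand-in for the published three-step flow, with illustrative frequencies and amplitudes:

```python
import numpy as np

def subtract_harmonic(signal, fs, f0):
    """Estimate amplitude and phase of a single harmonic at known
    frequency f0 by linear least squares, then subtract it.
    A simplified stand-in for the rotor-noise synthesis step."""
    t = np.arange(len(signal)) / fs
    basis = np.column_stack([np.cos(2 * np.pi * f0 * t),
                             np.sin(2 * np.pi * f0 * t)])
    coef, *_ = np.linalg.lstsq(basis, signal, rcond=None)
    return signal - basis @ coef          # remove the fitted harmonic

fs, f0 = 1000.0, 25.0                     # sample rate and rotor frequency
t = np.arange(2000) / fs
clean = 0.2 * np.sin(2 * np.pi * 1.5 * t)            # slow signal of interest
noisy = clean + 3.0 * np.sin(2 * np.pi * f0 * t + 0.4)  # strong rotor hum
filtered = subtract_harmonic(noisy, fs, f0)
```

    Because the fit targets only the rotor frequency, the slow signal of interest passes through essentially unchanged, which is the property the paper's segment-by-segment approach preserves while also tracking slow drifts in amplitude, phase, and frequency.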

  17. Online measurement for geometrical parameters of wheel set based on structure light and CUDA parallel processing

    NASA Astrophysics Data System (ADS)

    Wu, Kaihua; Shao, Zhencheng; Chen, Nian; Wang, Wenjie

    2018-01-01

    The wearing degree of the wheel set tread is one of the main factors that influence the safety and stability of a running train. The geometrical parameters mainly include flange thickness and flange height. A line-structured laser light was projected on the wheel tread surface, and the geometrical parameters can be deduced from the profile image. An online image acquisition system was designed based on asynchronous reset of the CCD and a CUDA parallel processing unit. Image acquisition was accomplished in hardware interrupt mode. A high-efficiency parallel segmentation algorithm based on CUDA is proposed. The algorithm first divides the image into smaller squares, then extracts the squares belonging to the target by a fusion of the k_means and STING clustering image segmentation algorithms. Segmentation time is less than 0.97 ms, a considerable acceleration ratio compared with serial CPU computation, which greatly improves real-time image processing capacity. When a wheel set is running at limited speed, the system, placed along the railway line, can measure the geometrical parameters automatically. The maximum measuring speed is 120 km/h.
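    The tile-then-cluster idea can be sketched on the CPU as follows; this is a serial stand-in for the CUDA-parallel version, and the tile size, the mean-intensity feature, and the plain k-means in place of the k_means/STING fusion are all assumptions:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Tiny k-means on scalar tile features (CPU stand-in for the
    CUDA-parallel clustering described in the record)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

def segment_tiles(image, tile=4):
    """Split an image into tile x tile squares and label each tile as
    laser-stripe (bright) or background by k-means on mean intensity."""
    h, w = image.shape
    means = np.array([image[i:i + tile, j:j + tile].mean()
                      for i in range(0, h, tile)
                      for j in range(0, w, tile)])
    labels, centers = kmeans_1d(means)
    bright = int(np.argmax(centers))       # cluster with the laser stripe
    return (labels == bright).reshape(h // tile, w // tile)

img = np.zeros((8, 8))
img[:, 4:] = 200.0                          # bright stripe on the right half
mask = segment_tiles(img)
```

    On a GPU, each tile's feature and nearest-center assignment is computed by an independent thread block, which is what makes the sub-millisecond timing plausible.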

  18. Multistage Coupling of Laser-Wakefield Accelerators with Curved Plasma Channel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, J.; Chen, M.; Wu, W. Y.

    Multistage coupling of laser-wakefield accelerators is essential to overcome laser energy depletion for high-energy applications such as TeV level electron-positron colliders. Current staging schemes feed subsequent laser pulses into stages using plasma mirrors, while controlling electron beam focusing with plasma lenses. Here a more compact and efficient scheme is proposed to realize simultaneous coupling of the electron beam and the laser pulse into a second stage. Furthermore, a curved channel with a transition segment is used to guide a fresh laser pulse into a subsequent straight channel, while allowing the electrons to propagate in a straight channel. This scheme then benefits from a shorter coupling distance and continuous guiding of the electrons in plasma, while suppressing transverse beam dispersion. Within moderate laser parameters, particle-in-cell simulations demonstrate that the electron beam from a previous stage can be efficiently injected into a subsequent stage for further acceleration, while maintaining high capture efficiency, stability, and beam quality.

  19. Multistage Coupling of Laser-Wakefield Accelerators with Curved Plasma Channel

    DOE PAGES

    Luo, J.; Chen, M.; Wu, W. Y.; ...

    2018-04-10

    Multistage coupling of laser-wakefield accelerators is essential to overcome laser energy depletion for high-energy applications such as TeV level electron-positron colliders. Current staging schemes feed subsequent laser pulses into stages using plasma mirrors, while controlling electron beam focusing with plasma lenses. Here a more compact and efficient scheme is proposed to realize simultaneous coupling of the electron beam and the laser pulse into a second stage. Furthermore, a curved channel with a transition segment is used to guide a fresh laser pulse into a subsequent straight channel, while allowing the electrons to propagate in a straight channel. This scheme then benefits from a shorter coupling distance and continuous guiding of the electrons in plasma, while suppressing transverse beam dispersion. Within moderate laser parameters, particle-in-cell simulations demonstrate that the electron beam from a previous stage can be efficiently injected into a subsequent stage for further acceleration, while maintaining high capture efficiency, stability, and beam quality.

  20. Broadband Solar Energy Harvesting in Single Nanowire Resonators

    NASA Astrophysics Data System (ADS)

    Yang, Yiming; Peng, Xingyue; Hyatt, Steven; Yu, Dong

    2015-03-01

    Sub-wavelength semiconductor nanowires (NWs) can have optical absorption cross sections far beyond their physical sizes at resonance frequencies, offering a powerful method to simultaneously lower the material consumption and enhance photovoltaic performance. The degree of absorption enhancement is expected to substantially increase in materials with high refractive indices, but this has not yet been experimentally demonstrated. Here, we show that the absorption efficiency can be significantly improved in high-index NWs, by a direct observation of 350% external quantum efficiency (EQE) in lead sulfide (PbS) NWs. Broadband absorption enhancement is also realized in tapered NWs, where light of different wavelength is absorbed at segments with different diameters analogous to a tandem solar cell. Our results quantitatively agree with the finite-difference-time-domain (FDTD) simulations. Overall, our single PbS NW Schottky solar cells taking advantage of optical resonance, near bandgap open circuit voltage, and long minority carrier diffusion length exhibit power conversion efficiency comparable to single Si NW coaxial p-n junction cells, while the fabrication complexity is greatly reduced.
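    An above-unity EQE such as the reported 350% simply means more electrons are collected per second than photons geometrically incident on the wire. The conversion can be sketched as follows; the photocurrent, optical power and wavelength values are illustrative assumptions, not measurements from the paper:

```python
def external_quantum_efficiency(photocurrent_a, optical_power_w, wavelength_m):
    """EQE = (electrons collected per second) / (photons incident per second).
    Values above 1 (100%) indicate an absorption cross section larger than
    the geometric one, as reported for the resonant PbS nanowires."""
    q = 1.602176634e-19        # elementary charge, C
    h = 6.62607015e-34         # Planck constant, J s
    c = 2.99792458e8           # speed of light, m/s
    electrons_per_s = photocurrent_a / q
    photons_per_s = optical_power_w * wavelength_m / (h * c)
    return electrons_per_s / photons_per_s

# Illustrative numbers only: 1 nA photocurrent under 1 nW at 1550 nm
# gives an EQE of about 0.80 (80%).
eqe = external_quantum_efficiency(1e-9, 1e-9, 1550e-9)
```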

  1. Dynamic programming in parallel boundary detection with application to ultrasound intima-media segmentation.

    PubMed

    Zhou, Yuan; Cheng, Xinyao; Xu, Xiangyang; Song, Enmin

    2013-12-01

    Segmentation of carotid artery intima-media in longitudinal ultrasound images for measuring its thickness to predict cardiovascular diseases can be simplified as detecting two nearly parallel boundaries within a certain distance range, when plaque with irregular shapes is not considered. In this paper, we improve the implementation of two dynamic programming (DP) based approaches to parallel boundary detection, dual dynamic programming (DDP) and piecewise linear dual dynamic programming (PL-DDP). Then, a novel DP based approach, dual line detection (DLD), which translates the original 2-D curve position to a 4-D parameter space representing two line segments in a local image segment, is proposed to solve the problem while maintaining efficiency and rotation invariance. To apply the DLD to ultrasound intima-media segmentation, it is embedded in a framework that employs an edge map obtained from multiplication of the responses of two edge detectors with different scales and a coupled snake model that simultaneously deforms the two contours for maintaining parallelism. The experimental results on synthetic images and carotid arteries of clinical ultrasound images indicate improved performance of the proposed DLD compared to DDP and PL-DDP, with respect to accuracy and efficiency. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. High Temperature Water Heat Pipes Radiator for a Brayton Space Reactor Power System

    NASA Astrophysics Data System (ADS)

    El-Genk, Mohamed S.; Tournier, Jean-Michel

    2006-01-01

    A high temperature water heat pipes radiator design is developed for a space power system with a sectored gas-cooled reactor and three Closed Brayton Cycle (CBC) engines, for avoidance of single point failures in reactor cooling and energy conversion and rejection. The CBC engines operate at turbine inlet and exit temperatures of 1144 K and 952 K. They have a net efficiency of 19.4% and each provides 30.5 kWe of net electrical power to the load. A He-Xe gas mixture serves as the turbine working fluid and cools the reactor core, entering at 904 K and exiting at 1149 K. Each CBC loop is coupled to a reactor sector, which is neutronically and thermally coupled, but hydraulically decoupled to the other two sectors, and to a NaK-78 secondary loop with two water heat pipes radiator panels. The segmented panels each consist of a forward fixed segment and two rear deployable segments, operating hydraulically in parallel. The deployed radiator has an effective surface area of 203 m2, and when the rear segments are folded, the stowed power system fits in the launch bay of the DELTA-IV Heavy launch vehicle. For enhanced reliability, the water heat pipes operate below 50% of their wicking limit; the sonic limit is not a concern because of the water, high vapor pressure at the temperatures of interest (384 - 491 K). The rejected power by the radiator peaks when the ratio of the lengths of evaporator sections of the longest and shortest heat pipes is the same as that of the major and minor widths of the segments. The shortest and hottest heat pipes in the rear segments operate at 491 K and 2.24 MPa, and each rejects 154 W. The longest heat pipes operate cooler (427 K and 0.52 MPa) and because they are 69% longer, reject more power (200 W each). The longest and hottest heat pipes in the forward segments reject the largest power (320 W each) while operating at ~ 46% of capillary limit. The vapor temperature and pressure in these heat pipes are 485 K and 1.97 MPa. 
By contrast, the shortest water heat pipes in the forward segments operate much cooler (427 K and 0.52 MPa), and reject a much lower power of 45 W each. The radiator with six fixed and 12 rear deployable segments rejects a total of 324 kWth, weighs 994 kg, and has an average specific power of 326 Wth/kg and a specific mass of 5.88 kg/m2.

  3. Novel multimodality segmentation using level sets and Jensen-Rényi divergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markel, Daniel, E-mail: daniel.markel@mail.mcgill.ca; Zaidi, Habib; Geneva Neuroscience Center, Geneva University, CH-1205 Geneva

    2013-12-15

    Purpose: Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. Methods: A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. Results: The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with an R2 value of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. Conclusions: The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. 
Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.

  4. Novel multimodality segmentation using level sets and Jensen-Rényi divergence.

    PubMed

    Markel, Daniel; Zaidi, Habib; El Naqa, Issam

    2013-12-01

    Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with an R(2) value of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. 
    The result is a flexible framework for multimodal image segmentation that can efficiently incorporate a large number of inputs for IGART.
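
    As a hedged illustration of the divergence driving the contour evolution, the Jensen-Rényi divergence of two normalized intensity histograms can be computed as the Rényi entropy of their weighted mixture minus the weighted sum of their individual entropies (order α = 2 and equal weights chosen here for simplicity; the paper's weighting and optimization details are not reproduced):

```python
import math

def renyi_entropy(p, alpha=2.0):
    """Renyi entropy of order alpha for a discrete distribution p."""
    s = sum(pi ** alpha for pi in p if pi > 0)
    return math.log(s) / (1.0 - alpha)

def jrd(p, q, w=(0.5, 0.5), alpha=2.0):
    """Jensen-Renyi divergence between two normalized histograms p and q."""
    mix = [w[0] * pi + w[1] * qi for pi, qi in zip(p, q)]
    return renyi_entropy(mix, alpha) - (w[0] * renyi_entropy(p, alpha)
                                        + w[1] * renyi_entropy(q, alpha))
```

    The divergence is zero for identical histograms and grows as the foreground and background intensity distributions separate, which is the quantity the level set contour ascends.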

  5. International Space Station (ISS) Bacterial Filter Elements (BFEs): Filter Efficiency and Pressure Testing of Returned Units

    NASA Technical Reports Server (NTRS)

    Green, Robert D.; Agui, Juan H.; Vijayakumar, R.

    2017-01-01

    The air revitalization system aboard the International Space Station (ISS) provides the vital function of maintaining a clean cabin environment for the crew and the hardware. This becomes a serious challenge in pressurized space compartments since no outside air ventilation is possible, and a larger particulate load is imposed on the filtration system because sedimentation is absent in the microgravity environment of Low Earth Orbit (LEO). The ISS Environmental Control and Life Support (ECLS) system architecture in the U.S. Segment uses a distributed particulate filtration approach consisting of traditional High-Efficiency Particulate Air (HEPA) media filters deployed at multiple locations in each U.S. Segment module; these filters are referred to as Bacterial Filter Elements, or BFEs. As part of maintenance, these filters are replaced every 2-5 years, depending on their location in the ISS. In this work, we present particulate removal efficiency, pressure drop, and leak test results for a sample set of 8 BFEs returned from the ISS after filter replacement. The results can potentially be utilized by the ISS Program to ascertain whether the present replacement interval can be maintained or extended to balance the on-ground filter inventory with extension of the lifetime of ISS beyond 2024. These results can also provide meaningful guidance for particulate filter designs under consideration for future deep space exploration missions.

  6. A Microfabricated Segmented-Involute-Foil Regenerator for Enhancing Reliability and Performance of Stirling Engines. Phase III Final Report for the Radioisotope Power Conversion Technology NRA

    NASA Technical Reports Server (NTRS)

    Ibrahim, Mounir B.; Gedeon, David; Wood, Gary; McLean, Jeffrey

    2009-01-01

    Under Phase III of NASA Research Announcement contract NAS3-03124, a prototype nickel segmented-involute-foil regenerator was microfabricated and tested in a Sunpower Frequency-Test-Bed (FTB) Stirling convertor. The team for this effort consisted of Cleveland State University, Gedeon Associates, Sunpower Inc. and International Mezzo Technologies. Testing in the FTB convertor produced about the same efficiency as testing with the original random-fiber regenerator. But the high thermal conductivity of the prototype nickel regenerator was responsible for a significant performance degradation. An efficiency improvement (by a 1.04 factor, according to computer predictions) could have been achieved had the regenerator been made from a low-conductivity material. Also, the FTB convertor was not reoptimized to take full advantage of the microfabricated regenerator's low flow resistance; thus, the efficiency would likely have been even higher had the FTB been completely reoptimized. This report discusses the regenerator microfabrication process, testing of the regenerator in the Stirling FTB convertor, and the supporting analysis. Results of the pre-test computational fluid dynamics (CFD) modeling of the effects of the regenerator-test-configuration diffusers (located at each end of the regenerator) are included. The report also includes recommendations for further development of involute-foil regenerators from a higher-temperature material than nickel.

  7. 3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study.

    PubMed

    Dolz, Jose; Desrosiers, Christian; Ben Ayed, Ismail

    2018-04-15

    This study investigates a 3D and fully convolutional neural network (CNN) for subcortical brain structure segmentation in MRI. 3D CNN architectures have been generally avoided due to their computational and memory requirements during inference. We address the problem via small kernels, allowing deeper architectures. We further model both local and global context by embedding intermediate-layer outputs in the final prediction, which encourages consistency between features extracted at different scales and embeds fine-grained information directly in the segmentation process. Our model is efficiently trained end-to-end on a graphics processing unit (GPU), in a single stage, exploiting the dense inference capabilities of fully CNNs. We performed comprehensive experiments over two publicly available datasets. First, we demonstrate a state-of-the-art performance on the IBSR dataset. Then, we report a large-scale multi-site evaluation over 1112 unregistered subject datasets acquired from 17 different sites (ABIDE dataset), with ages ranging from 7 to 64 years, showing that our method is robust to various acquisition protocols, demographics and clinical factors. Our method yielded segmentations that are highly consistent with a standard atlas-based approach, while running in a fraction of the time needed by atlas-based methods and avoiding registration/normalization steps. This makes it convenient for massive multi-site neuroanatomical imaging studies. To the best of our knowledge, our work is the first to study subcortical structure segmentation on such large-scale and heterogeneous data. Copyright © 2017 Elsevier Inc. All rights reserved.
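
    The motivation for small kernels can be made concrete with the standard receptive-field formula for stacked stride-1 convolutions: n layers of kernel size k see n(k − 1) + 1 voxels per axis, so depth substitutes for kernel size at a much lower parameter cost. A minimal sketch (illustrative only, not the paper's architecture):

```python
def receptive_field(num_layers, kernel=3):
    """Receptive field per axis of a stack of stride-1 convolutions."""
    return num_layers * (kernel - 1) + 1

def params_per_filter(kernel, in_channels):
    """Weight count of one 3D filter of size kernel^3 over in_channels."""
    return kernel ** 3 * in_channels

# Two stacked 3x3x3 layers cover a 5x5x5 volume, like a single 5x5x5
# kernel, but with 2 * 27 * C weights instead of 125 * C.
```
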

  8. Benchmark for license plate character segmentation

    NASA Astrophysics Data System (ADS)

    Gonçalves, Gabriel Resende; da Silva, Sirlene Pio Gomes; Menotti, David; Shwartz, William Robson

    2016-09-01

    Automatic license plate recognition (ALPR) has been the focus of much research in recent years. In general, ALPR is divided into the following problems: detection of on-track vehicles, license plate detection, segmentation of license plate characters, and optical character recognition (OCR). Even though commercial solutions are available for controlled acquisition conditions, e.g., the entrance of a parking lot, ALPR is still an open problem when dealing with data acquired from uncontrolled environments, such as roads and highways when relying only on imaging sensors. Due to the multiple orientations and scales of the license plates captured by the camera, a very challenging task of the ALPR is the license plate character segmentation (LPCS) step, because its effectiveness is required to be (near) optimal to achieve a high recognition rate by the OCR. To tackle the LPCS problem, this work proposes a benchmark composed of a dataset designed to focus specifically on the character segmentation step of the ALPR within an evaluation protocol. Furthermore, we propose the Jaccard-centroid coefficient, an evaluation measure more suitable than the Jaccard coefficient regarding the location of the bounding box within the ground-truth annotation. The dataset is composed of 2,000 Brazilian license plates comprising 14,000 alphanumeric symbols and their corresponding bounding box annotations. We also present a straightforward approach to perform LPCS efficiently. Finally, we provide an experimental evaluation for the dataset based on five LPCS approaches and demonstrate the importance of character segmentation for achieving an accurate OCR.
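
    The abstract does not give the closed form of the Jaccard-centroid coefficient, but it extends the plain Jaccard coefficient (intersection over union) of a detected bounding box and its ground-truth annotation. A sketch of that baseline quantity for axis-aligned boxes:

```python
def jaccard_boxes(a, b):
    """Jaccard (IoU) coefficient of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))  # intersection width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))  # intersection height
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

    The paper's variant additionally weights how well the detected box's centroid sits within the ground-truth annotation; that term is not reproduced here.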

  9. Integrating atlas and graph cut methods for right ventricle blood-pool segmentation from cardiac cine MRI

    NASA Astrophysics Data System (ADS)

    Dangi, Shusil; Linte, Cristian A.

    2017-03-01

    Segmentation of the right ventricle from cardiac MRI can be used to build pre-operative anatomical heart models to precisely identify regions of interest during minimally invasive therapy. Furthermore, many functional parameters of the right heart, such as right ventricular volume, ejection fraction, myocardial mass and thickness, can also be assessed from the segmented images. To obtain an accurate and computationally efficient segmentation of the right ventricle from cardiac cine MRI, we propose a segmentation algorithm formulated as an energy minimization problem in a graph. A shape prior, obtained by propagating labels from an average atlas using affine registration, is incorporated into the graph framework to overcome problems in ill-defined image regions. The optimal segmentation, corresponding to the labeling with the minimum-energy configuration of the graph, is obtained via graph-cuts and is iteratively refined to produce the final right ventricle blood pool segmentation. We quantitatively compare the segmentation results obtained from our algorithm to the provided gold-standard expert manual segmentation for 16 cine-MRI datasets available through the MICCAI 2012 Cardiac MR Right Ventricle Segmentation Challenge according to several similarity metrics, including Dice coefficient, Jaccard coefficient, Hausdorff distance, and mean absolute distance error.
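
    The similarity metrics named above are straightforward to compute on binary masks represented as sets of pixel coordinates; a minimal sketch (mean absolute distance omitted for brevity):

```python
import math

def dice(a, b):
    """Dice coefficient of two pixel sets."""
    return 2 * len(a & b) / (len(a) + len(b))

def jaccard(a, b):
    """Jaccard coefficient (intersection over union) of two pixel sets."""
    return len(a & b) / len(a | b)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets."""
    directed = lambda s, t: max(min(math.dist(p, q) for q in t) for p in s)
    return max(directed(a, b), directed(b, a))
```
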

  10. A new user-assisted segmentation and tracking technique for an object-based video editing system

    NASA Astrophysics Data System (ADS)

    Yu, Hong Y.; Hong, Sung-Hoon; Lee, Mike M.; Choi, Jae-Gark

    2004-03-01

    This paper presents a semi-automatic segmentation method which can be used to generate video object planes (VOPs) for object-based coding schemes and multimedia authoring environments. Semi-automatic segmentation can be considered a user-assisted segmentation technique. A user can initially mark objects of interest around the object boundaries, and the user-guided, selected objects are then continuously separated from the unselected areas through time evolution in the image sequences. The proposed segmentation method consists of two processing steps: partially manual intra-frame segmentation and fully automatic inter-frame segmentation. The intra-frame segmentation incorporates user assistance to define the complete visual object of interest to be segmented and determines a precise object boundary. The inter-frame segmentation involves boundary and region tracking to obtain temporal coherence of the moving object based on the object boundary information of the previous frame. The proposed method shows stable, efficient results suitable for many digital video applications such as multimedia content authoring, content-based coding and indexing. Based on these results, we have developed an object-based video editing system with several convenient editing functions.

  11. Test of superplastically formed corrugated aluminum compression specimens with beaded webs

    NASA Technical Reports Server (NTRS)

    Davis, Randall C.; Royster, Dick M.; Bales, Thomas T.; James, William F.; Shinn, Joseph M., Jr.

    1991-01-01

    Corrugated wall sections provide a highly efficient structure for carrying compressive loads in aircraft and spacecraft fuselages. The superplastic forming (SPF) process offers a means to produce complex shells and panels with corrugated wall shapes. A study was made to investigate the feasibility of superplastically forming 7475-T6 aluminum sheet into a corrugated wall configuration and to demonstrate the structural integrity of the construction by testing. The corrugated configuration selected has beaded web segments separating curved-cap segments. Eight test specimens were fabricated. Two specimens were simply a single sheet of aluminum superplastically formed to a beaded-web, curved-cap corrugation configuration. Six specimens were single-sheet corrugations modified by adhesive bonding additional sheet material to selectively reinforce the curved-cap portion of the corrugation. The specimens were tested to failure by crippling in end compression at room temperature.

  12. Peculiarities of the third natural frequency vibrations of a cantilever for the improvement of energy harvesting.

    PubMed

    Ostasevicius, Vytautas; Janusas, Giedrius; Milasauskaite, Ieva; Zilys, Mindaugas; Kizauskiene, Laura

    2015-05-28

    This paper focuses on several aspects extending the dynamical efficiency of a cantilever beam vibrating in the third mode. A few ways of producing this mode stimulation, namely vibro-impact or forced excitation, as well as its application in energy harvesting devices, are proposed. The paper presents numerical and experimental analyses of novel structural dynamics effects along with an optimal configuration of the cantilever beam. The peculiarities of a cantilever beam vibrating in the third mode are related to the significant increase in the level of deformations, capable of extracting significant additional amounts of energy compared to a conventional harvester vibrating in the first mode. Two types of piezoelectric vibrating energy harvester (PVEH) prototype are analysed in this paper: the first without electrode segmentation, and the second with electrodes segmented at the strain nodes of the third vibration mode to achieve effective operation at the third resonant frequency. The results of this research revealed that the voltage generated by any segment of the segmented PVEH prototype excited at the third resonant frequency demonstrated a 3.4-4.8-fold increase in comparison with the non-segmented prototype. Simultaneously, the efficiency of the energy harvester prototype also increased from 16% to 90% at lower resonant frequencies. The insights presented in the paper may inform the development and fabrication of advanced piezoelectric energy harvesters able to generate a considerably increased amount of electrical energy independently of the frequency of kinematic excitation.

  13. Generation of chemical movies: FT-IR spectroscopic imaging of segmented flows.

    PubMed

    Chan, K L Andrew; Niu, X; deMello, A J; Kazarian, S G

    2011-05-01

    We have previously demonstrated that FT-IR spectroscopic imaging can be used as a powerful, label-free detection method for studying laminar flows. However, to date, the speed of image acquisition has been too slow for the efficient detection of moving droplets within segmented flow systems. In this paper, we demonstrate the extraction of fast FT-IR images with acquisition times of 50 ms. This approach allows efficient interrogation of segmented flow systems where aqueous droplets move at a speed of 2.5 mm/s. Consecutive FT-IR images separated by 120 ms intervals allow the generation of chemical movies at eight frames per second. The technique has been applied to the study of microfluidic systems containing moving droplets of water in oil and droplets of protein solution in oil. The presented work demonstrates the feasibility of the use of FT-IR imaging to study dynamic systems with subsecond temporal resolution.

  14. Fast globally optimal segmentation of 3D prostate MRI with axial symmetry prior.

    PubMed

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2013-01-01

    We propose a novel global optimization approach to segmenting a given 3D prostate T2w magnetic resonance (MR) image, which enforces the inherent axial symmetry of the prostate shape and simultaneously performs a sequence of 2D axial slice-wise segmentations with a global 3D coherence prior. We show that the proposed challenging combinatorial optimization problem can be solved globally and exactly by means of convex relaxation. In this regard, we introduce a novel coupled continuous max-flow model, which is dual to the studied convex relaxed optimization formulation and leads to an efficient augmented multiplier algorithm based on modern convex optimization theory. Moreover, the new continuous max-flow based algorithm was implemented on GPUs to achieve a substantial improvement in computation. Experimental results using public and in-house datasets demonstrate great advantages of the proposed method in terms of both accuracy and efficiency.

  15. Segmentation of white blood cells and comparison of cell morphology by linear and naïve Bayes classifiers.

    PubMed

    Prinyakupt, Jaroonrut; Pluempitiwiriyawej, Charnchai

    2015-06-30

    Blood smear microscopic images are routinely investigated by haematologists to diagnose most blood diseases. However, the task is quite tedious and time consuming. An automatic detection and classification of white blood cells within such images can accelerate the process tremendously. In this paper, we propose a system to locate white blood cells within microscopic blood smear images, segment them into nucleus and cytoplasm regions, extract suitable features and finally, classify them into five types: basophil, eosinophil, neutrophil, lymphocyte and monocyte. Two sets of blood smear images were used in this study's experiments. Dataset 1, collected from Rangsit University, consisted of normal peripheral blood slides under a light microscope with 100× magnification; 555 images with 601 white blood cells were captured by a Nikon DS-Fi2 high-definition color camera and saved in JPG format of size 960 × 1,280 pixels at 15 pixels per 1 μm resolution. In dataset 2, 477 cropped white blood cell images were downloaded from CellaVision.com. They are in JPG format of size 360 × 363 pixels. The resolution is estimated to be 10 pixels per 1 μm. The proposed system comprises a pre-processing step, nucleus segmentation, cell segmentation, feature extraction, feature selection and classification. The segmentation algorithm exploits white blood cell morphological properties and the calibrated size of a real cell relative to image resolution. The segmentation process combines thresholding, morphological operations and ellipse curve fitting. Consequently, several features were extracted from the segmented nucleus and cytoplasm regions. Prominent features were then chosen by a greedy search algorithm called sequential forward selection. Finally, with a set of selected prominent features, both linear and naïve Bayes classifiers were applied for performance comparison. This system was tested on normal peripheral blood smear slide images from two datasets.
    Two sets of comparison were performed: segmentation and classification. The automatically segmented results were compared to those obtained manually by a haematologist. It was found that the proposed method is consistent and coherent in both datasets, with Dice similarities of 98.9% and 91.6% for the average segmented nucleus and cell regions, respectively. Furthermore, the overall correct classification rate is about 98% and 94% for the linear and naïve Bayes models, respectively. The proposed system, based on normal white blood cell morphology and its characteristics, was applied to two different datasets. The calibrated segmentation process is fast, robust, efficient and coherent on both datasets. Meanwhile, the classification of normal white blood cells into five types shows high sensitivity in both linear and naïve Bayes models, with slightly better results for the linear classifier.
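
    The thresholding step in such a pipeline is often an automatic histogram-based rule; as a hedged stand-in (the abstract does not name the exact threshold method used), Otsu's method picks the gray level that maximizes between-class variance:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: threshold that maximizes between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels):
        w0 += hist[t]            # pixels at or below t (class 0)
        if w0 == 0:
            continue
        w1 = total - w0          # pixels above t (class 1)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```
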

  16. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images.

    PubMed

    Boix, Macarena; Cantó, Begoña

    2013-04-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis; in particular, blood cells can be segmented with it. To that end, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate noise and prepare the image for suitable segmentation. In wavelet denoising, we determine the best wavelet as the one that yields a segmentation with the largest area in the cell. We study different wavelet families and conclude that the wavelet db1 is the best; it can also serve future work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on selected blood cell images.
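
    The db1 wavelet singled out above is the Haar wavelet; a one-level 1-D Haar transform with soft thresholding of the detail coefficients illustrates the denoising step (the 2-D image case applies the same transform along rows and columns; even-length input assumed):

```python
import math

def haar_forward(x):
    """One-level Haar (db1) transform: approximation and detail coeffs."""
    s = math.sqrt(2)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_forward."""
    s = math.sqrt(2)
    x = []
    for a, d in zip(approx, detail):
        x += [(a + d) / s, (a - d) / s]
    return x

def soft(coeffs, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(x, t):
    """Suppress small detail coefficients, keep the approximation."""
    a, d = haar_forward(x)
    return haar_inverse(a, soft(d, t))
```
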

  17. Sample Training Based Wildfire Segmentation by 2D Histogram θ-Division with Minimum Error

    PubMed Central

    Dong, Erqian; Sun, Mingui; Jia, Wenyan; Zhang, Dengyi; Yuan, Zhiyong

    2013-01-01

    A novel wildfire segmentation algorithm is proposed with the help of sample training based 2D histogram θ-division and minimum error. Based on minimum error principle and 2D color histogram, the θ-division methods were presented recently, but application of prior knowledge on them has not been explored. For the specific problem of wildfire segmentation, we collect sample images with manually labeled fire pixels. Then we define the probability function of error division to evaluate θ-division segmentations, and the optimal angle θ is determined by sample training. Performances in different color channels are compared, and the suitable channel is selected. To further improve the accuracy, the combination approach is presented with both θ-division and other segmentation methods such as GMM. Our approach is tested on real images, and the experiments prove its efficiency for wildfire segmentation. PMID:23878526

  18. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.

    1999-01-01

    A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items are determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.
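
    The patent's optimization algorithms are not specified in the abstract; as a purely illustrative stand-in for the packing-density objective, first-fit-decreasing is a classic bin-packing heuristic that places segmented items into capacity-limited containers:

```python
def first_fit_decreasing(items, capacity):
    """Pack item sizes into as few containers of `capacity` as possible.

    Illustrative heuristic only; not the patented optimization process.
    """
    remaining = []   # free capacity per open container
    packing = []     # item sizes placed in each container
    for size in sorted(items, reverse=True):
        for i, free in enumerate(remaining):
            if size <= free:
                remaining[i] -= size
                packing[i].append(size)
                break
        else:  # no open container fits: open a new one
            remaining.append(capacity - size)
            packing.append([size])
    return packing
```
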

  19. Efficient brain lesion segmentation using multi-modality tissue-based feature selection and support vector machines.

    PubMed

    Fiot, Jean-Baptiste; Cohen, Laurent D; Raniga, Parnesh; Fripp, Jurgen

    2013-09-01

    Support vector machines (SVM) are machine learning techniques that have been used for segmentation and classification of medical images, including segmentation of white matter hyper-intensities (WMH). Current approaches using SVM for WMH segmentation extract features from the brain and classify them, followed by complex post-processing steps to remove false positives. The method presented in this paper combines advanced pre-processing, tissue-based feature selection and SVM classification to obtain efficient and accurate WMH segmentation. Features from 125 patients, generated from up to four MR modalities [T1-w, T2-w, proton-density and fluid attenuated inversion recovery (FLAIR)], differing neighbourhood sizes and the use of multi-scale features were compared. We found that although using all four modalities gave the best overall classification (average Dice scores of 0.54 ± 0.12, 0.72 ± 0.06 and 0.82 ± 0.06, respectively, for small, moderate and severe lesion loads), this was not significantly different (p = 0.50) from using just the T1-w and FLAIR sequences (Dice scores of 0.52 ± 0.13, 0.71 ± 0.08 and 0.81 ± 0.07). Furthermore, there was a negligible difference between using 5 × 5 × 5 and 3 × 3 × 3 features (p = 0.93). Finally, we show that careful consideration of features and pre-processing techniques not only saves storage space and computation time but also leads to more efficient classification, which outperforms the one based on all features with post-processing. Copyright © 2013 John Wiley & Sons, Ltd.

  20. Automated Segmentation of High-Resolution Photospheric Images of Active Regions

    NASA Astrophysics Data System (ADS)

    Yang, Meng; Tian, Yu; Rao, Changhui

    2018-02-01

    The development of ground-based, large-aperture solar telescopes with adaptive optics (AO) has steadily increased resolving ability, so more accurate sunspot identification and characterization are required. In this article, we have developed a set of automated segmentation methods for high-resolution solar photospheric images. Firstly, a local-intensity-clustering level-set method is applied to roughly separate solar granulation and sunspots. Then reinitialization-free level-set evolution is adopted to adjust the boundaries of the photospheric patch; an adaptive intensity threshold is used to discriminate between umbra and penumbra; light bridges are selected according to their regional properties from candidates produced by morphological operations. The proposed method is applied to high-resolution solar TiO 705.7-nm images taken by the 151-element AO system and the Ground-Layer Adaptive Optics prototype system at the 1-m New Vacuum Solar Telescope of the Yunnan Observatory. Experimental results show that the method achieves satisfactory robustness and efficiency with low computational cost on high-resolution images. The method can also be applied to full-disk images, and the calculated sunspot areas correlate well with the data given by the National Oceanic and Atmospheric Administration (NOAA).
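
    The umbra/penumbra discrimination can be pictured as classifying each pixel by its intensity relative to the quiet-Sun mean; the threshold fractions below are hypothetical placeholders, not the paper's adaptively fitted values:

```python
def classify_pixel(intensity, quiet_sun_mean, t_umbra=0.5, t_penumbra=0.85):
    """Label a photospheric pixel by intensity relative to the quiet Sun.

    Threshold fractions are illustrative assumptions, not fitted values.
    """
    r = intensity / quiet_sun_mean
    if r < t_umbra:
        return "umbra"
    if r < t_penumbra:
        return "penumbra"
    return "granulation"
```
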

  1. Highly efficient retinal metabolism in cones

    PubMed Central

    Miyazono, Sadaharu; Shimauchi-Matsukawa, Yoshie; Tachibanaki, Shuji; Kawamura, Satoru

    2008-01-01

    After bleaching of visual pigment in vertebrate photoreceptors, all-trans retinal is reduced to all-trans retinol by retinol dehydrogenases (RDHs). We investigated this reaction in purified carp rods and cones, and we found that the reducing activity toward all-trans retinal in the outer segment (OS) of cones is >30 times higher than that of rods. The high activity of RDHs was attributed to the high content of RDH8 in cones. In the inner segment (IS) of both rods and cones, RDH8L2 and RDH13 were found to be the major enzymes among RDH family proteins. We further found a previously undescribed and effective pathway to convert 11-cis retinol to 11-cis retinal in cones: this oxidative conversion did not require NADP+ and instead was coupled with reduction of all-trans retinal to all-trans retinol. The activity was >50 times more effective than the oxidizing activity of RDHs that require NADP+. These highly effective reactions of removal of all-trans retinal by RDH8 and production of 11-cis retinal by the coupling reaction are probably the underlying mechanisms that ensure effective visual pigment regeneration in cones, which function under much brighter light conditions than rods. PMID:18836074

  2. VAR2CSA signatures of high Plasmodium falciparum parasitemia in the placenta.

    PubMed

    Rovira-Vallbona, Eduard; Monteiro, Isadora; Bardají, Azucena; Serra-Casas, Elisa; Neafsey, Daniel E; Quelhas, Diana; Valim, Clarissa; Alonso, Pedro; Dobaño, Carlota; Ordi, Jaume; Menéndez, Clara; Mayor, Alfredo

    2013-01-01

    Plasmodium falciparum infected erythrocytes (IE) accumulate in the placenta through the interaction between Duffy-binding like (DBL) domains of parasite-encoded ligand VAR2CSA and chondroitin sulphate-A (CSA) receptor. Polymorphisms in these domains, including DBL2X and DBL3X, may affect their antigenicity or CSA-binding affinity, eventually increasing parasitemia and its adverse effects on pregnancy outcomes. A total of 373 DBL2X and 328 DBL3X sequences were obtained from transcripts of 20 placental isolates infecting Mozambican women, resulting in 176 DBL2X and 191 DBL3X unique sequences at the protein level. Sequence alignments were divided in segments containing combinations of correlated polymorphisms and the association of segment sequences with placental parasite density was tested using Bonferroni corrected regression models, taking into consideration the weight of each sequence in the infection. Three DBL2X and three DBL3X segments contained signatures of high parasite density (P<0.003) that were highly prevalent in the parasite population (49-91%). Identified regions included a flexible loop that contributes to DBL3X-CSA interaction and two DBL3X motifs with evidence of positive natural selection. Limited antibody responses against signatures of high parasite density among malaria-exposed pregnant women could not explain the increased placental parasitemia. These results suggest that a higher binding efficiency to CSA rather than reduced antigenicity might provide a biological advantage to parasites with high parasite density signatures in VAR2CSA. Sequences contributing to high parasitemia may be critical for the functional characterization of VAR2CSA and the development of tools against placental malaria.

  3. Integrating Compact Constraint and Distance Regularization with Level Set for Hepatocellular Carcinoma (HCC) Segmentation on Computed Tomography (CT) Images

    NASA Astrophysics Data System (ADS)

    Gui, Luying; He, Jian; Qiu, Yudong; Yang, Xiaoping

    2017-01-01

    This paper presents a variational level set approach to segment lesions with compact shapes on medical images. In this study, we address the segmentation of hepatocellular carcinomas, which are usually of various shapes, variable intensities, and weak boundaries. An efficient constraint, called the isoperimetric constraint, which describes the compactness of shapes, is applied in this method. In addition, in order to ensure precise segmentation and stable movement of the level set, a distance regularization is also implemented in the proposed variational framework. Our method is applied to segment various hepatocellular carcinoma regions on Computed Tomography images with promising results. Comparison results also show that the proposed method is more accurate than two other approaches.
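
    The isoperimetric constraint rests on the isoperimetric quotient 4πA/P², which equals 1 for a circle and decreases for elongated or irregular shapes; a compact lesion contour therefore keeps the quotient high. A minimal computation:

```python
import math

def isoperimetric_quotient(area, perimeter):
    """Compactness measure 4*pi*A / P^2: 1 for a circle, < 1 otherwise."""
    return 4 * math.pi * area / perimeter ** 2

# Unit circle: area pi, perimeter 2*pi  ->  quotient exactly 1.
circle = isoperimetric_quotient(math.pi, 2 * math.pi)
# Thin 10 x 1 rectangle: far less compact than a circle.
strip = isoperimetric_quotient(10 * 1, 2 * (10 + 1))
```
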

  4. Segmentation by fusion of histogram-based k-means clusters in different color spaces.

    PubMed

    Mignotte, Max

    2008-05-01

    This paper presents a new, simple, and efficient segmentation approach, based on a fusion procedure which aims at combining several segmentation maps associated with simpler partition models in order to obtain a more reliable and accurate segmentation result. The different label fields to be fused in our application are given by the same, simple (K-means based) clustering technique applied to an input image expressed in different color spaces. Our fusion strategy combines these segmentation maps with a final clustering procedure that uses, as input features, the local histograms of the class labels previously estimated and associated with each site across all initial partitions. This fusion framework remains simple to implement, fast, general enough to be applied to various computer vision applications (e.g., motion detection and segmentation), and has been successfully applied on the Berkeley image database. The experiments reported in this paper illustrate the potential of this approach compared to the state-of-the-art segmentation methods recently proposed in the literature.

  5. Discriminative parameter estimation for random walks segmentation.

    PubMed

    Baudin, Pierre-Yves; Goodman, Danny; Kumar, Puneet; Azzabou, Noura; Carlier, Pierre G; Paragios, Nikos; Kumar, M Pawan

    2013-01-01

    The Random Walks (RW) algorithm is one of the most efficient and easy-to-use probabilistic segmentation methods. By combining contrast terms with prior terms, it provides accurate segmentations of medical images in a fully automated manner. However, one of the main drawbacks of using the RW algorithm is that its parameters have to be hand-tuned. In this work, we propose a novel discriminative learning framework that estimates the parameters using a training dataset. The main challenge we face is that the training samples are not fully supervised. Specifically, they provide a hard segmentation of the images, instead of a probabilistic segmentation. We overcome this challenge by treating the optimal probabilistic segmentation that is compatible with the given hard segmentation as a latent variable. This allows us to employ the latent support vector machine formulation for parameter estimation. We show that our approach significantly outperforms the baseline methods on a challenging dataset consisting of real clinical 3D MRI volumes of skeletal muscles.

  6. Comprehensive Detection of Gas Plumes from Multibeam Water Column Images with Minimisation of Noise Interferences

    PubMed Central

    Zhao, Jianhu; Zhang, Hongmei; Wang, Shiqi

    2017-01-01

    Multibeam echosounder systems (MBES) can record backscatter strengths of gas plumes in water column (WC) images, which may indicate the possible occurrence of gas at certain depths. Manual or automatic detection is generally adopted for finding gas plumes, but frequently results in low efficiency and high false detection rates because the WC images are polluted by noise. To improve the efficiency and reliability of the detection, a comprehensive detection method is proposed in this paper. In the proposed method, the characteristics of WC background noise are first analyzed and described. Then, mean-standard-deviation threshold segmentation is applied separately to denoise the time-angle and depth-angle images, an intersection operation is performed on the two segmented images to further weaken noise in the WC data, and the gas plumes are detected from the intersection image by a morphological constraint. The proposed method was tested in shallow-water and deep-water experiments. In these experiments, the detections were conducted automatically and higher correct detection rates than the traditional methods were achieved. The performance of the proposed method is analyzed and discussed. PMID:29186014
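
    The denoising-by-intersection step lends itself to a short sketch: threshold each view at its mean plus a multiple of its standard deviation, intersect the two binary masks, and clean the result with a morphological opening. The exact threshold form, the 3x3 structuring element, and all names below are assumptions for illustration, not the published parameters.

```python
import numpy as np

def threshold_mask(img, a=2.0):
    # keep pixels brighter than mean + a*std (assumed form of the
    # "mean standard deviation" threshold; a is a tuning constant)
    return img > img.mean() + a * img.std()

def binary_erode(mask):
    # 3x3 erosion: a pixel survives only if its whole neighborhood is set
    m = np.pad(mask, 1)
    out = np.ones_like(mask, dtype=bool)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= m[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def binary_dilate(mask):
    # 3x3 dilation: a pixel is set if any neighbor is set
    m = np.pad(mask, 1)
    out = np.zeros_like(mask, dtype=bool)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= m[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def detect_plumes(time_angle, depth_angle, a=2.0):
    # intersecting the two thresholded views suppresses noise that
    # appears in only one of them; opening removes residual speckle
    joint = threshold_mask(time_angle, a) & threshold_mask(depth_angle, a)
    return binary_dilate(binary_erode(joint))
```

    A bright region present in both synthetic views survives the pipeline, while independent noise in either view is almost entirely removed.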

  7. Dynamic programming-based hot spot identification approach for pedestrian crashes.

    PubMed

    Medury, Aditya; Grembek, Offer

    2016-08-01

    Network screening techniques are widely used by state agencies to identify locations with high collision concentration, also referred to as hot spots. However, most of the research in this regard has focused on identifying highway segments that are of concern to automobile collisions. In comparison, pedestrian hot spot detection has typically focused on analyzing pedestrian crashes in specific locations, such as at/near intersections, mid-blocks, and/or other crossings, as opposed to long stretches of roadway. In this context, the efficiency of some of the widely used network screening methods has not been tested. Hence, in order to address this issue, a dynamic programming-based hot spot identification approach is proposed which provides efficient hot spot definitions for pedestrian crashes. The proposed approach is compared with the sliding window method and an intersection buffer-based approach. The results reveal that the dynamic programming method generates more hot spots with a higher number of crashes, while providing small hot spot segment lengths. In comparison, the sliding window method is shown to suffer from shortcomings due to a first-come-first-serve approach vis-à-vis hot spot identification and a fixed hot spot window length assumption. Copyright © 2016 Elsevier Ltd. All rights reserved.
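
    The abstract does not give the exact formulation, but the contrast with a fixed-length sliding window can be illustrated by a small dynamic program that picks at most `max_spots` disjoint windows of flexible length (up to `max_len` unit segments) so as to maximize the crashes they cover. The objective and all names below are our assumptions for illustration only.

```python
import numpy as np

def dp_hotspots(counts, max_len, max_spots):
    # counts[i]: crashes in unit segment i; choose <= max_spots disjoint
    # windows, each of length <= max_len, maximizing covered crashes
    n = len(counts)
    pre = np.concatenate([[0], np.cumsum(counts)])   # prefix sums
    best = np.zeros((n + 1, max_spots + 1))
    choice = {}
    for i in range(1, n + 1):
        for k in range(1, max_spots + 1):
            best[i][k] = best[i - 1][k]              # segment i uncovered
            choice[(i, k)] = None
            for L in range(1, min(max_len, i) + 1):  # window ends at i
                cand = best[i - L][k - 1] + pre[i] - pre[i - L]
                if cand > best[i][k]:
                    best[i][k] = cand
                    choice[(i, k)] = L
    # backtrack the chosen windows
    spots, i, k = [], n, max_spots
    while i > 0 and k > 0:
        L = choice.get((i, k))
        if L is None:
            i -= 1
        else:
            spots.append((i - L, i))                 # half-open [start, end)
            i, k = i - L, k - 1
    return best[n][max_spots], spots[::-1]
```

    Unlike a first-come-first-serve sliding window, the dynamic program considers all window placements jointly before committing to any of them.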

  8. Motor Impairment Evaluation for Upper Limb in Stroke Patients on the Basis of a Microsensor

    ERIC Educational Resources Information Center

    Huang, Shuai; Luo, Chun; Ye, Shiwei; Liu, Fei; Xie, Bin; Wang, Caifeng; Yang, Li; Huang, Zhen; Wu, Jiankang

    2012-01-01

    There has been an urgent need for an effective and efficient upper limb rehabilitation method for poststroke patients. We present a Micro-Sensor-based Upper Limb rehabilitation System for poststroke patients. The wearable motion capture units are attached to upper limb segments embedded in the fabric of garments. The body segment orientation…

  9. A region-based segmentation of tumour from brain CT images using nonlinear support vector machine classifier.

    PubMed

    Nanthagopal, A Padma; Rajamony, R Sukanesh

    2012-07-01

    The proposed system provides new textural information for segmenting tumours, efficiently, accurately, and with less computational time, from benign and malignant tumour images, especially in smaller dimensions of tumour regions of computed tomography (CT) images. Region-based segmentation of tumour from brain CT image data is an important but time-consuming task performed manually by medical experts. The objective of this work is to segment brain tumour from CT images using combined grey and texture features with new edge features and a nonlinear support vector machine (SVM) classifier. The selected optimal features are used to model and train the nonlinear SVM classifier to segment the tumour from computed tomography images and the segmentation accuracies are evaluated for each slice of the tumour image. The method is applied to real data of 80 benign and malignant tumour images. The results are compared with the radiologist-labelled ground truth. Quantitative analysis between ground truth and the segmented tumour is presented in terms of segmentation accuracy and the overlap similarity measure (Dice metric). From the analysis and performance measures such as segmentation accuracy and Dice metric, it is inferred that better segmentation accuracy and higher Dice metric are achieved with the normalized cut segmentation method than with the fuzzy c-means clustering method.

  10. Breast density quantification with cone-beam CT: A post-mortem study

    PubMed Central

    Johnson, Travis; Ding, Huanjun; Le, Huy Q.; Ducote, Justin L.; Molloi, Sabee

    2014-01-01

    Forty post-mortem breasts were imaged with a flat-panel based cone-beam x-ray CT system at 50 kVp. The feasibility of breast density quantification has been investigated using standard histogram thresholding and an automatic segmentation method based on the fuzzy c-means algorithm (FCM). The breasts were chemically decomposed into water, lipid, and protein immediately after image acquisition was completed. The percent fibroglandular volume (%FGV) from chemical analysis was used as the gold standard for breast density comparison. Both image-based segmentation techniques showed good precision in breast density quantification with high linear coefficients between the right and left breast of each pair. When comparing with the gold standard using %FGV from chemical analysis, Pearson’s r-values were estimated to be 0.983 and 0.968 for the FCM clustering and the histogram thresholding techniques, respectively. The standard error of the estimate (SEE) was also reduced from 3.92% to 2.45% by applying the automatic clustering technique. The results of the postmortem study suggested that breast tissue can be characterized in terms of water, lipid and protein contents with high accuracy by using chemical analysis, which offers a gold standard for breast density studies comparing different techniques. In the investigated image segmentation techniques, the FCM algorithm had high precision and accuracy in breast density quantification. In comparison to conventional histogram thresholding, it was more efficient and reduced inter-observer variation. PMID:24254317
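
    Fuzzy c-means on the voxel intensity distribution, followed by a hard assignment to the brighter (fibroglandular) class, can be sketched as below. The two-class setup and the membership exponent m = 2 are conventional FCM choices, not details taken from the paper, and the function names are illustrative.

```python
import numpy as np

def fcm(x, k=2, m=2.0, iters=50, seed=0):
    # fuzzy c-means on 1-D intensities x; returns memberships (n, k)
    # and cluster centers (k,)
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), k))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        w = u ** m
        centers = (w * x[:, None]).sum(0) / w.sum(0)        # weighted means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12   # distances
        u = 1.0 / (d ** (2.0 / (m - 1.0)))                  # membership update
        u /= u.sum(axis=1, keepdims=True)
    return u, centers

def percent_fgv(voxels):
    # percent fibroglandular volume: fraction of voxels whose strongest
    # membership is the brighter cluster (fibroglandular assumed brighter)
    u, centers = fcm(voxels)
    fib = np.argmax(centers)
    labels = np.argmax(u, axis=1)
    return 100.0 * np.mean(labels == fib)
```

    On a synthetic mixture of 300 "fat" and 100 "fibroglandular" intensities the estimate lands at the true 25% fibroglandular fraction.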

  11. Automatic Cell Segmentation in Fluorescence Images of Confluent Cell Monolayers Using Multi-object Geometric Deformable Model.

    PubMed

    Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L

    2013-03-13

    With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and principal curvature based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps and gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.

  12. Optimization of the short-circuit current in an InP nanowire array solar cell through opto-electronic modeling.

    PubMed

    Chen, Yang; Kivisaari, Pyry; Pistol, Mats-Erik; Anttu, Nicklas

    2016-09-23

    InP nanowire arrays with axial p-i-n junctions are promising devices for next-generation photovoltaics, with a demonstrated efficiency of 13.8%. However, the short-circuit current in such arrays does not match their absorption performance. Here, through combined optical and electrical modeling, we study how the absorption of photons and separation of the resulting photogenerated electron-hole pairs define and limit the short-circuit current in the nanowires. We identify how photogenerated minority carriers in the top n segment (i.e. holes) diffuse to the ohmic top contact where they recombine without contributing to the short-circuit current. In our modeling, such contact recombination can lead to a 60% drop in the short-circuit current. To hinder such hole diffusion, we include a gradient doping profile in the n segment to create a front surface barrier. This approach leads to a modest 5% increase in the short-circuit current, limited by Auger recombination with increased doping. A more efficient approach is to switch the n segment to a material with a higher band gap, like GaP. Then, a much smaller number of holes is photogenerated in the n segment, strongly limiting the amount that can diffuse and disappear into the top contact. For a 500 nm long top segment, the GaP approach leads to a 50% higher short-circuit current than with an InP top segment. Such a long top segment could facilitate the fabrication and contacting of nanowire array solar cells. Such design schemes for managing minority carriers could open the door to higher performance in single- and multi-junction nanowire-based solar cells.

  13. Designing image segmentation studies: Statistical power, sample size and reference standard quality.

    PubMed

    Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C

    2017-12-01

    Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
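
    The paper derives its own formula; as a rough illustration of the kind of calculation involved, the sketch below sizes a paired comparison of two algorithms' mean accuracy using the standard normal-approximation formula n = ((z_{1-a/2} + z_{1-b}) * sd_diff / delta)^2, where delta is the detectable mean accuracy difference and sd_diff the standard deviation of per-subject accuracy differences. This is a generic textbook computation, not the authors' derivation.

```python
from math import erf, sqrt, ceil

def z_quantile(p, lo=-10.0, hi=10.0):
    # inverse standard normal CDF by bisection (CDF built from erf)
    phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def paired_sample_size(delta, sd_diff, alpha=0.05, power=0.8):
    # subjects needed to detect a mean accuracy difference `delta`
    # between two algorithms, given the SD of per-subject differences
    za = z_quantile(1.0 - alpha / 2.0)
    zb = z_quantile(power)
    return ceil(((za + zb) * sd_diff / delta) ** 2)
```

    For example, detecting a 2% accuracy difference with a 5% SD of per-subject differences at alpha = 0.05 and 80% power requires 50 subjects.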

  14. Experimental Investigation of Heat Pipe Startup Under Reflux Mode

    NASA Technical Reports Server (NTRS)

    Ku, Jentung

    2018-01-01

    In the absence of body forces such as gravity, a heat pipe will start as soon as its evaporator temperature reaches the saturation temperature. If the heat pipe operates under a reflux mode in ground testing, the liquid puddle will fill the entire cross sectional area of the evaporator. Under this condition, the heat pipe may not start when the evaporator temperature reaches the saturation temperature. Instead, a superheat is required in order for the liquid to vaporize through nucleate boiling. The amount of superheat depends on several factors such as the roughness of the heat pipe internal surface and the gravity head. This paper describes an experimental investigation of the effect of gravity pressure head on the startup of a heat pipe under reflux mode. In this study, a heat pipe with internal axial grooves was placed in a vertical position with different tilt angles relative to the horizontal plane. Heat was applied to the evaporator at the bottom and cooling was provided to the condenser at the top. The liquid-flooded evaporator was divided into seven segments along the axial direction, and an electrical heater was attached to each evaporator segment. Heat was applied to individual heaters in various combinations and sequences. Other test variables included the condenser sink temperature and tilt angle. Test results show that as long as an individual evaporator segment was flooded with liquid initially, a superheat was required to vaporize the liquid in that segment. The amount of superheat required for liquid vaporization was a function of gravity pressure head imposed on that evaporator segment and the initial temperature of the heat pipe. The most efficient and effective way to start the heat pipe was to apply a heat load with a high heat flux to the lowest segment of the evaporator.

  15. A robust and fast active contour model for image segmentation with intensity inhomogeneity

    NASA Astrophysics Data System (ADS)

    Ding, Keyan; Weng, Guirong

    2018-04-01

    In this paper, a robust and fast active contour model is proposed for image segmentation in the presence of intensity inhomogeneity. By introducing the local image intensities fitting functions before the evolution of curve, the proposed model can effectively segment images with intensity inhomogeneity. And the computation cost is low because the fitting functions do not need to be updated in each iteration. Experiments have shown that the proposed model has a higher segmentation efficiency compared to some well-known active contour models based on local region fitting energy. In addition, the proposed model is robust to initialization, which allows the initial level set function to be a small constant function.

  16. Video-based noncooperative iris image segmentation.

    PubMed

    Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig

    2011-02-01

    In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.
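
    The "direct least squares fitting of ellipses" step refers to fitting an algebraic conic to boundary points. The sketch below uses a plain SVD null-space fit of a general conic rather than Fitzgibbon's ellipse-specific constraint matrix, which keeps it short; for exact ellipse points both recover the true conic. Names are illustrative.

```python
import numpy as np

def fit_conic(x, y):
    # least-squares conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    # with ||coef|| = 1; simplified stand-in for direct ellipse fitting
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]            # right singular vector of smallest singular value

def conic_residual(coef, x, y):
    # algebraic residual |D @ coef| at each point
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    return np.abs(D @ coef)
```

    For noisy pupil/limbic boundary points the ellipse-specific constraint matters more; this minimal version only checks afterwards (via the discriminant b^2 - 4ac < 0) that the fitted conic is an ellipse.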

  17. Knowledge-based low-level image analysis for computer vision systems

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.; Baxi, Himanshu; Ranganath, M. V.

    1988-01-01

    Two algorithms for entry-level image analysis and preliminary segmentation are proposed which are flexible enough to incorporate local properties of the image. The first algorithm involves pyramid-based multiresolution processing and a strategy to define and use interlevel and intralevel link strengths. The second algorithm, which is designed for selected window processing, extracts regions adaptively using local histograms. The preliminary segmentation and a set of features are employed as the input to an efficient rule-based low-level analysis system, resulting in suboptimal meaningful segmentation.

  18. Comparison of anatomy-based, fluence-based and aperture-based treatment planning approaches for VMAT

    NASA Astrophysics Data System (ADS)

    Rao, Min; Cao, Daliang; Chen, Fan; Ye, Jinsong; Mehta, Vivek; Wong, Tony; Shepard, David

    2010-11-01

    Volumetric modulated arc therapy (VMAT) has the potential to reduce treatment times while producing comparable or improved dose distributions relative to fixed-field intensity-modulated radiation therapy. In order to take full advantage of the VMAT delivery technique, one must select a robust inverse planning tool. The purpose of this study was to evaluate the effectiveness and efficiency of VMAT planning techniques of three categories: anatomy-based, fluence-based and aperture-based inverse planning. We have compared these techniques in terms of the plan quality, planning efficiency and delivery efficiency. Fourteen patients were selected for this study including six head-and-neck (HN) cases, and two cases each of prostate, pancreas, lung and partial brain. For each case, three VMAT plans were created. The first VMAT plan was generated based on the anatomical geometry. In the Elekta ERGO++ treatment planning system (TPS), segments were generated based on the beam's eye view (BEV) of the target and the organs at risk. The segment shapes were then exported to Pinnacle3 TPS followed by segment weight optimization and final dose calculation. The second VMAT plan was generated by converting optimized fluence maps (calculated by the Pinnacle3 TPS) into deliverable arcs using an in-house arc sequencer. The third VMAT plan was generated using the Pinnacle3 SmartArc IMRT module which is an aperture-based optimization method. All VMAT plans were delivered using an Elekta Synergy linear accelerator and the plan comparisons were made in terms of plan quality and delivery efficiency. The results show that for cases of little or modest complexity such as prostate, pancreas, lung and brain, the anatomy-based approach provides similar target coverage and critical structure sparing, but less conformal dose distributions as compared to the other two approaches. 
For more complex HN cases, the anatomy-based approach is not able to provide clinically acceptable VMAT plans, while highly conformal dose distributions were obtained using both aperture-based and fluence-based inverse planning techniques. The aperture-based approach provides better dose conformity than the fluence-based technique in complex cases.

  19. Transport of Escherichia coli in 25 m quartz sand columns

    NASA Astrophysics Data System (ADS)

    Lutterodt, G.; Foppen, J. W. A.; Maksoud, A.; Uhlenbrook, S.

    2011-01-01

    To help improve the prediction of bacteria travel distances in aquifers, laboratory experiments were conducted to measure the distance-dependent sticking efficiencies of two low-attaching Escherichia coli strains (UCFL-94 and UCFL-131). The experimental setup consisted of a 25 m long helical column with a diameter of 3.2 cm, packed with 99.1% pure-quartz sand saturated with a solution of magnesium sulfate and calcium chloride. Bacteria mass breakthrough at sampling distances ranging from 6 to 25.65 m was observed to quantify bacteria attachment over total transport distances (αL) and sticking efficiencies over large intra-column segments (αi) (> 5 m). Fractions of cells retained (Fi) in a column segment as a function of αi were fitted with a power-law distribution, from which the minimum sticking efficiency, defined as the sticking efficiency of the 0.001% fraction of the total input mass retained that results in a 5-log removal, was extrapolated. Low values of αL on the order of 10⁻⁴ and 10⁻³ were obtained for UCFL-94 and UCFL-131, respectively, while αi values ranged between 10⁻⁶ and 10⁻³ for UCFL-94 and between 10⁻⁵ and 10⁻⁴ for UCFL-131. In addition, both αL and αi decreased with increasing transport distance, and high coefficients of determination (0.99) were obtained for the power-law distributions of αi for the two strains. Extrapolated minimum sticking efficiencies were 10⁻⁷ and 10⁻⁸ for UCFL-94 and UCFL-131, respectively. Fractions of cells exiting the column were 0.19 and 0.87 for UCFL-94 and UCFL-131, respectively. We concluded that environmentally realistic sticking efficiency values on the order of 10⁻⁴ and 10⁻³, and much lower sticking efficiencies on the order of 10⁻⁵, are measurable in the laboratory. Also, power-law distributions of sticking efficiencies commonly observed over limited intra-column distances (< 2 m) are applicable at large transport distances (> 6 m) in columns packed with quartz grains. 
High fractions of bacteria populations may possess the so-called minimum sticking efficiency, thus expressing their ability to be transported over distances longer than what might be predicted using measured sticking efficiencies from experiments with both short (< 1 m) and long columns (> 25 m). Also, variable values of sticking efficiencies within and among the strains show heterogeneities, possibly due to variations in cell surface characteristics of the strains. The low sticking efficiency values measured underline the importance of the long columns used in the experiments, and the lower values of the extrapolated minimum sticking efficiencies make the method a valuable tool for delineating protection areas in real-world scenarios.
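
    The power-law extrapolation described above can be reproduced schematically: fit Fi = C * αi^b by linear regression in log-log space, then invert the fit at the target retained fraction. The sketch is generic; in the study the fitting constants come from the measured breakthrough data, and the names below are illustrative.

```python
import numpy as np

def fit_power_law(alpha, frac):
    # fit frac = C * alpha^b by linear regression in log-log space
    b, logC = np.polyfit(np.log10(alpha), np.log10(frac), 1)
    return 10 ** logC, b

def extrapolate_alpha(C, b, frac_target):
    # invert frac = C * alpha^b for the sticking efficiency at frac_target
    return (frac_target / C) ** (1.0 / b)
```

    With synthetic data generated from a known power law, the regression recovers both constants and the inversion returns the original sticking efficiency.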

  20. Coronary arteries segmentation based on the 3D discrete wavelet transform and 3D neutrosophic transform.

    PubMed

    Chen, Shuo-Tsung; Wang, Tzung-Dau; Lee, Wen-Jeng; Huang, Tsai-Wei; Hung, Pei-Kai; Wei, Cheng-Yu; Chen, Chung-Ming; Kung, Woon-Man

    2015-01-01

    Most applications in the field of medical image processing require precise estimation. To improve the accuracy of segmentation, this study aimed to propose a novel segmentation method for coronary arteries to allow for the automatic and accurate detection of coronary pathologies. The proposed segmentation method included 2 parts. First, 3D region growing was applied to give the initial segmentation of coronary arteries. Next, the location of vessel information, the HHH subband coefficients of the 3D DWT, was detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, 3D DWT integrated with the 3D neutrosophic transformation could accurately detect the coronary arteries. Each subbranch of the segmented coronary arteries was segmented correctly by the proposed method. The obtained results are compared with ground-truth values obtained from the commercial software from GE Healthcare and with the level-set method proposed by Yang et al., 2007. Results indicate that the proposed method is more efficient. Based on the initial segmentation of coronary arteries obtained from 3D region growing, one-level 3D DWT and 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.
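
    The HHH subband is the volume high-pass filtered along all three axes at one decomposition level. A minimal separable version using the Haar wavelet (the simplest choice; the abstract does not state which wavelet family was used) looks like this:

```python
import numpy as np

def haar_step(v, axis):
    # single-level Haar split along one axis:
    # low = pairwise averages, high = pairwise differences, half resolution
    v = np.swapaxes(v, axis, 0)
    lo = (v[0::2] + v[1::2]) / 2.0
    hi = (v[0::2] - v[1::2]) / 2.0
    return np.swapaxes(lo, axis, 0), np.swapaxes(hi, axis, 0)

def dwt3_hhh(vol):
    # HHH subband of a one-level 3D Haar DWT: keep the high-pass
    # output along each of the three axes in turn
    _, h = haar_step(vol, 0)
    _, hh = haar_step(h, 1)
    _, hhh = haar_step(hh, 2)
    return hhh
```

    A constant volume produces an all-zero HHH subband, while a volume alternating in all three directions (pure high-frequency texture, loosely analogous to vessel texture) passes through at full strength.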

  1. Bilayer segmentation of webcam videos using tree-based classifiers.

    PubMed

    Yin, Pei; Criminisi, Antonio; Winn, John; Essa, Irfan

    2011-01-01

    This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems.

  2. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation.

    PubMed

    Kamnitsas, Konstantinos; Ledig, Christian; Newcombe, Virginia F J; Simpson, Joanna P; Kane, Andrew D; Menon, David K; Rueckert, Daniel; Glocker, Ben

    2017-02-01

    We propose a dual pathway, 11-layers deep, three-dimensional Convolutional Neural Network for the challenging task of brain lesion segmentation. The devised architecture is the result of an in-depth analysis of the limitations of current networks proposed for similar applications. To overcome the computational burden of processing 3D medical scans, we have devised an efficient and effective dense training scheme which joins the processing of adjacent image patches into one pass through the network while automatically adapting to the inherent class imbalance present in the data. Further, we analyze the development of deeper, thus more discriminative 3D CNNs. In order to incorporate both local and larger contextual information, we employ a dual pathway architecture that processes the input images at multiple scales simultaneously. For post-processing of the network's soft segmentation, we use a 3D fully connected Conditional Random Field which effectively removes false positives. Our pipeline is extensively evaluated on three challenging tasks of lesion segmentation in multi-channel MRI patient data with traumatic brain injuries, brain tumours, and ischemic stroke. We improve on the state-of-the-art for all three applications, with top ranking performance on the public benchmarks BRATS 2015 and ISLES 2015. Our method is computationally efficient, which allows its adoption in a variety of research and clinical settings. The source code of our implementation is made publicly available. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  3. Prostate segmentation by sparse representation based classification

    PubMed Central

    Gao, Yaozong; Liao, Shu; Shen, Dinggang

    2012-01-01

    Purpose: The segmentation of prostate in CT images is of essential importance to external beam radiotherapy, which is one of the major treatments for prostate cancer nowadays. During the radiotherapy, the prostate is radiated by high-energy x rays from different directions. In order to maximize the dose to the cancer and minimize the dose to the surrounding healthy tissues (e.g., bladder and rectum), the prostate in the new treatment image needs to be accurately localized. Therefore, the effectiveness and efficiency of external beam radiotherapy highly depend on the accurate localization of the prostate. However, due to the low contrast of the prostate with its surrounding tissues (e.g., bladder), the unpredicted prostate motion, and the large appearance variations across different treatment days, it is challenging to segment the prostate in CT images. In this paper, the authors present a novel classification based segmentation method to address these problems. Methods: To segment the prostate, the proposed method first uses sparse representation based classification (SRC) to enhance the prostate in CT images by pixel-wise classification, in order to overcome the limitation of poor contrast of the prostate images. Then, based on the classification results, previous segmented prostates of the same patient are used as patient-specific atlases to align onto the current treatment image and the majority voting strategy is finally adopted to segment the prostate. In order to address the limitations of the traditional SRC in pixel-wise classification, especially for the purpose of segmentation, the authors extend SRC from the following four aspects: (1) A discriminant subdictionary learning method is proposed to learn a discriminant and compact representation of training samples for each class so that the discriminant power of SRC can be increased and also SRC can be applied to the large-scale pixel-wise classification. 
(2) The L1 regularized sparse coding is replaced by the elastic net in order to obtain a smooth and clear prostate boundary in the classification result. (3) Residue-based linear regression is incorporated to improve the classification performance and to extend SRC from hard classification to soft classification. (4) Iterative SRC is proposed by using context information to iteratively refine the classification results. Results: The proposed method has been comprehensively evaluated on a dataset consisting of 330 CT images from 24 patients. The effectiveness of the extended SRC has been validated by comparing it with the traditional SRC based on the proposed four extensions. The experimental results show that our extended SRC can obtain not only more accurate classification results but also smoother and clearer prostate boundary than the traditional SRC. Besides, the comparison with other five state-of-the-art prostate segmentation methods indicates that our method can achieve better performance than other methods under comparison. Conclusions: The authors have proposed a novel prostate segmentation method based on the sparse representation based classification, which can achieve considerably accurate segmentation results in CT prostate segmentation. PMID:23039673
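
    The core SRC decision rule, stripped of the four extensions, assigns a pixel's feature vector to the class whose subdictionary reconstructs it with the smallest residual. The sketch below substitutes a ridge-regularized least-squares code for the L1/elastic-net coding used by the authors, purely to keep it dependency-free; all names are illustrative.

```python
import numpy as np

def src_classify(x, dictionaries, lam=1e-3):
    # x: (n_features,) sample; dictionaries: per-class (n_features, n_atoms)
    # assign x to the class whose subdictionary reconstructs it best
    residuals = []
    for D in dictionaries:
        # ridge-regularized code (stand-in for sparse/elastic-net coding)
        G = D.T @ D + lam * np.eye(D.shape[1])
        a = np.linalg.solve(G, D.T @ x)
        residuals.append(np.linalg.norm(x - D @ a))
    return int(np.argmin(residuals)), residuals
```

    With subdictionaries spanning disjoint subspaces, samples lying (mostly) in one subspace are assigned to that class; in the paper this classification runs pixel-wise to enhance the low-contrast prostate before atlas alignment.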

  4. Prostate segmentation by sparse representation based classification.

    PubMed

    Gao, Yaozong; Liao, Shu; Shen, Dinggang

    2012-10-01

    The segmentation of prostate in CT images is of essential importance to external beam radiotherapy, which is one of the major treatments for prostate cancer nowadays. During the radiotherapy, the prostate is radiated by high-energy x rays from different directions. In order to maximize the dose to the cancer and minimize the dose to the surrounding healthy tissues (e.g., bladder and rectum), the prostate in the new treatment image needs to be accurately localized. Therefore, the effectiveness and efficiency of external beam radiotherapy highly depend on the accurate localization of the prostate. However, due to the low contrast of the prostate with its surrounding tissues (e.g., bladder), the unpredicted prostate motion, and the large appearance variations across different treatment days, it is challenging to segment the prostate in CT images. In this paper, the authors present a novel classification based segmentation method to address these problems. To segment the prostate, the proposed method first uses sparse representation based classification (SRC) to enhance the prostate in CT images by pixel-wise classification, in order to overcome the limitation of poor contrast of the prostate images. Then, based on the classification results, previous segmented prostates of the same patient are used as patient-specific atlases to align onto the current treatment image and the majority voting strategy is finally adopted to segment the prostate. In order to address the limitations of the traditional SRC in pixel-wise classification, especially for the purpose of segmentation, the authors extend SRC from the following four aspects: (1) A discriminant subdictionary learning method is proposed to learn a discriminant and compact representation of training samples for each class so that the discriminant power of SRC can be increased and also SRC can be applied to the large-scale pixel-wise classification. 
(2) The L1-regularized sparse coding is replaced by the elastic net in order to obtain a smooth and clear prostate boundary in the classification result. (3) Residue-based linear regression is incorporated to improve the classification performance and to extend SRC from hard to soft classification. (4) Iterative SRC is proposed, using context information to iteratively refine the classification results. The proposed method has been comprehensively evaluated on a dataset of 330 CT images from 24 patients. The effectiveness of the extended SRC has been validated by comparing it with traditional SRC on the four proposed extensions. The experimental results show that the extended SRC obtains not only more accurate classification results but also smoother and clearer prostate boundaries than traditional SRC. In addition, comparison with five other state-of-the-art prostate segmentation methods indicates that the proposed method achieves better performance than the methods under comparison. The authors have proposed a novel prostate segmentation method based on sparse representation based classification, which achieves considerably accurate segmentation results in CT prostate segmentation.
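    The elastic-net coding and minimum-residual classification at the core of SRC can be sketched in a few lines. The following is a generic illustration with a hand-rolled coordinate-descent solver and toy dictionaries, not the authors' learned subdictionaries or their iterative refinement:

```python
import numpy as np

def elastic_net_code(D, y, lam1=0.1, lam2=0.1, n_iter=200):
    """Sparse-code y over dictionary D (atoms as columns) by coordinate
    descent on 0.5*||y - Dx||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2."""
    n_atoms = D.shape[1]
    x = np.zeros(n_atoms)
    col_sq = (D ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(n_atoms):
            # residual with atom j's contribution removed
            r = y - D @ x + D[:, j] * x[j]
            rho = D[:, j] @ r
            # soft-thresholding; the ridge term appears in the denominator
            x[j] = np.sign(rho) * max(abs(rho) - lam1, 0.0) / (col_sq[j] + lam2)
    return x

def src_classify(D_per_class, y, **kw):
    """Assign y to the class whose sub-dictionary reconstructs it best."""
    residuals = []
    for D in D_per_class:
        x = elastic_net_code(D, y, **kw)
        residuals.append(np.linalg.norm(y - D @ x))
    return int(np.argmin(residuals))
```

    In pixel-wise use, `y` would be a feature vector extracted around one pixel and each sub-dictionary would hold training patches of one tissue class.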

  5. A difference-matrix metaheuristic for intensity map segmentation in step-and-shoot IMRT delivery.

    PubMed

    Gunawardena, Athula D A; D'Souza, Warren D; Goadrich, Laura D; Meyer, Robert R; Sorensen, Kelly J; Naqvi, Shahid A; Shi, Leyuan

    2006-05-21

At an intermediate stage of radiation treatment planning for IMRT, most commercial treatment planning systems generate intensity maps that describe the grid of beamlet intensities for each beam angle. Intensity map segmentation of the matrix of individual beamlet intensities into a set of MLC apertures and corresponding intensities is then required in order to produce an actual radiation delivery plan for clinical use. Mathematically, this is a very difficult combinatorial optimization problem, especially when mechanical limitations of the MLC lead to many constraints on aperture shape, and setup times for apertures make the number of apertures an important factor in overall treatment time. We have developed, implemented and tested on clinical cases a metaheuristic (that is, a method that provides a framework to guide the repeated application of another heuristic) that efficiently generates very high-quality (low aperture number) segmentations. Our computational results demonstrate that the number of beam apertures and monitor units in the treatment plans resulting from our approach is significantly smaller than the corresponding values for treatment plans generated by the heuristics embedded in a widely used commercial system. We also contrast the excellent results of our fast and robust metaheuristic with results from an 'exact' method, branch-and-cut, which attempts to construct optimal solutions but, within clinically acceptable time limits, generally fails to produce good solutions, especially for intensity maps with more than five intensity levels. Finally, we show that in no instance is there a clinically significant change of quality associated with our more efficient plans.
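    To make the segmentation task concrete: the greedy "sweep" below peels apertures (open leaf intervals with weights) off a single intensity-map row until it is exhausted. It is a simple baseline that ignores MLC mechanical constraints, not the authors' metaheuristic:

```python
def segment_row(row):
    """Greedily decompose one intensity-map row into apertures.
    Each aperture is (start, end_exclusive, weight): repeatedly find
    maximal runs of positive beamlets, open an aperture over each run
    with weight equal to the run's minimum, and subtract it."""
    row = list(row)
    apertures = []
    while any(v > 0 for v in row):
        i, n = 0, len(row)
        while i < n:
            if row[i] > 0:
                j = i
                while j < n and row[j] > 0:
                    j += 1
                w = min(row[i:j])
                apertures.append((i, j, w))
                for k in range(i, j):
                    row[k] -= w
                i = j
            else:
                i += 1
    return apertures
```

    The aperture count and total weight (a proxy for monitor units) are exactly the quantities a better segmentation algorithm tries to reduce.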

  6. Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks With Jaccard Distance.

    PubMed

    Yuan, Yading; Chao, Ming; Lo, Yeh-Chi

    2017-09-01

Automatic skin lesion segmentation in dermoscopic images is a challenging task due to the low contrast between the lesion and the surrounding skin, the irregular and fuzzy lesion borders, the existence of various artifacts, and varying image acquisition conditions. In this paper, we present a fully automatic method for skin lesion segmentation that leverages a 19-layer deep convolutional neural network trained end-to-end, without relying on prior knowledge of the data. We propose a set of strategies to ensure effective and efficient learning with limited training data. Furthermore, we design a novel loss function based on the Jaccard distance to eliminate the need for sample re-weighting, a typical procedure when using cross entropy as the loss function for image segmentation due to the strong imbalance between the number of foreground and background pixels. We evaluated the effectiveness, efficiency, and generalization capability of the proposed framework on two publicly available databases: one from the ISBI 2016 skin lesion analysis towards melanoma detection challenge, and the other the PH2 database. Experimental results showed that the proposed method outperformed other state-of-the-art algorithms on these two databases. Our method is general and needs only minimal pre- and post-processing, which allows its adoption in a variety of medical image segmentation tasks.

  7. Segmentation of Pollen Tube Growth Videos Using Dynamic Bi-Modal Fusion and Seam Carving.

    PubMed

    Tambo, Asongu L; Bhanu, Bir

    2016-05-01

The growth of pollen tubes is of significant interest in plant cell biology, as it provides an understanding of internal cell dynamics that affect observable structural characteristics such as cell diameter, length, and growth rate. However, these parameters can only be measured in experimental videos if the complete shape of the cell is known. The challenge is to accurately obtain the cell boundary in noisy video images. Usually, these measurements are performed by a scientist who manually draws regions of interest on the images displayed on a computer screen. In this paper, a new automated technique is presented for boundary detection by fusing fluorescence and brightfield images, and a new efficient method of obtaining the final cell boundary through the process of seam carving is proposed. This approach takes advantage of the nature of the fusion process and of the shape of the pollen tube to efficiently search for the optimal cell boundary. In video segmentation, the first two frames are used to initialize the segmentation process by creating a search space based on a parametric model of the cell shape. Updates to the search space are performed based on the location of past segmentations and a prediction of the next segmentation. Experimental results show comparable accuracy to a previous method, but a significant decrease in processing time. This has the potential for real-time applications in pollen tube microscopy.
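    Seam carving reduces to a dynamic program over an energy map: the optimal boundary is the minimum-cost connected path. A generic sketch (not tied to the paper's bi-modal fusion, which would supply the energy values):

```python
import numpy as np

def min_vertical_seam(energy):
    """Find the minimum-energy top-to-bottom seam (one column index per
    row) by dynamic programming, allowing moves to the three neighbours
    in the row below."""
    e = np.asarray(energy, dtype=float)
    h, w = e.shape
    cost = e.copy()
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    # backtrack from the cheapest cell in the bottom row
    seam = [int(np.argmin(cost[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam.append(lo + int(np.argmin(cost[i, lo:hi])))
    return seam[::-1]
```

    Restricting the columns searched per row to a band around the previous frame's seam is what makes the video version fast, as the abstract describes.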

  8. Kidney outer medulla mitochondria are more efficient compared to cortex mitochondria as a strategy to sustain ATP production in a suboptimal environment.

    PubMed

    Schiffer, Tomas A; Gustafsson, Håkan; Palm, Fredrik

    2018-05-30

The kidneys receive approximately 25% of cardiac output, a prerequisite for maintaining a sufficient glomerular filtration rate. However, both intrarenal regional blood flow and tissue oxygen levels are heterogeneous, with decreasing levels towards the inner part of the medulla. These differences, in combination with the heterogeneous metabolic activity of the nephron segments located in the different parts of the kidney, may constitute a functional problem when challenged. The proximal tubule and the medullary thick ascending limb of Henle are considered to have the highest metabolic rates, related to the high mitochondrial content needed to sustain sufficient ATP production from oxidative phosphorylation to support the high electrolyte transport activity in these nephron segments. Interestingly, the cells of the kidney medulla function on the verge of hypoxia, and their mitochondria may have adapted to the surrounding environment. However, little is known about intrarenal differences in mitochondrial function. Functional differences between mitochondria isolated from the kidney cortex and medulla of healthy normoglycemic rats were therefore estimated using high-resolution respirometry. The results demonstrate that medullary mitochondria have a higher degree of coupling, are more efficient, and have a higher oxygen affinity, which would make them more suitable for functioning in an environment with limited oxygen supply. Furthermore, these results support the hypothesis that the mitochondria of medullary cells have adapted to the normally hypoxic in vivo situation as a strategy for sustaining ATP production in a suboptimal environment.

  9. Defect Detection and Segmentation Framework for Remote Field Eddy Current Sensor Data

    PubMed Central

    2017-01-01

    Remote-Field Eddy-Current (RFEC) technology is often used as a Non-Destructive Evaluation (NDE) method to prevent water pipe failures. By analyzing the RFEC data, it is possible to quantify the corrosion present in pipes. Quantifying the corrosion involves detecting defects and extracting their depth and shape. For large sections of pipelines, this can be extremely time-consuming if performed manually. Automated approaches are therefore well motivated. In this article, we propose an automated framework to locate and segment defects in individual pipe segments, starting from raw RFEC measurements taken over large pipelines. The framework relies on a novel feature to robustly detect these defects and a segmentation algorithm applied to the deconvolved RFEC signal. The framework is evaluated using both simulated and real datasets, demonstrating its ability to efficiently segment the shape of corrosion defects. PMID:28984823

  10. Segmentation and pulse shape discrimination techniques for rejecting background in germanium detectors

    NASA Technical Reports Server (NTRS)

    Roth, J.; Primbsch, J. H.; Lin, R. P.

    1984-01-01

The possibility of rejecting the internal beta-decay background in coaxial germanium detectors by distinguishing between the multi-site energy losses characteristic of photons and the single-site energy losses of electrons in the range 0.2-2 MeV is examined. The photon transport was modeled with a Monte Carlo routine. Background rejection by both multiple segmentation and pulse shape discrimination (PSD) techniques is investigated. The efficiency of a coaxial detector with six 1 cm-thick segments operating in coincidence mode alone is compared to that of a two-segment (1 cm and 5 cm) detector employing both front-rear coincidence and PSD in the rear segment to isolate photon events. Both techniques can provide at least 95 percent rejection of single-site events while accepting at least 80 percent of the multi-site events above 500 keV.
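    The segmentation cut itself is simple to illustrate in a toy form: an event is accepted only if it deposits energy in at least two segments. The hypothetical helper below just applies that cut to lists of hit segment indices (the real analysis would get these from the Monte Carlo transport):

```python
def rejection_fraction(events):
    """Fraction of events rejected by the segment-coincidence cut.
    Each event is a list of the segment indices in which it deposited
    energy; single-site events (one distinct segment) are rejected."""
    rejected = sum(1 for hit_segments in events if len(set(hit_segments)) < 2)
    return rejected / len(events)
```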

  11. A Stochastic-Variational Model for Soft Mumford-Shah Segmentation

    PubMed Central

    2006-01-01

    In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike the classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. Soft segmentation could lead to hard segmentation, and hence is more general. The modeling procedure, mathematical analysis on the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059
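    A minimal numerical sketch of the "soft" idea: alternate between soft pixel memberships and phase means for a two-phase image. The edge-length regularizer of the full Mumford-Shah functional (and the stochastic machinery of the paper) is omitted, so this is only the data-fidelity core:

```python
import numpy as np

def soft_two_phase(image, n_iter=20, beta=5.0):
    """Soft two-phase segmentation: each pixel gets a probability p1 of
    belonging to the bright phase, via a logistic comparison of squared
    distances to the two phase means, and the means are re-estimated
    from the memberships."""
    u = np.asarray(image, float)
    c0, c1 = u.min(), u.max()
    for _ in range(n_iter):
        d0, d1 = (u - c0) ** 2, (u - c1) ** 2
        p1 = 1.0 / (1.0 + np.exp(-beta * (d0 - d1)))  # soft membership
        p0 = 1.0 - p1
        c0 = (p0 * u).sum() / max(p0.sum(), 1e-12)
        c1 = (p1 * u).sum() / max(p1.sum(), 1e-12)
    return p1, (c0, c1)
```

    Thresholding `p1` at 0.5 recovers a hard segmentation, which is the sense in which the soft model is more general.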

  12. Dynamic segment shared protection for multicast traffic in meshed wavelength-division-multiplexing optical networks

    NASA Astrophysics Data System (ADS)

    Liao, Luhua; Li, Lemin; Wang, Sheng

    2006-12-01

    We investigate the protection approach for dynamic multicast traffic under shared risk link group (SRLG) constraints in meshed wavelength-division-multiplexing optical networks. We present a shared protection algorithm called dynamic segment shared protection for multicast traffic (DSSPM), which can dynamically adjust the link cost according to the current network state and can establish a primary light-tree as well as corresponding SRLG-disjoint backup segments for a dependable multicast connection. A backup segment can efficiently share the wavelength capacity of its working tree and the common resources of other backup segments based on SRLG-disjoint constraints. The simulation results show that DSSPM not only can protect the multicast sessions against a single-SRLG breakdown, but can make better use of the wavelength resources and also lower the network blocking probability.

  13. Defect Detection and Segmentation Framework for Remote Field Eddy Current Sensor Data.

    PubMed

    Falque, Raphael; Vidal-Calleja, Teresa; Miro, Jaime Valls

    2017-10-06

    Remote-Field Eddy-Current (RFEC) technology is often used as a Non-Destructive Evaluation (NDE) method to prevent water pipe failures. By analyzing the RFEC data, it is possible to quantify the corrosion present in pipes. Quantifying the corrosion involves detecting defects and extracting their depth and shape. For large sections of pipelines, this can be extremely time-consuming if performed manually. Automated approaches are therefore well motivated. In this article, we propose an automated framework to locate and segment defects in individual pipe segments, starting from raw RFEC measurements taken over large pipelines. The framework relies on a novel feature to robustly detect these defects and a segmentation algorithm applied to the deconvolved RFEC signal. The framework is evaluated using both simulated and real datasets, demonstrating its ability to efficiently segment the shape of corrosion defects.

  14. Detecting the changes in rural communities in Taiwan by applying multiphase segmentation on FORMOSA-2 satellite imagery

    NASA Astrophysics Data System (ADS)

    Huang, Yishuo

    2015-09-01

Agricultural activities mainly occur in rural areas; recently, ecological conservation and biological diversity have been emphasized in rural communities to promote sustainable development, especially in Taiwan. Since 2005, many rural communities in Taiwan have compiled their own development strategies in order to create unique local characteristics that attract people to visit and stay in these communities. By implementing these strategies, young people can stay in their own rural communities and the communities are rejuvenated. However, some rural communities introduce artificial construction that significantly degrades the ecological and biological environment. The strategies need to be efficiently monitored because up to 67 rural communities have proposed rejuvenation projects, and in 2015 up to 440 rural communities were estimated to be involved in rural community rejuvenation. How to monitor the changes occurring in these communities, so that ecological conservation and ecological diversity can be maintained, is an important issue in rural community management, and remote sensing provides an efficient and rapid way to address it. Segmentation plays a fundamental role in human perception; in this respect, segmentation can be viewed as the process of transforming the collection of pixels of an image into a group of regions or objects with meaning. This paper proposes an algorithm based on the multiphase approach to segment the normalized difference vegetation index (NDVI) of the rural communities into several sub-regions such that the NDVI distribution in each sub-region is homogeneous. Regions whose NDVI values are close are merged into the same class. In doing so, a complex NDVI map can be simplified into two groups: high and low NDVI values.
The class with low NDVI values corresponds to regions containing roads, buildings, and other manmade constructions, and the class with high NDVI values indicates regions containing vegetation in good health. In order to verify the processed results, the regional boundaries were extracted and overlaid on the given images to check whether the extracted boundaries fell on buildings, roads, or other artificial constructions. In addition to the proposed approach, another approach called statistical region merging was employed, grouping sets of pixels with homogeneous properties such that these sets are iteratively grown by combining smaller regions or pixels, which also yields a segmented NDVI map. By comparing the areas of the merged classes in different years, the changes occurring in the rural communities of Taiwan can be detected. Satellite imagery of FORMOSA-2 with 2-m ground resolution is employed to evaluate the performance of the proposed approach. The satellite imagery of two rural communities (the Jhumen and Taomi communities) is chosen to evaluate environmental changes between 2005 and 2010. The 2005-2010 change maps show that a patch of land with a high density of green increased by 19.62 ha in the Jhumen community, whereas a similar patch of land significantly decreased by 236.59 ha in the Taomi community. Furthermore, the change maps created by statistical region merging produce results similar to those of multiphase segmentation.
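    NDVI itself is the one-line formula (NIR - Red)/(NIR + Red). A sketch with a crude fixed threshold standing in for the paper's multiphase segmentation (the 0.3 cutoff is an illustrative assumption, not a value from the paper):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index, in [-1, 1]; healthy
    vegetation reflects strongly in NIR and absorbs red."""
    nir = np.asarray(nir, float)
    red = np.asarray(red, float)
    return (nir - red) / (nir + red + eps)

def two_class_ndvi_map(nir, red, threshold=0.3):
    """Split pixels into low/high-NDVI classes; a crude stand-in for
    the multiphase segmentation used in the paper."""
    return ndvi(nir, red) >= threshold
```

    Differencing such binary maps between two acquisition years gives the hectare-level change figures quoted in the abstract.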

  15. Clustering approach for unsupervised segmentation of malarial Plasmodium vivax parasite

    NASA Astrophysics Data System (ADS)

    Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Mohamed, Zeehaida

    2017-10-01

Malaria is a global health problem, particularly in Africa and South Asia, where it causes countless deaths and morbidity cases. Efficient control and prompt treatment of this disease require early detection and accurate diagnosis due to the large number of cases reported yearly. To this end, this paper proposes an image segmentation approach via unsupervised pixel segmentation of the malaria parasite to automate the diagnosis of malaria. In this study, a modified clustering algorithm, namely enhanced k-means (EKM) clustering, is proposed for malaria image segmentation. In the proposed EKM clustering, the concept of variance and a new version of the transferring process for clustered members are used to assist the assignment of data to the proper centre during clustering, so that well-segmented malaria images can be generated. The effectiveness of the proposed EKM clustering has been analyzed qualitatively and quantitatively by comparing this algorithm with two popular image segmentation techniques, namely Otsu's thresholding and k-means clustering. The experimental results show that the proposed EKM clustering successfully segmented 100 malaria images of the P. vivax species with segmentation accuracy, sensitivity and specificity of 99.20%, 87.53% and 99.58%, respectively. Hence, the proposed EKM clustering can be considered an image segmentation tool for segmenting malaria images.
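    The baseline that EKM improves on is plain k-means on pixel values. A deterministic sketch of that baseline (the variance-guided member transfer that defines EKM is not reproduced here):

```python
import numpy as np

def kmeans_pixels(values, k=2, n_iter=50):
    """Plain k-means on scalar pixel values, with centers initialized
    evenly across the value range for determinism."""
    v = np.asarray(values, float).ravel()
    centers = np.linspace(v.min(), v.max(), k)
    for _ in range(n_iter):
        # assign each pixel to its nearest center
        labels = np.argmin(np.abs(v[:, None] - centers[None, :]), axis=1)
        # move each center to the mean of its members
        for c in range(k):
            if np.any(labels == c):
                centers[c] = v[labels == c].mean()
    return labels, centers
```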

  16. Color image segmentation to detect defects on fresh ham

    NASA Astrophysics Data System (ADS)

    Marty-Mahe, Pascale; Loisel, Philippe; Brossard, Didier

    2003-04-01

We present in this paper the color segmentation methods that were used to detect appearance defects on the three-dimensional surface of fresh ham. The use of color histograms turned out to be an efficient way to characterize healthy skin, but special care must be taken in choosing the color components because of the three-dimensional shape of the ham.

  17. Optimization of GaAs Nanowire Pin Junction Array Solar Cells by Using AlGaAs/GaAs Heterojunctions

    NASA Astrophysics Data System (ADS)

    Wu, Yao; Yan, Xin; Wei, Wei; Zhang, Jinnan; Zhang, Xia; Ren, Xiaomin

    2018-04-01

We optimized the performance of GaAs nanowire pin junction array solar cells by introducing AlGaAs/GaAs heterojunctions. AlGaAs is used for the p-type top segment in axial junctions and the p-type outer shell in radial junctions. The AlGaAs not only serves as a passivation layer for the GaAs nanowires but also confines the optical generation to the active regions, reducing the recombination loss in heavily doped regions and the minority carrier recombination at the top contact. The results show that the conversion efficiency of GaAs nanowires can be greatly enhanced by using AlGaAs for the p segment instead of GaAs; a maximum efficiency enhancement of 8.42% was achieved in this study. For axial nanowires, using AlGaAs for the top p segment allows a relatively long top segment to be employed without degrading device performance, which could facilitate the fabrication and contacting of nanowire array solar cells. For radial nanowires, AlGaAs/GaAs nanowires show better tolerance to p-shell thickness and surface condition.

  18. Stochastic modeling of soundtrack for efficient segmentation and indexing of video

    NASA Astrophysics Data System (ADS)

    Naphade, Milind R.; Huang, Thomas S.

    1999-12-01

Tools for efficient and intelligent management of digital content are essential for digital video data management. An extremely challenging research area in this context is that of multimedia analysis and understanding, and the capabilities of audio analysis in particular for video data management are yet to be fully exploited. We present a novel scheme for indexing and segmentation of video by analyzing the audio track, and apply this analysis to the segmentation and indexing of movies. We build models for some interesting events in the motion picture soundtrack, including music, human speech and silence. We propose the use of hidden Markov models to model the dynamics of the soundtrack and detect audio events, and use these models to segment and index the soundtrack. A practical problem in motion picture soundtracks is that the audio is of a composite nature, corresponding to the mixing of sounds from different sources; speech in the foreground with music in the background is a common example. The coexistence of multiple individual audio sources forces us to model such composite events explicitly. Experiments reveal that explicit modeling gives better results than modeling individual audio events separately.
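    Decoding the most likely sequence of audio events from a trained HMM is done with the Viterbi algorithm. A generic log-domain sketch; the two-state music/speech parameters in the test are made up for illustration, not the paper's trained models:

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most likely state path for an observation sequence.
    log_pi: (S,) initial log-probs; log_A: (S,S) transition log-probs;
    log_B: (S,O) emission log-probs; obs: sequence of symbol indices."""
    S, T = len(log_pi), len(obs)
    delta = np.full((T, S), -np.inf)   # best log-prob ending in each state
    back = np.zeros((T, S), dtype=int)  # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        for s in range(S):
            scores = delta[t - 1] + log_A[:, s]
            back[t, s] = int(np.argmax(scores))
            delta[t, s] = scores[back[t, s]] + log_B[s, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

    The decoded state path directly yields the segment boundaries: each run of identical states is one audio segment with that event label.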

  19. Optimization of GaAs Nanowire Pin Junction Array Solar Cells by Using AlGaAs/GaAs Heterojunctions.

    PubMed

    Wu, Yao; Yan, Xin; Wei, Wei; Zhang, Jinnan; Zhang, Xia; Ren, Xiaomin

    2018-04-25

    We optimized the performance of GaAs nanowire pin junction array solar cells by introducing AlGaAs/GaAs heterejunctions. AlGaAs is used for the p type top segment for axial junctions and the p type outer shell for radial junctions. The AlGaAs not only serves as passivation layers for GaAs nanowires but also confines the optical generation in the active regions, reducing the recombination loss in heavily doped regions and the minority carrier recombination at the top contact. The results show that the conversion efficiency of GaAs nanowires can be greatly enhanced by using AlGaAs for the p segment instead of GaAs. A maximum efficiency enhancement of 8.42% has been achieved in this study. And for axial nanowire, by using AlGaAs for the top p segment, a relatively long top segment can be employed without degenerating device performance, which could facilitate the fabrication and contacting of nanowire array solar cells. While for radial nanowires, AlGaAs/GaAs nanowires show better tolerance to p-shell thickness and surface condition.

  20. Hierarchical extraction of urban objects from mobile laser scanning data

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen; Zhao, Gang; Dai, Wenxia

    2015-01-01

Point clouds collected in urban scenes contain a huge number of points (e.g., billions), numerous objects with significant size variability, complex and incomplete structures, and variable point densities, posing great challenges for the automated extraction of urban objects in the fields of photogrammetry, computer vision, and robotics. This paper addresses these challenges by proposing an automated method to extract urban objects robustly and efficiently. The proposed method generates multi-scale supervoxels from 3D point clouds using the point attributes (e.g., colors, intensities) and spatial distances between points, and then segments the supervoxels rather than individual points by combining graph-based segmentation with multiple cues (e.g., principal direction, colors) of the supervoxels. The proposed method defines a set of rules for merging segments into meaningful units according to the types of urban objects and forms semantic knowledge of urban objects for their classification. Finally, the proposed method extracts and classifies urban objects in a hierarchical order ranked by the saliency of the segments. Experiments show that the proposed method is efficient and robust for extracting buildings, streetlamps, trees, telegraph poles, traffic signs, cars, and enclosures from mobile laser scanning (MLS) point clouds, with an overall accuracy of 92.3%.
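    Rule-based merging of adjacent segments is naturally expressed with union-find. A simplified sketch in which the merging rule is a hypothetical mean-color threshold standing in for the paper's multi-cue rules:

```python
class DisjointSet:
    """Union-find with path compression, used to merge segment ids."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]
            a = self.parent[a]
        return a

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def merge_segments(mean_colors, adjacency, max_diff=10.0):
    """Merge adjacent segments whose mean colors differ by at most
    max_diff; returns one representative label per segment."""
    ds = DisjointSet(len(mean_colors))
    for a, b in adjacency:
        if abs(mean_colors[a] - mean_colors[b]) <= max_diff:
            ds.union(a, b)
    return [ds.find(i) for i in range(len(mean_colors))]
```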

  1. Review of smoothing methods for enhancement of noisy data from heavy-duty LHD mining machines

    NASA Astrophysics Data System (ADS)

    Wodecki, Jacek; Michalak, Anna; Stefaniak, Paweł

    2018-01-01

Appropriate analysis of data measured on heavy-duty mining machines is essential for process monitoring, management and optimization. Some particular classes of machines, for example LHD (load-haul-dump) machines, hauling trucks, and drilling/bolting machines, are characterized by cyclicity of operations. In those cases, identification of cycles and their segments, or in other words data segmentation, is key to evaluating their performance, which may be very useful from the management point of view, for example by leading to optimization of the process. However, in many cases such raw signals are contaminated with various artifacts and are in general expected to be very noisy, which makes the segmentation task very difficult or even impossible. To deal with that problem, there is a need for efficient smoothing methods that retain informative trends in the signals while suppressing noise and other undesired non-deterministic components. In this paper, the authors present a review of various approaches to diagnostic data smoothing. The described methods can be used in a fast and efficient way, effectively cleaning the signals while preserving informative deterministic behaviour, which is crucial for precise segmentation and other approaches to industrial data analysis.
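    Two of the simplest smoothers in such a review are the centered moving average and the more spike-robust moving median. Generic sketches, not any specific method from the review:

```python
import numpy as np

def moving_average(x, window=5):
    """Centered moving average; edges use a shrinking window."""
    x = np.asarray(x, float)
    half = window // 2
    out = np.empty_like(x)
    for i in range(len(x)):
        lo, hi = max(i - half, 0), min(i + half + 1, len(x))
        out[i] = x[lo:hi].mean()
    return out

def moving_median(x, window=5):
    """Centered moving median; robust to isolated spike artifacts,
    which a moving average only smears out."""
    x = np.asarray(x, float)
    half = window // 2
    out = np.empty_like(x)
    for i in range(len(x)):
        lo, hi = max(i - half, 0), min(i + half + 1, len(x))
        out[i] = np.median(x[lo:hi])
    return out
```

    The difference matters for cycle segmentation: a single sensor glitch survives averaging as a bump but is removed entirely by the median.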

  2. Multi-scale image segmentation and numerical modeling in carbonate rocks

    NASA Astrophysics Data System (ADS)

    Alves, G. C.; Vanorio, T.

    2016-12-01

Numerical methods based on computational simulations can be an important tool for estimating the physical properties of rocks, complementing experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield results that conflict with those of the physical laboratory. This problem is exacerbated in carbonate rocks due to their heterogeneity at all scales. We developed a multi-scale approach performing segmentation of the rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. The samples were then imaged by Scanning Electron Microscope (SEM) as well as optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave equation. Our results show that a multi-scale approach can efficiently account for micro-porosities in tight micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by a larger grain/micrite ratio, results show that SEM-scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intragranular porosity. This suggests that, for high-porosity carbonate samples, optical microscope images would be better suited for numerical simulations.

  3. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    PubMed Central

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images, and the accuracy of remote sensing subject information depends on this extraction. On the basis of WorldView-2 high-resolution data and an optimal segmentation parameters method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed using control variables and a combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert judgment by reproducible quantitative measurements, and the results of this procedure may be incorporated into a classification scheme. PMID:27362762
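    An area-weighted within-segment variance is one common homogeneity score behind this kind of scale selection: candidate segmentations at several scales are scored and the scale with the best trade-off is chosen. The sketch below is a generic version of that idea, not the authors' exact improved weighted mean-variance method:

```python
import numpy as np

def weighted_mean_variance(values, labels):
    """Area-weighted mean of within-segment standard deviations;
    lower values indicate more homogeneous segments."""
    values = np.asarray(values, float).ravel()
    labels = np.asarray(labels).ravel()
    total = 0.0
    for seg in np.unique(labels):
        member = values[labels == seg]
        total += member.size * member.std()  # weight by segment area
    return total / values.size
```

    Comparing this score across candidate scales (together with a heterogeneity term, so the trivial one-pixel-per-segment solution does not win) is how an "optimal" segmentation scale can be picked quantitatively.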

  4. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    PubMed

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images, and the accuracy of remote sensing subject information depends on this extraction. On the basis of WorldView-2 high-resolution data and an optimal segmentation parameters method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed using control variables and a combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert judgment by reproducible quantitative measurements, and the results of this procedure may be incorporated into a classification scheme.

  5. Peculiarities of the Third Natural Frequency Vibrations of a Cantilever for the Improvement of Energy Harvesting

    PubMed Central

    Ostasevicius, Vytautas; Janusas, Giedrius; Milasauskaite, Ieva; Zilys, Mindaugas; Kizauskiene, Laura

    2015-01-01

    This paper focuses on several aspects extending the dynamical efficiency of a cantilever beam vibrating in the third mode. A few ways of producing this mode stimulation, namely vibro-impact or forced excitation, as well as its application for energy harvesting devices are proposed. The paper presents numerical and experimental analyses of novel structural dynamics effects along with an optimal configuration of the cantilever beam. The peculiarities of a cantilever beam vibrating in the third mode are related to the significant increase of the level of deformations capable of extracting significant additional amounts of energy compared to the conventional harvester vibrating in the first mode. Two types of piezoelectric vibrating energy harvester (PVEH) prototype are analysed in this paper: the first without electrode segmentation, while the second has its electrodes segmented at the strain nodes of the third vibration mode to achieve effective operation at the third resonant frequency. The results of this research revealed that the voltage generated by any segment of the segmented PVEH prototype excited at the third resonant frequency demonstrated a 3.4–4.8-fold increase in comparison with the non-segmented prototype. Simultaneously, the efficiency of the energy harvester prototype also increased at lower resonant frequencies from 16% to 90%. The insights presented in the paper may serve for the development and fabrication of advanced piezoelectric energy harvesters which would be able to generate a considerably increased amount of electrical energy independently of the frequency of kinematical excitation. PMID:26029948
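The appeal of the third mode can be made concrete with the textbook Euler-Bernoulli formula for a clamped-free beam; the mode constants below are standard, while the beam material and dimensions are illustrative assumptions, not the prototype's. Note that the third natural frequency is about 17.5 times the first.

```python
import math

# Standard roots of cos(x)*cosh(x) = -1 for a clamped-free (cantilever)
# beam: x_n = lambda_n * L for the first three bending modes.
LAMBDA_L = [1.8751, 4.6941, 7.8548]

def cantilever_frequencies(E, I, rho, A, L):
    """First three bending natural frequencies (Hz) of a uniform
    cantilever. E: Young's modulus (Pa), I: area moment (m^4),
    rho: density (kg/m^3), A: cross-section area (m^2), L: length (m)."""
    return [(x**2 / (2 * math.pi * L**2)) * math.sqrt(E * I / (rho * A))
            for x in LAMBDA_L]

# Example: a 100 mm x 10 mm x 1 mm steel strip (assumed, not the paper's)
E, rho, L = 210e9, 7850.0, 0.1
b, h = 0.01, 0.001
f1, f2, f3 = cantilever_frequencies(E, b * h**3 / 12, rho, b * h, L)
```

Because the mode frequencies scale as the squares of the mode constants, f3/f1 = (7.8548/1.8751)^2 ≈ 17.5 regardless of the beam's material or size.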

  6. SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, P; Mao, T; Gong, S

    2016-06-15

    Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, due to the sparsifiable feature of most CT images under a gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) using image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in a CT image contains a relatively uniform distribution of CT numbers. This general knowledge is incorporated into the proposed reconstruction using an image segmentation technique to generate the piecewise constant template from the first-pass low-quality CT image reconstructed using an analytical algorithm. The template image is applied as an initial value into the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by overall 40%. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and a faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001) and the National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).
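The core idea, replacing each tissue class of a first-pass analytical reconstruction with its mean value to form a piecewise-constant initializer, can be sketched as follows. Plain thresholding is an assumed, simple stand-in for the paper's segmentation step:

```python
import numpy as np

def piecewise_constant_template(fbp_image, thresholds):
    """Build a piecewise-constant initializer from a first-pass
    analytical (e.g. FDK/FBP) reconstruction, following the idea that
    each tissue class has a roughly uniform CT number. Pixels are binned
    by the given thresholds and each bin is replaced by its mean value.
    (Illustrative; the paper's segmentation may be more sophisticated.)"""
    template = np.empty_like(fbp_image, dtype=float)
    edges = [-np.inf] + list(thresholds) + [np.inf]
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (fbp_image >= lo) & (fbp_image < hi)
        if mask.any():
            template[mask] = fbp_image[mask].mean()
    return template
```

The template would then be passed as the starting image to the TV-regularized iterative solver instead of a zero or FBP initialization.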

  7. Investigation of performance deterioration of the CF6/JT9D, high-bypass ratio turbofan engines

    NASA Technical Reports Server (NTRS)

    Ziemianski, J. A.; Mehalic, C. M.

    1980-01-01

    The aircraft energy efficiency program within NASA is developing the technology required to improve the fuel efficiency of commercial subsonic transport aircraft. One segment of this program, engine diagnostics, is directed toward determining the sources and causes of performance deterioration in the Pratt and Whitney Aircraft JT9D and General Electric CF6 high-bypass ratio turbofan engines and developing technology for minimizing the performance losses. Results of engine performance deterioration investigations based on historical data, special engine tests, and specific tests to define the influence of flight loads and component clearances on performance are presented. The results of analysis of several damage mechanisms that contribute to performance deterioration, such as blade tip rubs, airfoil surface roughness and erosion, and thermal distortion, are also included. The significance of these damage mechanisms for component and overall engine performance is discussed.

  8. A Ka-band radial relativistic backward wave oscillator with GW-class output power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Jiaxin; Zhang, Xiaoping, E-mail: zhangxiaoping@nudt.edu.cn; Dang, Fangchao

    A novel radial relativistic backward wave oscillator with a reflector is proposed and designed to generate GW-level high power microwaves at Ka-band. The segmented radial slow wave structure and the reflector are matched to enhance the interaction efficiency. We choose the TM01 volume wave mode as the working mode owing to its volume wave characteristic. The main structural parameters of the novel device are optimized by particle-in-cell simulation. High power microwaves with a power of 2 GW and a frequency of 29.4 GHz are generated with 30% efficiency when the electron beam voltage is 383 kV, the beam current is 17 kA, and the guiding magnetic field is only 0.6 T. Simultaneously, the highest electric field in the novel Ka-band device is only about 960 kV/cm, in the second slow wave structure.

  9. An automated retinal imaging method for the early diagnosis of diabetic retinopathy.

    PubMed

    Franklin, S Wilfred; Rajan, S Edward

    2013-01-01

    Diabetic retinopathy is a microvascular complication of long-term diabetes and is the major cause of eyesight loss due to changes in the blood vessels of the retina. Major vision loss due to diabetic retinopathy is highly preventable with regular screening and timely intervention at the earlier stages. Retinal blood vessel segmentation methods help to identify the successive stages of sight-threatening diseases such as diabetic retinopathy. The aim was to develop and test a novel retinal imaging method which segments the blood vessels automatically from retinal images, helping ophthalmologists in the diagnosis and follow-up of diabetic retinopathy. This method classifies each image pixel as vessel or nonvessel, which, in turn, is used for automatic recognition of the vasculature in retinal images. Retinal blood vessels were identified by means of a multilayer perceptron neural network, for which the inputs were derived from Gabor and moment invariants-based features. The back propagation algorithm, which provides an efficient technique to update the weights in a feed-forward network, is utilized in our method. Quantitative results of sensitivity, specificity and predictive values were obtained, and the measured accuracy of our segmentation algorithm was 95.3%, which is better than that presented by state-of-the-art approaches. The evaluation procedure used and the demonstrated effectiveness of our automated retinal imaging method make it a powerful tool for diagnosing diabetic retinopathy in the earlier stages.
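A minimal version of such a pixel classifier, a one-hidden-layer perceptron trained by backpropagation, might look like the sketch below; the per-pixel feature vectors stand in for the paper's Gabor and moment-invariant features, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(X, y, hidden=8, lr=1.0, epochs=3000):
    """Minimal one-hidden-layer perceptron trained by backpropagation,
    in the spirit of the paper's vessel/non-vessel pixel classifier.
    X holds per-pixel feature vectors (the paper derives these from
    Gabor responses and moment invariants; here they are generic).
    Returns a predict function mapping features to 0/1 labels."""
    n, d = X.shape
    W1 = rng.normal(0.0, 1.0, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0, (hidden, 1)); b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    t = y.reshape(-1, 1).astype(float)
    for _ in range(epochs):
        h = sig(X @ W1 + b1)           # hidden activations
        out = sig(h @ W2 + b2)         # vessel probability
        d_out = (out - t) / n          # grad of sigmoid cross-entropy
        dW2 = h.T @ d_out; db2 = d_out.sum(0)
        d_h = (d_out @ W2.T) * h * (1.0 - h)
        dW1 = X.T @ d_h; db1 = d_h.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1  # gradient descent step
        W2 -= lr * dW2; b2 -= lr * db2
    return lambda Xq: (sig(sig(Xq @ W1 + b1) @ W2 + b2) > 0.5).ravel().astype(int)
```

A real pipeline would extract a feature vector per pixel from the green channel and threshold the network output into a binary vessel map.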

  10. Wind Evaluation Breadboard electronics and software

    NASA Astrophysics Data System (ADS)

    Núñez, Miguel; Reyes, Marcos; Viera, Teodora; Zuluaga, Pablo

    2008-07-01

    WEB, the Wind Evaluation Breadboard, is an Extremely Large Telescope primary mirror simulator, developed with the aim of quantifying the ability of a segmented primary mirror to cope with wind disturbances. This instrument, supported by the European Community (Framework Programme 6, ELT Design Study), is developed by ESO, IAC, MEDIA-ALTRAN, JUPASA and FOGALE. WEB is a bench of about 20 tons and 7 meters in diameter emulating a segmented primary mirror and its cell, with 7 hexagonal segment simulators, including electromechanical support systems. In this paper we present the WEB central control electronics and the software development, which has to interface with: position actuators, auxiliary slave actuators, edge sensors, azimuth ring, elevation actuator, meteorological station and air balloon enclosure. The set of subsystems to control is a reduced version of a real telescope segmented primary mirror control system with high real-time performance, but with emphasis on development time efficiency and flexibility, because WEB is a test bench. The paper includes a detailed description of the hardware and software, paying special attention to real-time performance. The hardware is composed of three computers, and the software architecture has been divided into three intercommunicating applications, implemented using LabVIEW over Windows XP and the Pharlap ETS real-time operating system. The edge sensor and position actuator closed loop has a sampling and commanding frequency of 1 kHz.

  11. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice-by-slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. A tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), root mean square (RMS) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. 
Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm, and 1.06 ± 0.40 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the premolar were 37.95 ± 10.13 mm³, 92.45 ± 2.29%, 0.29 ± 0.06 mm, 0.33 ± 0.10 mm, and 1.28 ± 0.72 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the molar were 52.38 ± 17.27 mm³, 94.12 ± 1.38%, 0.30 ± 0.08 mm, 0.35 ± 0.17 mm, and 1.52 ± 0.75 mm, respectively. The computation time of the proposed method for segmenting CBCT images of one subject was 7.25 ± 0.73 min. Compared with the two other methods, the proposed method achieves significant improvement in terms of accuracy. Conclusions: The presented tooth segmentation method can be used to segment tooth contours from CT images accurately and efficiently.
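The volume overlap metrics have standard definitions and can be computed directly from binary masks; a minimal sketch (surface-distance metrics are omitted, since they need a surface extraction or distance transform):

```python
import numpy as np

def volume_metrics(seg, ref, voxel_volume=1.0):
    """Volume difference (in units of voxel_volume) and Dice similarity
    coefficient (%) between a binary segmentation and a reference mask,
    as used to score the tooth contours. Standard definitions:
    VD = |V_seg - V_ref|, DSC = 2*|A∩B| / (|A| + |B|) * 100."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    vd = abs(int(seg.sum()) - int(ref.sum())) * voxel_volume
    dsc = 200.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())
    return vd, dsc
```

For a perfect match VD is 0 and DSC is 100%; the masks may be 2-D slices or full 3-D volumes.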

  12. Optimization of Compton-suppression and summing schemes for the TIGRESS HPGe detector array

    NASA Astrophysics Data System (ADS)

    Schumaker, M. A.; Svensson, C. E.; Andreoiu, C.; Andreyev, A.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D.; Boston, A. J.; Chakrawarthy, R. S.; Churchman, R.; Drake, T. E.; Finlay, P.; Garrett, P. E.; Grinyer, G. F.; Hackman, G.; Hyland, B.; Jones, B.; Maharaj, R.; Morton, A. C.; Pearson, C. J.; Phillips, A. A.; Sarazin, F.; Scraggs, H. C.; Smith, M. B.; Valiente-Dobón, J. J.; Waddington, J. C.; Watters, L. M.

    2007-04-01

    Methods of optimizing the performance of an array of Compton-suppressed, segmented HPGe clover detectors have been developed which rely on the physical position sensitivity of both the HPGe crystals and the Compton-suppression shields. These relatively simple analysis procedures promise to improve the precision of experiments with the TRIUMF-ISAC Gamma-Ray Escape-Suppressed Spectrometer (TIGRESS). Suppression schemes will improve the efficiency and peak-to-total ratio of TIGRESS for high γ-ray multiplicity events by taking advantage of the 20-fold segmentation of the Compton-suppression shields, while the use of different summing schemes will improve results for a wide range of experimental conditions. The benefits of these methods are compared for many γ-ray energies and multiplicities using a GEANT4 simulation, and the optimal physical configuration of the TIGRESS array under each set of conditions is determined.

  13. Image analysis of ocular fundus for retinopathy characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ushizima, Daniela; Cuadros, Jorge

    2010-02-05

    Automated analysis of ocular fundus images is a common procedure in countries such as England, including both nonemergency examination and retinal screening of patients with diabetes mellitus. This involves digital image capture and transmission of the images to a digital reading center for evaluation and treatment referral. In collaboration with the Optometry Department, University of California, Berkeley, we have tested computer vision algorithms to segment vessels and lesions in ground-truth data (DRIVE database) and hundreds of images of non-macular centric and nonuniform illumination views of the eye fundus from the EyePACS program. Methods under investigation involve mathematical morphology (Figure 1) for image enhancement and pattern matching. Recently, we have focused on more efficient techniques to model the ocular fundus vasculature (Figure 2) using deformable contours. Preliminary results show accurate segmentation of vessels and a high level of true-positive microaneurysms.
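The morphological enhancement step can be illustrated with a black top-hat transform, a standard operator that emphasizes thin dark structures such as vessels against a brighter fundus background. The flat square structuring element and naive loops below are for clarity only and are not taken from the study.

```python
import numpy as np

def grey_dilate(img, k):
    """Grayscale dilation by a flat k x k structuring element
    (sliding maximum filter), with edge padding at the borders."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty(img.shape, dtype=float)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = p[r:r + k, c:c + k].max()
    return out

def grey_erode(img, k):
    """Grayscale erosion (sliding minimum filter), dual of dilation."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty(img.shape, dtype=float)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = p[r:r + k, c:c + k].min()
    return out

def black_tophat(img, k):
    """Morphological closing minus the image: highlights thin dark
    structures (vessels) narrower than the structuring element."""
    closing = grey_erode(grey_dilate(img, k), k)
    return closing - img
```

In a real pipeline the structuring element would be chosen larger than the widest vessel, and the result thresholded to obtain a vessel map.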

  14. Application of differential interference contrast with inverted microscopes to the in vitro perfused nephron.

    PubMed

    Horster, M; Gundlach, H

    1979-12-01

    The study of in vitro perfused individual nephron segments requires a microscope which provides: (1) easy access to the specimen for measurement of cellular solute flux and voltage; (2) an image with high resolution and contrast; (3) optical sectioning of the object at different levels; and (4) rapid recording of the morphological phenomena. This paper describes an example of commercially available apparatus meeting the above requirements and illustrates its efficiency. The microscope is of the inverted type (Zeiss IM 35), equipped with long-working-distance differential interference contrast (DIC) optics and an automatically controlled camera system. The microscopic image exhibits cellular and intercellular details in the unstained, transporting mammalian nephron segments despite their tubular structure and great thickness, and makes function-structure correlations (e.g. cell volume changes) obvious; luminal and contraluminal cell borders are well resolved for controlled microelectrode impalement.

  15. Are the users of social networking sites homogeneous? A cross-cultural study.

    PubMed

    Alarcón-Del-Amo, María-Del-Carmen; Gómez-Borja, Miguel-Ángel; Lorenzo-Romero, Carlota

    2015-01-01

    The growing use of Social Networking Sites (SNS) around the world has made it necessary to understand individuals' behaviors within these sites according to different cultures. Based on a comparative study between two different European countries (The Netherlands versus Spain), a comparison of typologies of networked Internet users has been obtained through a latent segmentation approach. These typologies are based on the frequency with which users perform different activities, their socio-demographic variables, and experience in social networking and interaction patterns. The findings show new insights regarding international segmentation in order to analyse SNS user behaviors in both countries. These results are relevant for marketing strategists eager to use the communication potential of networked individuals and for marketers willing to explore the potential of online networking as a low cost and a highly efficient alternative to traditional networking approaches. For most businesses, expert users could be valuable opinion leaders and potential brand influencers.

  16. Are the users of social networking sites homogeneous? A cross-cultural study

    PubMed Central

    Alarcón-del-Amo, María-del-Carmen; Gómez-Borja, Miguel-Ángel; Lorenzo-Romero, Carlota

    2015-01-01

    The growing use of Social Networking Sites (SNS) around the world has made it necessary to understand individuals' behaviors within these sites according to different cultures. Based on a comparative study between two different European countries (The Netherlands versus Spain), a comparison of typologies of networked Internet users has been obtained through a latent segmentation approach. These typologies are based on the frequency with which users perform different activities, their socio-demographic variables, and experience in social networking and interaction patterns. The findings show new insights regarding international segmentation in order to analyse SNS user behaviors in both countries. These results are relevant for marketing strategists eager to use the communication potential of networked individuals and for marketers willing to explore the potential of online networking as a low cost and a highly efficient alternative to traditional networking approaches. For most businesses, expert users could be valuable opinion leaders and potential brand influencers. PMID:26321971

  17. Simultaneous Tumor Segmentation, Image Restoration, and Blur Kernel Estimation in PET Using Multiple Regularizations

    PubMed Central

    Li, Laquan; Wang, Jian; Lu, Wei; Tan, Shan

    2016-01-01

    Accurate tumor segmentation from PET images is crucial in many radiation oncology applications. Among others, the partial volume effect (PVE) is recognized as one of the most important factors degrading imaging quality and segmentation accuracy in PET. Taking into account that image restoration and tumor segmentation are tightly coupled and can promote each other, we proposed a variational method to solve both problems simultaneously in this study. The proposed method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah segmentation with multiple regularizations. Unlike many existing energy minimization methods using either TV or L2 regularization, the proposed method employed TV regularization over tumor edges to preserve edge information, and L2 regularization inside tumor regions to preserve the smooth change of the metabolic uptake in a PET image. The blur kernel was modeled as an anisotropic Gaussian to address the resolution difference in the transverse and axial directions commonly seen in a clinical PET scanner. The energy functional was rephrased using the Γ-convergence approximation and was iteratively optimized using the alternating minimization (AM) algorithm. The performance of the proposed method was validated on a physical phantom and two clinical datasets with non-Hodgkin’s lymphoma and esophageal cancer, respectively. Experimental results demonstrated that the proposed method had high performance for simultaneous image restoration, tumor segmentation and scanner blur kernel estimation. 
Particularly, the recovery coefficients (RC) of the restored images of the proposed method in the phantom study were close to 1, indicating an efficient recovery of the original blurred images; for segmentation, the proposed method achieved average Dice similarity indexes (DSIs) of 0.79 and 0.80 for the two clinical datasets, respectively; and the relative errors of the estimated blur kernel widths were less than 19% in the transverse direction and 7% in the axial direction. PMID:28603407
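The anisotropic Gaussian blur kernel model is straightforward to construct; the following sketch builds a normalized 2-D point spread function with distinct transverse and axial widths (the function name and parameterization are ours, not the paper's):

```python
import numpy as np

def anisotropic_gaussian_psf(sigma_t, sigma_a, size):
    """Separable 2-D Gaussian blur kernel with distinct transverse
    (sigma_t) and axial (sigma_a) widths, mirroring the idea of a PET
    scanner whose resolution differs between the transverse and axial
    directions. `size` should be odd so the kernel is centered."""
    ax = np.arange(size) - size // 2
    gt = np.exp(-ax**2 / (2.0 * sigma_t**2))   # transverse profile
    ga = np.exp(-ax**2 / (2.0 * sigma_a**2))   # axial profile
    psf = np.outer(ga, gt)          # rows: axial, cols: transverse
    return psf / psf.sum()          # unit DC gain: blurring preserves mass
```

In a semi-blind setting, sigma_t and sigma_a would be the unknowns estimated jointly with the restored image.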

  18. Characterization of Rift Valley Fever Virus MP-12 Strain Encoding NSs of Punta Toro Virus or Sandfly Fever Sicilian Virus

    PubMed Central

    Lihoradova, Olga A.; Indran, Sabarish V.; Kalveram, Birte; Lokugamage, Nandadeva; Head, Jennifer A.; Gong, Bin; Tigabu, Bersabeh; Juelich, Terry L.; Freiberg, Alexander N.; Ikegami, Tetsuro

    2013-01-01

    Rift Valley fever virus (RVFV; genus Phlebovirus, family Bunyaviridae) is a mosquito-borne zoonotic pathogen which can cause hemorrhagic fever, neurological disorders or blindness in humans, and a high rate of abortion in ruminants. MP-12 strain, a live-attenuated candidate vaccine, is attenuated in the M- and L-segments, but the S-segment retains the virulent phenotype. MP-12 was manufactured as an Investigational New Drug vaccine by using MRC-5 cells and encodes a functional NSs gene, the major virulence factor of RVFV which 1) induces a shutoff of the host transcription, 2) inhibits interferon (IFN)-β promoter activation, and 3) promotes the degradation of dsRNA-dependent protein kinase (PKR). MP-12 lacks a marker for differentiation of infected from vaccinated animals (DIVA). Although MP-12 lacking NSs works for DIVA, it does not replicate efficiently in type-I IFN-competent MRC-5 cells, while the use of type-I IFN-incompetent cells may negatively affect its genetic stability. To generate modified MP-12 vaccine candidates encoding a DIVA marker, while still replicating efficiently in MRC-5 cells, we generated recombinant MP-12 encoding Punta Toro virus Adames strain NSs (rMP12-PTNSs) or Sandfly fever Sicilian virus NSs (rMP12-SFSNSs) in place of MP-12 NSs. We have demonstrated that those recombinant MP-12 viruses inhibit IFN-β mRNA synthesis, yet do not promote the degradation of PKR. The rMP12-PTNSs, but not rMP12-SFSNSs, replicated more efficiently than recombinant MP-12 lacking NSs in MRC-5 cells. Mice vaccinated with rMP12-PTNSs or rMP12-SFSNSs induced neutralizing antibodies at a level equivalent to those vaccinated with MP-12, and were efficiently protected from wild-type RVFV challenge. The rMP12-PTNSs and rMP12-SFSNSs did not induce antibodies cross-reactive to anti-RVFV NSs antibody and are therefore applicable to DIVA. 
Thus, rMP12-PTNSs is highly efficacious, replicates efficiently in MRC-5 cells, and encodes a DIVA marker, all of which are important for vaccine development for Rift Valley fever. PMID:23638202

  19. Characterization of Rift Valley fever virus MP-12 strain encoding NSs of Punta Toro virus or sandfly fever Sicilian virus.

    PubMed

    Lihoradova, Olga A; Indran, Sabarish V; Kalveram, Birte; Lokugamage, Nandadeva; Head, Jennifer A; Gong, Bin; Tigabu, Bersabeh; Juelich, Terry L; Freiberg, Alexander N; Ikegami, Tetsuro

    2013-01-01

    Rift Valley fever virus (RVFV; genus Phlebovirus, family Bunyaviridae) is a mosquito-borne zoonotic pathogen which can cause hemorrhagic fever, neurological disorders or blindness in humans, and a high rate of abortion in ruminants. MP-12 strain, a live-attenuated candidate vaccine, is attenuated in the M- and L-segments, but the S-segment retains the virulent phenotype. MP-12 was manufactured as an Investigational New Drug vaccine by using MRC-5 cells and encodes a functional NSs gene, the major virulence factor of RVFV which 1) induces a shutoff of the host transcription, 2) inhibits interferon (IFN)-β promoter activation, and 3) promotes the degradation of dsRNA-dependent protein kinase (PKR). MP-12 lacks a marker for differentiation of infected from vaccinated animals (DIVA). Although MP-12 lacking NSs works for DIVA, it does not replicate efficiently in type-I IFN-competent MRC-5 cells, while the use of type-I IFN-incompetent cells may negatively affect its genetic stability. To generate modified MP-12 vaccine candidates encoding a DIVA marker, while still replicating efficiently in MRC-5 cells, we generated recombinant MP-12 encoding Punta Toro virus Adames strain NSs (rMP12-PTNSs) or Sandfly fever Sicilian virus NSs (rMP12-SFSNSs) in place of MP-12 NSs. We have demonstrated that those recombinant MP-12 viruses inhibit IFN-β mRNA synthesis, yet do not promote the degradation of PKR. The rMP12-PTNSs, but not rMP12-SFSNSs, replicated more efficiently than recombinant MP-12 lacking NSs in MRC-5 cells. Mice vaccinated with rMP12-PTNSs or rMP12-SFSNSs induced neutralizing antibodies at a level equivalent to those vaccinated with MP-12, and were efficiently protected from wild-type RVFV challenge. The rMP12-PTNSs and rMP12-SFSNSs did not induce antibodies cross-reactive to anti-RVFV NSs antibody and are therefore applicable to DIVA. 
Thus, rMP12-PTNSs is highly efficacious, replicates efficiently in MRC-5 cells, and encodes a DIVA marker, all of which are important for vaccine development for Rift Valley fever.

  20. Dietary sodium induces a redistribution of the tubular metabolic workload

    PubMed Central

    Udwan, Khalil; Abed, Ahmed; Roth, Isabelle; Dizin, Eva; Maillard, Marc; Bettoni, Carla; Loffing, Johannes; Wagner, Carsten A.; Edwards, Aurélie

    2017-01-01

    Key points: Body Na+ content is tightly controlled by regulated urinary Na+ excretion. The intrarenal mechanisms mediating adaptation to variations in dietary Na+ intake are incompletely characterized. We confirmed and expanded observations in mice that variations in dietary Na+ intake do not alter the glomerular filtration rate but alter the total and cell-surface expression of major Na+ transporters all along the kidney tubule. Low dietary Na+ intake increased Na+ reabsorption in the proximal tubule and decreased it in more distal kidney tubule segments. High dietary Na+ intake decreased Na+ reabsorption in the proximal tubule and increased it in distal segments with lower energetic efficiency. The abundance of apical transporters and Na+ delivery are the main determinants of Na+ reabsorption along the kidney tubule. Tubular O2 consumption and the efficiency of sodium reabsorption are dependent on sodium diet. Abstract: Na+ excretion by the kidney varies according to dietary Na+ intake. We undertook a systematic study of the effects of dietary salt intake on glomerular filtration rate (GFR) and tubular Na+ reabsorption. We examined the renal adaptive response in mice subjected to 7 days of a low sodium diet (LSD) containing 0.01% Na+, a normal sodium diet (NSD) containing 0.18% Na+ and a moderately high sodium diet (HSD) containing 1.25% Na+. As expected, LSD did not alter measured GFR and increased the abundance of total and cell-surface NHE3, NKCC2, NCC, α-ENaC and cleaved γ-ENaC compared to NSD. Mathematical modelling predicted that tubular Na+ reabsorption increased in the proximal tubule but decreased in the distal nephron because of diminished Na+ delivery. This prediction was confirmed by the natriuretic response to diuretics targeting the thick ascending limb, the distal convoluted tubule or the collecting system. On the other hand, HSD did not alter measured GFR but decreased the abundance of the aforementioned transporters compared to NSD. 
Mathematical modelling predicted that tubular Na+ reabsorption decreased in the proximal tubule but increased in distal segments with lower transport efficiency with respect to O2 consumption. This prediction was confirmed by the natriuretic response to diuretics. The activity of the metabolic sensor adenosine monophosphate‐activated protein kinase (AMPK) was related to the changes in tubular Na+ reabsorption. Our data show that fractional Na+ reabsorption is distributed differently according to dietary Na+ intake and induces changes in tubular O2 consumption and sodium transport efficiency. PMID:28940314

  1. A solar-pumped Nd:YAG laser in the high collection efficiency regime

    NASA Astrophysics Data System (ADS)

    Lando, Mordechai; Kagan, Jacob; Linyekin, Boris; Dobrusin, Vadim

    2003-07-01

    Solar-pumped lasers can be used for space and terrestrial applications. We report on solar side-pumped Nd:YAG laser experiments, which included comprehensive beam quality measurements and demonstrated record collection efficiency and day-long operation. A 6.75 m² segmented primary mirror was mounted on a commercial two-axis positioner and focused the solar radiation towards a stationary non-imaging-optics secondary concentrator, which illuminated a Nd:YAG laser rod. Solar side-pumped laser experiments were conducted in both the low and the high pumping density regimes. The low density system was composed of an 89 × 98 mm² aperture two-dimensional compound parabolic concentrator (CPC) and a 10-mm diameter, 130-mm long Nd:YAG laser rod. The laser emitted up to 46 W and operated continuously for 5 h. The high density system was composed of a three-dimensional CPC with a 98 mm entrance diameter and a 24 mm exit diameter, followed by a two-dimensional CPC with a rectangular 24 × 33 mm² aperture. It pumped a 6-mm diameter, 72-mm long Nd:YAG laser rod, which emitted up to 45 W. The results constitute a record collection efficiency of 6.7 W/m² of primary mirror. We compare the current results to previous solar side-pumped laser experiments, including experiments at higher pumping density but with low collection efficiency. Finally, we present a scaled-up design for a 400 W laser pumped by a solar collection area of 60 m², incorporating simultaneously high collection efficiency and high pumping density.
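The quoted collection-efficiency figure can be checked directly by dividing the reported output powers by the primary mirror area; both systems land near the stated record value:

```python
# Collection efficiency = laser output power / primary mirror area,
# using the figures given in the abstract.
mirror_area = 6.75   # m^2, segmented primary mirror
p_low = 46.0         # W, low-pumping-density system
p_high = 45.0        # W, high-pumping-density system

eff_low = p_low / mirror_area    # ~6.8 W/m^2
eff_high = p_high / mirror_area  # ~6.7 W/m^2
```

Both quotients are in the 6.7–6.8 W/m² range, consistent with the reported record of 6.7 W/m² of primary mirror.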

  2. High-voltage spark carbon-fiber sticky-tape data analyzer

    NASA Technical Reports Server (NTRS)

    Yang, L. C.; Hull, G. G.

    1980-01-01

    An efficient method for detecting carbon fibers collected on a sticky-tape monitor was developed. The fibers were released from a simulated crash fire situation containing carbon fiber composite material. The method utilized the ability of the fiber to initiate a spark across a set of alternately biased high voltage electrodes to electronically count the number of fiber fragments collected on the tape. It was found that the spark, which contains high energy and is of very short duration, is capable of partially damaging or consuming the fiber fragments. It also creates a mechanical disturbance which ejects the fiber from the grid. Both characteristics were helpful in establishing a single discharge pulse for each fiber segment.

  3. The Study of Residential Areas Extraction Based on GF-3 Texture Image Segmentation

    NASA Astrophysics Data System (ADS)

    Shao, G.; Luo, H.; Tao, X.; Ling, Z.; Huang, Y.

    2018-04-01

    The study chooses standard-stripe, dual-polarization SAR images from GF-3 as the basic data. Residential-area extraction processes and methods based on GF-3 image texture segmentation are compared and analyzed. GF-3 image processing includes radiometric calibration, complex data conversion, multi-look processing, and image filtering. A suitability analysis of different filtering methods showed that the Kuan filter is efficient for extracting residential areas. We then calculated and analyzed texture feature vectors using the GLCM (Gray Level Co-occurrence Matrix), whose parameters include the moving window size, step size, and angle; the results show that a window size of 11 × 11, a step of 1, and an angle of 0° are effective and optimal for residential-area extraction. With the FNEA (Fractal Net Evolution Approach), we segmented the GLCM texture images and extracted the residential areas by threshold setting. The residential-area extraction result was verified and assessed with a confusion matrix: overall accuracy is 0.897 and kappa is 0.881. We also extracted the residential areas by SVM classification based on the GF-3 images; its overall accuracy is 0.09 lower than that of the extraction method based on GF-3 texture image segmentation. We conclude that residential-area extraction based on multi-scale segmentation of GF-3 SAR texture images is simple and highly accurate. Because it is difficult to obtain multi-spectral remote sensing images in southern China, where the weather is cloudy and rainy throughout the year, this paper has certain reference significance.
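    As a toy illustration of the GLCM texture step described above, a pure-Python co-occurrence computation at step 1 and angle 0° might look like the sketch below. This is not the authors' implementation (which scans full SAR imagery with an 11 × 11 moving window); the tiny 4 × 4 images and 4 gray levels are illustrative assumptions:

```python
def glcm(image, levels):
    """Gray-level co-occurrence matrix for step 1, angle 0 degrees
    (horizontal neighbor), symmetric and normalized."""
    m = [[0.0] * levels for _ in range(levels)]
    pairs = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
            m[b][a] += 1  # symmetric counting
            pairs += 2
    return [[v / pairs for v in row] for row in m]

def contrast(m):
    """GLCM contrast feature: sum over (i - j)^2 * p(i, j)."""
    return sum((i - j) ** 2 * p
               for i, row in enumerate(m)
               for j, p in enumerate(row))

# A uniform patch has zero contrast; an alternating patch has high contrast.
flat  = [[1, 1, 1, 1]] * 4
edges = [[0, 3, 0, 3]] * 4
print(contrast(glcm(flat, 4)))   # 0.0
print(contrast(glcm(edges, 4)))  # 9.0
```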

  4. Segmental isotopic labeling of HIV-1 capsid protein assemblies for solid state NMR.

    PubMed

    Gupta, Sebanti; Tycko, Robert

    2018-02-01

    Recent studies of noncrystalline HIV-1 capsid protein (CA) assemblies by our laboratory and by Polenova and coworkers (Protein Sci 19:716-730, 2010; J Mol Biol 426:1109-1127, 2014; J Biol Chem 291:13098-13112, 2016; J Am Chem Soc 138:8538-8546, 2016; J Am Chem Soc 138:12029-12032, 2016; J Am Chem Soc 134:6455-6466, 2012; J Am Chem Soc 132:1976-1987, 2010; J Am Chem Soc 135:17793-17803, 2013; Proc Natl Acad Sci USA 112:14617-14622, 2015; J Am Chem Soc 138:14066-14075, 2016) have established the capability of solid state nuclear magnetic resonance (NMR) measurements to provide site-specific structural and dynamical information that is not available from other types of measurements. Nonetheless, the relatively high molecular weight of HIV-1 CA leads to congestion of solid state NMR spectra of fully isotopically labeled assemblies that has been an impediment to further progress. Here we describe an efficient protocol for production of segmentally labeled HIV-1 CA samples in which either the N-terminal domain (NTD) or the C-terminal domain (CTD) is uniformly ¹⁵N,¹³C-labeled. Segmental labeling is achieved by trans-splicing, using the DnaE split intein. Comparisons of two-dimensional solid state NMR spectra of fully labeled and segmentally labeled tubular CA assemblies show substantial improvements in spectral resolution. The molecular structure of HIV-1 assemblies is not significantly perturbed by the single Ser-to-Cys substitution that we introduce between NTD and CTD segments, as required for trans-splicing.

  5. Learning-Based Object Identification and Segmentation Using Dual-Energy CT Images for Security.

    PubMed

    Martin, Limor; Tuysuzoglu, Ahmet; Karl, W Clem; Ishwar, Prakash

    2015-11-01

    In recent years, baggage screening at airports has included the use of dual-energy X-ray computed tomography (DECT), an advanced technology for nondestructive evaluation. The main challenge remains to reliably find and identify threat objects in the bag from DECT data. This task is particularly hard due to the wide variety of objects, the high clutter, and the presence of metal, which causes streaks and shading in the scanner images. Image noise and artifacts are generally much more severe than in medical CT and can lead to splitting of objects and inaccurate object labeling. The conventional approach performs object segmentation and material identification in two decoupled processes. Dual-energy information is typically not used for the segmentation, and object localization is not explicitly used to stabilize the material parameter estimates. We propose a novel learning-based framework for joint segmentation and identification of objects directly from volumetric DECT images, which is robust to streaks, noise and variability due to clutter. We focus on segmenting and identifying a small set of objects of interest with characteristics that are learned from training images, and consider everything else as background. We include data weighting to mitigate metal artifacts and incorporate an object boundary field to reduce object splitting. The overall formulation is posed as a multilabel discrete optimization problem and solved using an efficient graph-cut algorithm. We test the method on real data and show its potential for producing accurate labels of the objects of interest without splits in the presence of metal and clutter.
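    The record above poses joint segmentation and identification as a multilabel discrete optimization problem solved with graph cuts. As a deliberately simplified stand-in (iterated conditional modes on a 1D Potts energy, not the authors' graph-cut solver), the sketch below shows the data-plus-smoothness trade-off that such formulations balance; all costs are made up for illustration:

```python
def icm_potts(data_cost, smooth_weight, labels, iters=10):
    """Minimize sum_i data_cost[i][l_i] + w * sum_i [l_i != l_{i+1}]
    by coordinate descent (ICM) - a toy stand-in for a graph-cut solver."""
    n = len(data_cost)
    lab = [min(labels, key=lambda l: data_cost[i][l]) for i in range(n)]
    for _ in range(iters):
        changed = False
        for i in range(n):
            def energy(l):
                e = data_cost[i][l]
                if i > 0:
                    e += smooth_weight * (l != lab[i - 1])
                if i < n - 1:
                    e += smooth_weight * (l != lab[i + 1])
                return e
            best = min(labels, key=energy)
            if best != lab[i]:
                lab[i], changed = best, True
        if not changed:
            break
    return lab

# Noisy 1D "scan": label 0 = background, 1 = object; one isolated flip.
cost = [[0, 2], [0, 2], [2, 0], [0, 2], [2, 0], [2, 0]]
print(icm_potts(cost, smooth_weight=1.5, labels=[0, 1]))  # [0, 0, 0, 0, 1, 1]
```

    The smoothness term removes the isolated label at index 2, the same mechanism the paper leans on (via its boundary field) to reduce object splitting.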

  6. Segmentation of tumor ultrasound image in HIFU therapy based on texture and boundary encoding

    NASA Astrophysics Data System (ADS)

    Zhang, Dong; Xu, Menglong; Quan, Long; Yang, Yan; Qin, Qianqing; Zhu, Wenbin

    2015-02-01

    It is crucial in high intensity focused ultrasound (HIFU) therapy to detect the tumor precisely with less manual intervention, to enhance therapy efficiency. Ultrasound image segmentation is a difficult task due to signal attenuation, speckle effect, and shadows. This paper presents an unsupervised approach based on texture and boundary encoding customized for ultrasound image segmentation in HIFU therapy. The approach over-segments the ultrasound image into small regions, which are then merged using the principle of minimum description length (MDL). Small regions belonging to the same tumor are clustered as they preserve similar texture features. Merging is completed by obtaining the shortest coding length from encoding the textures and boundaries of these regions in the clustering process. The tumor region is finally selected from the merged regions by a proposed algorithm without manual interaction. The performance of the method is tested on 50 uterine fibroid ultrasound images from HIFU guiding transducers. The segmentations are compared with manual delineations to verify feasibility. The quantitative evaluation with HIFU images shows that the mean true positive rate of the approach is 93.53%, the mean false positive rate is 4.06%, the mean similarity is 89.92%, the mean normalized Hausdorff distance is 3.62%, and the mean normalized maximum average distance is 0.57%. The experiments validate that the proposed method can achieve favorable segmentation without manual initialization and can effectively handle the poor quality of the ultrasound guidance image in HIFU therapy, which indicates that the approach is applicable in HIFU therapy.
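    A minimal sketch of MDL-driven region merging in the spirit of the abstract above (not the authors' texture-and-boundary coding: the per-region code length here is a simplified Gaussian surrogate with an assumed fixed model cost):

```python
import math

def code_len(values, model_cost=8.0):
    """Approximate bits to encode a region: n/2 * log2(1 + variance)
    plus a fixed per-region model cost (a simplified MDL surrogate)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return 0.5 * n * math.log2(var + 1.0) + model_cost

def mdl_merge(regions):
    """Greedily merge the pair of regions that most reduces the total
    description length; stop when no merge shortens the code."""
    regions = [list(r) for r in regions]
    while True:
        best, gain = None, 0.0
        for i in range(len(regions)):
            for j in range(i + 1, len(regions)):
                saved = (code_len(regions[i]) + code_len(regions[j])
                         - code_len(regions[i] + regions[j]))
                if saved > gain:
                    best, gain = (i, j), saved
        if best is None:
            return regions
        i, j = best
        regions[i] += regions.pop(j)

# Over-segmented "texture" values: two similar dark regions, one bright.
segs = [[10, 11, 10], [11, 10, 12], [90, 88, 91]]
print(len(mdl_merge(segs)))  # 2: the two similar regions merge
```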

  7. Disc piezoelectric ceramic transformers.

    PubMed

    Erhart, Jiří; Půlpán, Petr; Doleček, Roman; Psota, Pavel; Lédl, Vít

    2013-08-01

    In this contribution, we present our study on disc-shaped and homogeneously poled piezoelectric ceramic transformers working in planar-extensional vibration modes. Transformers are designed with electrodes divided into wedge, axisymmetrical ring-dot, moonie, smile, or yin-yang segments. Transformation ratio, efficiency, and input and output impedances were measured for low-power signals. Transformer efficiency and transformation ratio were measured as a function of frequency and impedance load in the secondary circuit. Optimum impedance for the maximum efficiency has been found. Maximum efficiency and no-load transformation ratio can reach almost 100% and 52 for the fundamental resonance of ring-dot transformers and 98% and 67 for the second resonance of 2-segment wedge transformers. Maximum efficiency was reached at optimum impedance, which is in the range from 500 Ω to 10 kΩ, depending on the electrode pattern and size. Fundamental vibration mode and its overtones were further studied using frequency-modulated digital holographic interferometry and by the finite element method. Complementary information has been obtained by the infrared camera visualization of surface temperature profiles at higher driving power.

  8. A 3D global-to-local deformable mesh model based registration and anatomy-constrained segmentation method for image guided prostate radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Jinghao; Kim, Sung; Jabbour, Salma

    2010-03-15

    Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration minimized the Euclidean distance of the corresponding nodal points from the global transformation of the deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied to six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. Distance-based and volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to 6.54 mm for ASM. The volume overlap ratio ranged from 79% to 91% for ACRASM and from 44% to 80% for ASM. These data demonstrated that the segmentation results of ACRASM were in better agreement with the corresponding benchmarks than those of ASM. The developed registration algorithm was quantitatively evaluated by comparing the registered target volumes from the pCT to the benchmarks on the CBCT. The mean distance and the root mean square error ranged from 0.38 to 2.2 mm and from 0.45 to 2.36 mm, respectively, between the CBCT images and the registered pCT. The mean overlap ratio of the prostate volumes ranged from 85.2% to 95% after registration. The average time of the ACRASM-based segmentation was under 1 min. The average time of the global transformation was from 2 to 4 min on two 3D volumes, and the average time of the local transformation was from 20 to 34 s on two deformable superquadric mesh models. Conclusions: A novel and fast segmentation and deformable registration method was developed to capture the transformation between the planning and treatment images for external beam radiotherapy of prostate cancers. This method increases the computational efficiency and may provide a foundation for achieving real-time adaptive radiotherapy.
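    The global registration above is driven by mutual information. A minimal, histogram-based mutual information estimate for two toy gray-level images (illustrative only: the bin count and images are assumptions, and a real registration would maximize this score over candidate transformations) can be sketched as:

```python
import math

def mutual_information(img_a, img_b, bins=8):
    """Mutual information of two equally sized 8-bit images from their
    joint histogram: MI = sum p(a,b) * log(p(a,b) / (p(a) * p(b)))."""
    joint, n = {}, 0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            key = (a * bins // 256, b * bins // 256)  # quantize 0..255 to bins
            joint[key] = joint.get(key, 0) + 1
            n += 1
    pa, pb = {}, {}
    for (a, b), c in joint.items():
        pa[a] = pa.get(a, 0) + c
        pb[b] = pb.get(b, 0) + c
    mi = 0.0
    for (a, b), c in joint.items():
        mi += (c / n) * math.log(c * n / (pa[a] * pb[b]))
    return mi

img  = [[0, 64, 128, 255], [255, 128, 64, 0]]
flat = [[128] * 4, [128] * 4]
print(mutual_information(img, img))   # high: an image fully predicts itself
print(mutual_information(img, flat))  # 0.0: a constant image shares nothing
```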

  9. Spatial aggregation of holistically-nested convolutional neural networks for automated pancreas localization and segmentation.

    PubMed

    Roth, Holger R; Lu, Le; Lay, Nathan; Harrison, Adam P; Farag, Amal; Sohn, Andrew; Summers, Ronald M

    2018-04-01

    Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, as a small, soft, and flexible abdominal organ, the pancreas demonstrates very high inter-patient anatomical variability in both its shape and volume. This inhibits traditional automated segmentation methods from achieving high accuracies, especially compared to the performance obtained for other organs, such as the liver, heart, or kidneys. To fill this gap, we present an automated system for pancreas segmentation from 3D computed tomography (CT) volumes that is based on a two-stage cascaded approach: pancreas localization and pancreas segmentation. For the first step, we localize the pancreas within the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach, based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views. The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes the recall. We show that our introduced localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained by two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a (mean ± std. dev.) Dice similarity coefficient (DSC) of 81.27 ± 6.27% in validation, which significantly outperforms both a previous state-of-the-art method and a preliminary version of this work, which report DSCs of 71.80 ± 10.70% and 78.01 ± 8.20%, respectively, using the same dataset. Copyright © 2018. Published by Elsevier B.V.
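    The localization step above fuses per-view HNN probability maps into a recall-maximizing bounding box. A toy 2D sketch of that fusion idea (element-wise max pooling followed by thresholding; the maps and threshold are assumptions, not the paper's 3D pipeline):

```python
def fused_bounding_box(prob_maps, threshold=0.5):
    """Fuse per-pixel probability maps by element-wise max pooling, then
    return the tight bounding box of all above-threshold pixels - a
    recall-oriented choice: any view voting 'organ' keeps the pixel."""
    rows, cols = len(prob_maps[0]), len(prob_maps[0][0])
    box = None
    for r in range(rows):
        for c in range(cols):
            fused = max(m[r][c] for m in prob_maps)
            if fused >= threshold:
                if box is None:
                    box = [r, c, r, c]
                else:
                    box = [min(box[0], r), min(box[1], c),
                           max(box[2], r), max(box[3], c)]
    return box  # [r_min, c_min, r_max, c_max] or None

# Three toy 4x4 "view" maps; each view is confident about a different pixel.
a = [[0.9 if (r, c) == (1, 1) else 0.1 for c in range(4)] for r in range(4)]
b = [[0.8 if (r, c) == (2, 2) else 0.1 for c in range(4)] for r in range(4)]
c = [[0.7 if (r, c) == (1, 2) else 0.1 for c in range(4)] for r in range(4)]
print(fused_bounding_box([a, b, c]))  # [1, 1, 2, 2]
```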

  10. The Great Tohoku-Oki Earthquake and Tsunami of March 11, 2011 in Japan: A Critical Review and Evaluation of the Tsunami Source Mechanism

    NASA Astrophysics Data System (ADS)

    Pararas-Carayannis, George

    2014-12-01

    The great Tohoku-Oki earthquake of March 11, 2011 generated a very destructive and anomalously high tsunami. To understand its source mechanism, an examination was undertaken of the seismotectonics of the region and of the earthquake's focal mechanism, energy release, rupture patterns, and spatial and temporal sequencing and clustering of major aftershocks. It was determined that the great tsunami resulted from a combination of crustal deformations of the ocean floor due to up-thrust tectonic motions, augmented by additional uplift due to the quake's slow and long rupturing process, as well as from large coseismic lateral movements which compressed and deformed the compacted sediments along the accretionary prism of the overriding plate. The deformation occurred randomly and non-uniformly along parallel normal faults and along oblique, en-echelon faults to the earthquake's overall rupture direction—the latter failing in a sequential bookshelf manner with variable slip angles. As the 1992 Nicaragua and the 2004 Sumatra earthquakes demonstrated, such bookshelf failures of sedimentary layers can contribute to anomalously high tsunamis. As with the 1896 tsunami, additional ocean floor deformation and uplift of the sediments was responsible for the higher waves generated by the 2011 earthquake. The efficiency of tsunami generation was greater along the shallow eastern segment of the fault off the Miyagi Prefecture, where most of the earthquake's energy release and deformation occurred, while the segment off the Ibaraki Prefecture—where the rupture process was rapid—released less seismic energy, resulting in less compaction and deformation of sedimentary layers and thus in a tsunami of lesser offshore height.
The greater tsunamigenic efficiency of the 2011 earthquake and high degree of the tsunami's destructiveness along Honshu's coastlines resulted from vertical crustal displacements of more than 10 m due to up-thrust faulting and from lateral compression and folding of sedimentary layers in an east-southeast direction which contributed additional uplift estimated at about 7 m—mainly along the leading segment of the accretionary prism of the overriding tectonic plate.

  11. TU-AB-202-05: GPU-Based 4D Deformable Image Registration Using Adaptive Tetrahedral Mesh Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Z; Zhuang, L; Gu, X

    Purpose: Deformable image registration (DIR) is employed today as an automated and effective segmentation method to transfer tumor or organ contours from the planning image to daily images, instead of manual segmentation. However, the computational time and accuracy of current DIR approaches are still insufficient for online adaptive radiation therapy (ART), which requires real-time, high-quality image segmentation, especially for large datasets of 4D-CT images. The objective of this work is to propose a new DIR algorithm, with fast computational speed and high accuracy, using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate the adaptive tetrahedral mesh based on the image features of a reference phase of the 4D-CT, so that the deformation can be well captured and accurately diffused from the mesh vertices to the voxels of the image volume. Subsequently, the deformation vector fields (DVF) and the other phases of the 4D-CT can be obtained by matching each phase of the target 4D-CT images with the corresponding deformed reference phase. The proposed 4D DIR method is implemented on a GPU, significantly increasing computational efficiency through parallel computing. Results: A 4D NCAT digital phantom was used to test the efficiency and accuracy of our method. Both the image and DVF results show that the fine structures and shapes of the lung are well preserved and the tumor position is well captured, i.e., the 3D distance error is 1.14 mm. Compared to a previous voxel-based CPU implementation of DIR, such as demons, the proposed method is about 160x faster for registering a 10-phase 4D-CT with a phase dimension of 256×256×150. Conclusion: The proposed 4D DIR method uses feature-based meshing and GPU-based parallelism, and demonstrates the capability to compute both high-quality image and motion results with a significant improvement in computational speed.

  12. Joint tumor segmentation and dense deformable registration of brain MR images.

    PubMed

    Parisot, Sarah; Duffau, Hugues; Chemouny, Stéphane; Paragios, Nikos

    2012-01-01

    In this paper we propose a novel graph-based concurrent registration and segmentation framework. Registration is modeled with a pairwise graphical model formulation that is modular with respect to the data and regularization terms. Segmentation is addressed by adopting a similar graphical model, using image-based classification techniques while producing a smooth solution. The two problems are coupled via a relaxation of the registration criterion in the presence of tumors, as well as a segmentation-through-registration term aiming at the separation between healthy and diseased tissues. Efficient linear programming is used to solve both problems simultaneously. State-of-the-art results demonstrate the potential of our method on a large and challenging low-grade glioma data set.

  13. 3D Slicer as a tool for interactive brain tumor segmentation.

    PubMed

    Kikinis, Ron; Pieper, Steve

    2011-01-01

    User interaction is required for reliable segmentation of brain tumors in clinical practice and in clinical research. By incorporating current research tools, 3D Slicer provides a set of interactive, easy to use tools that can be efficiently used for this purpose. One of the modules of 3D Slicer is an interactive editor tool, which contains a variety of interactive segmentation effects. Use of these effects for fast and reproducible segmentation of a single glioblastoma from magnetic resonance imaging data is demonstrated. The innovation in this work lies not in the algorithm, but in the accessibility of the algorithm because of its integration into a software platform that is practical for research in a clinical setting.

  14. Apparatus and method for ultrasonic treatment of a liquid

    DOEpatents

    Chandler, Darrell P.; Posakony, Gerald J.; Bond, Leonard J.; Bruckner-Lea, Cynthia J.

    2006-04-04

    The present invention is an apparatus for ultrasonically treating a liquid to generate a product. The apparatus is capable of treating a continuously-flowing, or intermittently-flowing, liquid along a line segment coincident with the flow path of the liquid. The apparatus has one or more ultrasonic transducers positioned asymmetrically about the line segment. The ultrasonic field encompasses the line segment and the ultrasonic energy may be concentrated along the line segment. Lysing treatments have been successfully achieved with efficiencies of greater than 99% using ultrasound at MHz frequencies without erosion or heating problems and without the need for chemical or mechanical pretreatment, or contrast agents. The present invention overcomes drawbacks of current ultrasonic treatments beyond lysing and opens up new sonochemical and sonophysical processing opportunities.

  15. Basic test framework for the evaluation of text line segmentation and text parameter extraction.

    PubMed

    Brodić, Darko; Milivojević, Dragan R; Milivojević, Zoran

    2010-01-01

    Text line segmentation is an essential stage in off-line optical character recognition (OCR) systems. It is key because inaccurately segmented text lines will lead to OCR failure. Text line segmentation of handwritten documents is a complex and diverse problem, complicated by the nature of handwriting. Hence, text line segmentation is a leading challenge in handwritten document image processing. Due to inconsistencies in the measurement and evaluation of text segmentation algorithm quality, some basic set of measurement methods is required. Currently, there is no commonly accepted one, and all algorithm evaluation is custom oriented. In this paper, a basic test framework for the evaluation of text feature extraction algorithms is proposed. This test framework consists of a few experiments primarily linked to text line segmentation, skew rate, and reference text line evaluation. Although they are mutually independent, the results obtained are strongly cross-linked. In the end, its suitability for different types of letters and languages as well as its adaptability are its main advantages. Thus, the paper presents an efficient evaluation method for text analysis algorithms.

  16. Basic Test Framework for the Evaluation of Text Line Segmentation and Text Parameter Extraction

    PubMed Central

    Brodić, Darko; Milivojević, Dragan R.; Milivojević, Zoran

    2010-01-01

    Text line segmentation is an essential stage in off-line optical character recognition (OCR) systems. It is key because inaccurately segmented text lines will lead to OCR failure. Text line segmentation of handwritten documents is a complex and diverse problem, complicated by the nature of handwriting. Hence, text line segmentation is a leading challenge in handwritten document image processing. Due to inconsistencies in the measurement and evaluation of text segmentation algorithm quality, some basic set of measurement methods is required. Currently, there is no commonly accepted one, and all algorithm evaluation is custom oriented. In this paper, a basic test framework for the evaluation of text feature extraction algorithms is proposed. This test framework consists of a few experiments primarily linked to text line segmentation, skew rate, and reference text line evaluation. Although they are mutually independent, the results obtained are strongly cross-linked. In the end, its suitability for different types of letters and languages as well as its adaptability are its main advantages. Thus, the paper presents an efficient evaluation method for text analysis algorithms. PMID:22399932
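    The two records above evaluate text line segmentation algorithms. As background, one classic baseline such a framework might exercise is horizontal projection-profile line segmentation; a minimal sketch on a toy binary page (not code from the paper) is:

```python
def segment_lines(binary_image):
    """Split a binary page image (1 = ink) into text-line row ranges
    using its horizontal projection profile: a line is a maximal run
    of consecutive rows whose ink count is nonzero."""
    profile = [sum(row) for row in binary_image]
    lines, start = [], None
    for i, ink in enumerate(profile):
        if ink and start is None:
            start = i                     # line begins
        elif not ink and start is not None:
            lines.append((start, i - 1))  # line ends
            start = None
    if start is not None:
        lines.append((start, len(profile) - 1))
    return lines

page = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],  # line 1
    [1, 1, 0, 0],  # line 1
    [0, 0, 0, 0],
    [0, 0, 1, 1],  # line 2
    [0, 0, 0, 0],
]
print(segment_lines(page))  # [(1, 2), (4, 4)]
```

    Skewed handwriting breaks this baseline, which is exactly why the framework's skew-rate experiments matter.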

  17. Automatic layer segmentation of H&E microscopic images of mice skin

    NASA Astrophysics Data System (ADS)

    Hussein, Saif; Selway, Joanne; Jassim, Sabah; Al-Assam, Hisham

    2016-05-01

    Mammalian skin is a complex organ composed of a variety of cells and tissue types. The automatic detection and quantification of changes in skin structures has a wide range of applications for biological research. To accurately segment and quantify nuclei, sebaceous glands, hair follicles, and other skin structures, a reliable segmentation of the different skin layers is needed. This paper presents an efficient segmentation algorithm to segment the three main layers of mouse skin, namely the epidermis, dermis, and subcutaneous layers. It also segments the epidermis layer into two sub-layers, the basal and cornified layers. The proposed algorithm uses an adaptive colour deconvolution technique on H&E-stained images to separate different tissue structures; inter-modes and Otsu thresholding techniques were effectively combined to segment the layers. It then uses a set of morphological and logical operations on each layer to remove unwanted objects. A dataset of 7000 H&E microscopic images of mutant and wild-type mice was used to evaluate the effectiveness of the algorithm. Experimental results examined by domain experts have confirmed the viability of the proposed algorithms.
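    Otsu thresholding, one of the techniques combined in the algorithm above, can be sketched in a few lines; the toy pixel values below are assumptions for illustration, not data from the paper:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: pick the threshold t maximizing the between-class
    variance w0 * w1 * (mu0 - mu1)^2 over the gray-level histogram,
    where class 0 holds levels <= t."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = cum = 0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        cum += t * hist[t]
        mu0 = cum / w0
        mu1 = (total_sum - cum) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal toy histogram: dark tissue around 40, bright background around 200.
pixels = [38, 40, 42, 41, 39] * 10 + [198, 200, 202, 201, 199] * 10
t = otsu_threshold(pixels)
print(40 < t < 198)  # True: the threshold falls between the two modes
```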

  18. Multiresolution saliency map based object segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Wang, Xin; Dai, ZhenYou

    2015-11-01

    Salient object detection and segmentation have gained increasing research interest in recent years. A saliency map can be obtained from different models presented in previous studies, and the most salient region (MSR) in an image can be extracted from it. This MSR, generally a rectangle, can be used to initialize parameters for object segmentation algorithms. However, to our knowledge, all of those saliency maps are represented at a single resolution, although some models have introduced multiscale principles into the calculation process. Furthermore, some segmentation methods, such as the well-known GrabCut algorithm, need more iteration time or additional interaction to get more precise results without predefined pixel types. A concept of a multiresolution saliency map is introduced. This saliency map is provided in a multiresolution format, which naturally follows the principle of the human visual mechanism. Moreover, the points in this map can be utilized to initialize parameters for GrabCut segmentation by labeling the feature pixels automatically. Both the computing speed and segmentation precision are evaluated. The results imply that this multiresolution saliency map based object segmentation method is simple and efficient.

  19. Demonstration of an efficient cooling approach for SBIRS-Low

    NASA Astrophysics Data System (ADS)

    Nieczkoski, S. J.; Myers, E. A.

    2002-05-01

    The Space Based Infrared System-Low (SBIRS-Low) segment is a near-term Air Force program for developing and deploying a constellation of low-earth orbiting observation satellites with gimbaled optics cooled to cryogenic temperatures. The optical system design and requirements present unique challenges that make conventional cooling approaches both complicated and risky. The Cryocooler Interface System (CIS) provides a remote, efficient, and interference-free means of cooling the SBIRS-Low optics. Technology Applications Inc. (TAI), through a two-phase Small Business Innovative Research (SBIR) program with Air Force Research Laboratory (AFRL), has taken the CIS from initial concept feasibility through the design, build, and test of a prototype system. This paper presents the development and demonstration testing of the prototype CIS. Prototype system testing has demonstrated the high efficiency of this cooling approach, making it an attractive option for SBIRS-Low and other sensitive optical and detector systems that require low-impact cryogenic cooling.

  20. A European mobile satellite system concept exploiting CDMA and OBP

    NASA Technical Reports Server (NTRS)

    Vernucci, A.; Craig, A. D.

    1993-01-01

    This paper describes a novel Land Mobile Satellite System (LMSS) concept applicable to networks allowing access to a large number of gateway stations ('Hubs'), utilizing low-cost Very Small Aperture Terminals (VSAT's). Efficient operation of the Forward-Link (FL) repeater can be achieved by adopting a synchronous Code Division Multiple Access (CDMA) technique, whereby inter-code interference (self-noise) is virtually eliminated by synchronizing orthogonal codes. However, with a transparent FL repeater, the requirements imposed by the highly decentralized ground segment can lead to significant efficiency losses. The adoption of a FL On-Board Processing (OBP) repeater is proposed as a means of largely recovering this efficiency impairment. The paper describes the network architecture, the system design and performance, the OBP functions and impact on implementation. The proposed concept, applicable to a future generation of the European LMSS, was developed in the context of a European Space Agency (ESA) study contract.
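    The synchronous CDMA idea above relies on synchronized orthogonal codes eliminating inter-code interference (self-noise). A small sketch with Walsh-Hadamard spreading codes illustrates why despreading is interference-free when users are chip-synchronous (the 4-user setup and unit-amplitude bits are illustrative assumptions):

```python
def hadamard(n):
    """Walsh-Hadamard matrix of order n (n a power of two); its rows
    are mutually orthogonal spreading codes."""
    h = [[1]]
    while len(h) < n:
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

def despread(signal, code):
    """Correlate the composite signal with one user's code."""
    return sum(s * c for s, c in zip(signal, code)) / len(code)

codes = hadamard(4)
bits = [1, -1, 1, -1]  # one data bit per user

# Chip-synchronous superposition of all four users' spread signals.
composite = [sum(b * c for b, c in zip(bits, col)) for col in zip(*codes)]
recovered = [despread(composite, code) for code in codes]
print(recovered)  # [1.0, -1.0, 1.0, -1.0]: no inter-code interference
```

    With any chip offset between users the codes are no longer orthogonal, which is why the paper's scheme synchronizes the codes on the forward link.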

  1. Energy efficient engine combustor test hardware detailed design report

    NASA Technical Reports Server (NTRS)

    Zeisser, M. H.; Greene, W.; Dubiel, D. J.

    1982-01-01

    The combustor for the Energy Efficient Engine is an annular, two-zone component. As designed, it either meets or exceeds all program goals for performance, safety, durability, and emissions, with the exception of oxides of nitrogen. When compared to the configuration investigated under the NASA-sponsored Experimental Clean Combustor Program, which was used as a basis for design, the Energy Efficient Engine combustor component has several technology advancements. The prediffuser section is designed with short, strutless, curved-walls to provide a uniform inlet airflow profile. Emissions control is achieved by a two-zone combustor that utilizes two types of fuel injectors to improve fuel atomization for more complete combustion. The combustor liners are a segmented configuration to meet the durability requirements at the high combustor operating pressures and temperatures. Liner cooling is accomplished with a counter-parallel FINWALL technique, which provides more effective heat transfer with less coolant.

  2. Efficient Exact Inference With Loss Augmented Objective in Structured Learning.

    PubMed

    Bauer, Alexander; Nakajima, Shinichi; Müller, Klaus-Robert

    2016-08-19

    Structural support vector machine (SVM) is an elegant approach for building complex and accurate models with structured outputs. However, its applicability relies on the availability of efficient inference algorithms: the state-of-the-art training algorithms repeatedly perform inference to compute a subgradient or to find the most violating configuration. In this paper, we propose an exact inference algorithm for maximizing nondecomposable objectives due to a special type of high-order potential having a decomposable internal structure. As an important application, our method covers loss-augmented inference, which enables the slack and margin scaling formulations of structural SVM with a variety of dissimilarity measures, e.g., Hamming loss, precision and recall, Fβ-loss, intersection over union, and many other functions that can be efficiently computed from the contingency table. We demonstrate the advantages of our approach in natural language parsing and sequence segmentation applications.
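    For the Hamming-loss case mentioned above, loss-augmented inference stays exact because the loss decomposes per position and can simply be folded into the unary scores of a Viterbi pass. A minimal chain-model sketch (the scores, labels, and gold sequence are made up for illustration, not the paper's parser or segmenter):

```python
def loss_augmented_viterbi(unary, pairwise, gold, labels):
    """Most violating labeling for margin-rescaled structural SVM with
    Hamming loss: argmax_y score(y) + sum_i [y_i != gold_i]. The loss
    adds 1 to each non-gold unary score, so Viterbi remains exact."""
    n = len(unary)
    dp = [{l: unary[0][l] + (l != gold[0]) for l in labels}]
    back = []
    for i in range(1, n):
        row, bp = {}, {}
        for l in labels:
            prev = max(labels, key=lambda p: dp[-1][p] + pairwise[(p, l)])
            row[l] = (dp[-1][prev] + pairwise[(prev, l)]
                      + unary[i][l] + (l != gold[i]))
            bp[l] = prev
        dp.append(row)
        back.append(bp)
    last = max(labels, key=lambda l: dp[-1][l])
    path = [last]
    for bp in reversed(back):
        path.append(bp[path[-1]])
    return path[::-1]

labels = ["A", "B"]
unary = [{"A": 1.0, "B": 0.0}, {"A": 0.2, "B": 0.3}, {"A": 0.0, "B": 1.0}]
pairwise = {(p, l): (0.1 if p == l else 0.0) for p in labels for l in labels}
gold = ["A", "A", "B"]
print(loss_augmented_viterbi(unary, pairwise, gold, labels))  # ['B', 'B', 'B']
```

    The augmented argmax deliberately disagrees with the gold labeling where it can afford to, which is exactly the "most violating configuration" the cutting-plane trainer needs.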

  3. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, K.M.; Meservey, R.H.; Landon, M.D.

    1999-08-10

    A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D and D), and remediating a nuclear facility, involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal locations, orientations, and sequence of the segmentation and packaging of the contaminated items are determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are then segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded. 3 figs.
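    The patent does not spell out its optimization algorithms, but the packing subproblem can be illustrated with the classic first-fit-decreasing heuristic as a simple stand-in. Segment volumes and the container capacity below are hypothetical integer units:

```python
def first_fit_decreasing(volumes, capacity):
    """Assign segment volumes to containers using first-fit-decreasing.

    Segments are considered largest-first; each goes into the first
    container with room, and a new container is opened only when none fits.
    Returns (assignment: segment index -> container index, container count).
    """
    remaining = []   # remaining capacity per opened container
    assignment = {}
    for idx, vol in sorted(enumerate(volumes), key=lambda x: -x[1]):
        for c, room in enumerate(remaining):
            if vol <= room:
                remaining[c] -= vol
                assignment[idx] = c
                break
        else:
            remaining.append(capacity - vol)
            assignment[idx] = len(remaining) - 1
    return assignment, len(remaining)
```

    A real D and D planner would add the geometric, cut-count, and dose constraints the abstract mentions; this sketch only shows the density objective.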

  4. Sloped terrain segmentation for autonomous drive using sparse 3D point cloud.

    PubMed

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Jeong, Young-Sik; Um, Kyhyun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach that minimizes comparisons between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed in about 19.31 ms per frame.
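    The quantize-then-flatten step described above can be sketched as follows. This is a simplified stand-in for the paper's pipeline: the voxel size and ground tolerance are assumed values, and a point is labeled ground simply when it lies near the minimum height of its (x, y) column, rather than via the paper's voxel-group criterion:

```python
def lowermost_heightmap(points, voxel=0.5):
    """Quantize (x, y, z) points into voxel columns and keep the lowest z.

    Returns a dict mapping each occupied (ix, iy) cell to the minimum z
    observed in that column -- a 'lowermost heightmap' of the scan.
    """
    heightmap = {}
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel))
        if key not in heightmap or z < heightmap[key]:
            heightmap[key] = z
    return heightmap

def label_ground(points, heightmap, voxel=0.5, tol=0.2):
    """Mark a point as ground when it is within `tol` of its column minimum."""
    labels = []
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel))
        labels.append(z - heightmap[key] <= tol)
    return labels
```

    Collapsing each column to one height is what makes the per-frame cost low: the comparison work scales with the number of occupied 2D cells, not with the raw point count.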

  5. Research on segmentation based on multi-atlas in brain MR image

    NASA Astrophysics Data System (ADS)

    Qian, Yuejing

    2018-03-01

    Accurate segmentation of specific tissues in brain MR images can be effectively achieved with multi-atlas-based segmentation methods, and the accuracy mainly depends on the image registration accuracy and the fusion scheme. This paper proposes an automatic segmentation method based on multiple atlases for brain MR images. First, to improve the registration accuracy in the area to be segmented, we employ a target-oriented image registration method for refinement. Then, in the label fusion step, we propose a new algorithm that detects abnormal sparse patches and discards the corresponding abnormal sparse coefficients; fusion is then performed with the remaining sparse coefficients combined with a multipoint label estimator strategy. The performance of the proposed method was compared with those of the nonlocal patch-based label fusion method (Nonlocal-PBM), the sparse patch-based label fusion method (Sparse-PBM), and the majority voting method (MV). Based on our experimental results, the proposed method is efficient for brain MR image segmentation compared with the MV, Nonlocal-PBM, and Sparse-PBM methods.
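    Of the fusion schemes compared above, majority voting (MV) is the simplest baseline: each registered atlas contributes one candidate label per target voxel, and the most frequent label wins. A minimal sketch of that baseline (not the paper's sparse-patch method):

```python
from collections import Counter

def majority_vote(atlas_labels):
    """Fuse per-voxel labels from multiple registered atlases.

    atlas_labels: list of equal-length label lists, one per atlas, with
    one entry per voxel of the target image. Returns, for each voxel,
    the most frequent label across atlases (ties broken by first seen).
    """
    fused = []
    for votes in zip(*atlas_labels):
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused
```

    Patch-based methods such as Nonlocal-PBM and Sparse-PBM replace the flat vote with weights derived from patch similarity, which is where the paper's abnormal-patch detection intervenes.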

  6. Non-coding extremities of the seven influenza virus type C vRNA segments: effect on transcription and replication by the type C and type A polymerase complexes

    PubMed Central

    Crescenzo-Chaigne, Bernadette; Barbezange, Cyril; van der Werf, Sylvie

    2008-01-01

    Background: The transcription/replication of the influenza viruses implicates the terminal nucleotide sequences of viral RNA, which comprise sequences at the extremities conserved among the genomic segments as well as variable 3' and 5' non-coding (NC) regions. The plasmid-based system for the in vivo reconstitution of functional ribonucleoproteins, upon expression of viral-like RNAs together with the nucleoprotein and polymerase proteins, has been widely used to analyze transcription/replication of influenza viruses. It was thus shown that the type A polymerase could transcribe and replicate type A, B, or C vRNA templates, whereas neither type B nor type C polymerases were able to transcribe and replicate type A templates efficiently. Here we studied the importance of the NC regions from the seven segments of type C influenza virus for efficient transcription/replication by the type A and C polymerases. Results: The NC sequences of the seven genomic segments of the type C influenza virus C/Johannesburg/1/66 strain were found to be more variable in length than those of the type A and B viruses. The levels of transcription/replication of viral-like vRNAs harboring the NC sequences of the respective type C virus segments flanking the CAT reporter gene were comparable in the presence of either type C or type A polymerase complexes, except for the NS- and PB2-like vRNAs. For the NS-like vRNA, the transcription/replication level was higher after introduction of a U residue at position 6 in the 5' NC region, as for all other segments. For the PB2-like vRNA, the CAT expression level was particularly reduced with the type C polymerase. Analysis of mutants of the 5' NC sequence in the PB2-like vRNA, the shortest 5' NC sequence among the seven segments, showed that additional sequences within the PB2 ORF were essential for the efficiency of transcription but not replication by the type C polymerase complex.
Conclusion: In the context of a PB2-like reporter vRNA template, the sequence upstream of the polyU stretch plays a role in the transcription/replication process by the type C polymerase complex. PMID:18973655

  7. Improved 3D live-wire method with application to 3D CT chest image analysis

    NASA Astrophysics Data System (ADS)

    Lu, Kongkuo; Higgins, William E.

    2006-03-01

    The definition of regions of interest (ROIs), such as suspect cancer nodules or lymph nodes in 3D CT chest images, is often difficult because of the complexity of the phenomena that give rise to them. Manual slice tracing has been used widely for years for such problems, because it is easy to implement and guaranteed to work. But the manual method is extremely time-consuming, especially for high-resolution 3D images which may have hundreds of slices, and it is subject to operator biases. Numerous automated image-segmentation methods have been proposed, but they are generally strongly application dependent, and even the "most robust" methods have difficulty in defining complex anatomical ROIs. To address this problem, the semi-automatic interactive paradigm referred to as "live wire" segmentation has been proposed by researchers. In live-wire segmentation, the human operator interactively defines an ROI's boundary guided by an active automated method which suggests what to define. This process in general is far faster, more reproducible, and more accurate than manual tracing, while, at the same time, permitting the definition of complex ROIs having ill-defined boundaries. We propose a 2D live-wire method employing an improved cost function relative to previous works. In addition, we define a new 3D live-wire formulation that enables rapid definition of 3D ROIs. The method only requires the human operator to consider a few slices in general. Experimental results indicate that the new 2D and 3D live-wire approaches are efficient, allow for high reproducibility, and are reliable for 2D and 3D object segmentation.
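    At the core of any live-wire tool is a shortest-path search over a pixel graph whose weights come from a local cost function (low cost along strong edges, so the optimal path snaps to object boundaries). The sketch below runs plain Dijkstra on a 4-connected grid with a hypothetical per-pixel cost array; the improved cost function of this paper is not reproduced:

```python
import heapq

def live_wire_path(cost, start, end):
    """Minimum-cost pixel path from start to end on a 2D cost grid.

    This is the search a live-wire tool re-runs as the user moves the
    cursor; cost[r][c] is the (precomputed) local cost of pixel (r, c).
    """
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], end
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

    In an interactive tool the distances from the current seed are typically cached, so only the backtracking step is repeated per cursor position.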

  8. Using deep learning in hyperspectral image segmentation, classification, and detection

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Su, Zhenyu

    2018-02-01

    Recent years have shown that deep learning neural networks are a valuable tool in the field of computer vision. Deep learning methods can be used in remote sensing applications such as land cover classification, vehicle detection in satellite images, and hyperspectral image classification. This paper addresses the use of deep learning artificial neural networks in satellite image segmentation. Image segmentation plays an important role in image processing. The hue of a remote sensing image often varies widely, which results in poor display of the images in a VR environment. Image segmentation is a preprocessing technique applied to the original images that splits the image into many parts of differing hue in order to unify the color. Several computational models based on supervised, unsupervised, parametric, and probabilistic region-based image segmentation techniques have been proposed. Recently, the machine learning technique known as deep learning with convolutional neural networks has been widely used for the development of efficient and automatic image segmentation models. In this paper, we focus on the study of deep convolutional neural networks and their variants for automatic image segmentation rather than traditional image segmentation strategies.

  9. Device for absorbing mechanical shock

    DOEpatents

    Newlon, Charles E.

    1980-01-01

    This invention is a comparatively inexpensive but efficient shock-absorbing device having special application to the protection of shipping and storage cylinders. In a typical application, two of the devices are strapped to a cylinder to serve as saddle-type supports for the cylinder during storage and to protect the cylinder in the event it is dropped during lifting or lowering operations. In its preferred form, the invention includes a hardwood plank whose grain runs in the longitudinal direction. The basal portion of the plank is of solid cross-section, whereas the upper face of the plank is cut away to form a concave surface fittable against the sidewall of a storage cylinder. The concave surface is divided into a series of segments by transversely extending, throughgoing relief slots. A layer of elastomeric material is positioned on the concave face, the elastomer being extrudable into slots when pressed against the segments by a preselected pressure characteristic of a high-energy impact. The compressive, tensile, and shear properties of the hardwood and the elastomer are utilized in combination to provide a surprisingly high energy-absorption capability.

  10. Device for absorbing mechanical shock

    DOEpatents

    Newlon, C.E.

    1979-08-29

    This invention is a comparatively inexpensive but efficient shock-absorbing device having special application to the protection of shipping and storage cylinders. In a typical application, two of the devices are strapped to a cylinder to serve as saddle-type supports for the cylinder during storage and to protect the cylinder in the event it is dropped during lifting or lowering operations. In its preferred form, the invention includes a hardwood plank whose grain runs in the longitudinal direction. The basal portion of the plank is of solid cross-section, whereas the upper face of the plank is cut away to form a concave surface fittable against the sidewall of a storage cylinder. The concave surface is divided into a series of segments by transversely extending, throughgoing relief slots. A layer of elastomeric material is positioned on the concave face, the elastomer being extrudable into slots when pressed against the segments by a preselected pressure characteristic of a high-energy impact. The compressive, tensile, and shear properties of the hardwood and the elastomer are utilized in combination to provide a surprisingly high energy-absorption capability.

  11. ASM Based Synthesis of Handwritten Arabic Text Pages

    PubMed Central

    Al-Hamadi, Ayoub; Elzobi, Moftah; El-etriby, Sherif; Ghoneim, Ahmed

    2015-01-01

    Document analysis tasks, such as text recognition, word spotting, or segmentation, are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods that have individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents and detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever insufficient natural ground-truthed data is available. PMID:26295059

  12. Uncertainty aggregation and reduction in structure-material performance prediction

    NASA Astrophysics Data System (ADS)

    Hu, Zhen; Mahadevan, Sankaran; Ao, Dan

    2018-02-01

    An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
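    The adaptive segment-by-segment updating idea can be illustrated with a deliberately simplified sketch: a discretized posterior over a single parameter, a Gaussian likelihood, and a mean-absolute-residual check standing in for the paper's model-validation step. The forward model `model(theta, x)`, the noise level, and the threshold are all hypothetical:

```python
import math

def bayes_update_segments(theta_grid, prior, segments, model, noise_std, threshold):
    """Sequentially update a discretized posterior over `theta_grid`.

    `segments` is a list of observation segments, each a list of (x, y)
    pairs. A segment contributes to the update only when the mean absolute
    residual at the current MAP estimate passes `threshold` -- unreliable
    segments are skipped, mirroring the validation gate described above.
    """
    post = list(prior)
    for seg in segments:
        theta_map = theta_grid[post.index(max(post))]
        resid = sum(abs(y - model(theta_map, x)) for x, y in seg) / len(seg)
        if resid > threshold:
            continue  # model judged unreliable on this segment
        for i, th in enumerate(theta_grid):
            like = 1.0
            for x, y in seg:
                like *= math.exp(-0.5 * ((y - model(th, x)) / noise_std) ** 2)
            post[i] *= like
        total = sum(post)
        post = [p / total for p in post]
    return post
```

    The real framework operates on a Bayesian network linking structural and material models, but the gate-then-update pattern is the same.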

  13. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming when many image processing parameters are involved. An expert can tune parameters sequentially to get the desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels are present in an image or when images vary in their characteristics due to different acquisition conditions; parameters then need to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is evaluated on a benchmark data set that contains challenging image distortions of increasing severity. This enables us to compare different standard image segmentation algorithms in feedback vs. feedforward implementations by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates the robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts.
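    As a toy illustration of feedback-based adaptation against abstract ground truth, the sketch below bisects a single segmentation threshold until a hypothetical `count_objects` routine reports the expected number of objects (e.g. "the image contains N cells"). It assumes the object count decreases monotonically as the threshold rises, which real pipelines may not satisfy:

```python
def adapt_threshold(image, target_count, count_objects, lo=0, hi=255, iters=20):
    """Feedback loop: bisect a threshold until the object count matches.

    `count_objects(image, t)` is a hypothetical segmentation-plus-counting
    routine; the abstract ground truth is only the expected object count.
    Returns the threshold with the smallest count error seen.
    """
    best_t, best_err = lo, float("inf")
    for _ in range(iters):
        t = (lo + hi) / 2
        n = count_objects(image, t)
        err = abs(n - target_count)
        if err < best_err:
            best_t, best_err = t, err
        if n == target_count:
            break
        if n > target_count:
            lo = t   # too many objects: raise the threshold
        else:
            hi = t   # too few objects: lower the threshold
    return best_t
```

    The paper's framework generalizes this single-parameter feedback to whole parameter vectors and multiple quality criteria.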

  14. Targeted tandem duplication of a large chromosomal segment in Aspergillus oryzae.

    PubMed

    Takahashi, Tadashi; Sato, Atsushi; Ogawa, Masahiro; Hanya, Yoshiki; Oguma, Tetsuya

    2014-08-01

    We describe here the first successful construction of a targeted tandem duplication of a large chromosomal segment in Aspergillus oryzae. The targeted tandem chromosomal duplication was achieved by using strains that had a 5'-deleted pyrG upstream of the region targeted for tandem chromosomal duplication and a 3'-deleted pyrG downstream of the target region. Consequently, strains bearing a 210-kb targeted tandem chromosomal duplication near the centromeric region of chromosome 8 and strains bearing a targeted tandem chromosomal duplication of a 700-kb region of chromosome 2 were successfully constructed. The strains bearing the tandem chromosomal duplication were efficiently obtained from the regenerated protoplast of the parental strains. However, the generation of the chromosomal duplication did not depend on the introduction of double-stranded breaks (DSBs) by I-SceI. The chromosomal duplications of these strains were stably maintained after five generations of culture under nonselective conditions. The strains bearing the tandem chromosomal duplication in the 700-kb region of chromosome 2 showed highly increased protease activity in solid-state culture, indicating that the duplication of large chromosomal segments could be a useful new breeding technology and gene analysis method.

  15. Scope & Limitations of Fmoc Chemistry SPPS-Based Approaches to the Total Synthesis of Insulin Lispro via Ester Insulin

    PubMed Central

    Dhayalan, Balamurugan; Mandal, Kalyaneswar; Rege, Nischay; Weiss, Michael A.; Eitel, Simon H.; Meier, Thomas; Schoenleber, Ralph O.; Kent, Stephen B.H.

    2017-01-01

    We have systematically explored three approaches based on Fmoc chemistry SPPS for the total chemical synthesis of the key depsipeptide intermediate for the efficient total chemical synthesis of insulin. The approaches used were: stepwise Fmoc chemistry SPPS; the ‘hybrid method’, in which maximally-protected peptide segments made by Fmoc chemistry SPPS are condensed in solution; and, native chemical ligation using peptide-thioester segments generated by Fmoc chemistry SPPS. A key building block in all three approaches was a Glu[Oβ(Thr)] ester-linked dipeptide equipped with a set of orthogonal protecting groups compatible with Fmoc chemistry SPPS. The most effective method for the preparation of the 51 residue ester-linked polypeptide chain of ester insulin was the use of unprotected peptide-thioester segments, prepared from peptide-hydrazides synthesized by Fmoc chemistry SPPS, and condensed by native chemical ligation. High resolution X-ray crystallography confirmed the disulfide pairings and three-dimensional structure of synthetic insulin lispro prepared from ester insulin lispro by this route. Further optimization of these pilot studies should yield an effective total chemical synthesis of insulin lispro (Humalog) based on peptide synthesis by Fmoc chemistry SPPS. PMID:27905149

  16. ASM Based Synthesis of Handwritten Arabic Text Pages.

    PubMed

    Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif; Ghoneim, Ahmed

    2015-01-01

    Document analysis tasks, such as text recognition, word spotting, or segmentation, are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods that have individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents and detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever insufficient natural ground-truthed data is available.

  17. Simultaneous two-view epipolar geometry estimation and motion segmentation by 4D tensor voting.

    PubMed

    Tong, Wai-Shun; Tang, Chi-Keung; Medioni, Gérard

    2004-09-01

    We address the problem of simultaneous two-view epipolar geometry estimation and motion segmentation from nonstatic scenes. Given a set of noisy image pairs containing matches of n objects, we propose an unconventional, efficient, and robust method, 4D tensor voting, for estimating the unknown n epipolar geometries and segmenting the static and motion matching pairs into n independent motions. By considering the 4D isotropic and orthogonal joint image space, only two tensor voting passes are needed, and a very high noise-to-signal ratio (up to five) can be tolerated. Epipolar geometries corresponding to multiple, rigid motions are extracted in succession. Only two uncalibrated frames are needed, and no simplifying assumption (such as an affine camera model or a homographic model between images) other than the pin-hole camera model is made. Our novel approach consists of propagating a local geometric smoothness constraint in the 4D joint image space, followed by global consistency enforcement for extracting the fundamental matrices corresponding to independent motions. We have performed extensive experiments to compare our method with some representative algorithms and show that better performance on nonstatic scenes is achieved. Results on challenging data sets are presented.

  18. Cognitive load during route selection increases reliance on spatial heuristics.

    PubMed

    Brunyé, Tad T; Martis, Shaina B; Taylor, Holly A

    2018-05-01

    Planning routes from maps involves perceiving the symbolic environment, identifying alternate routes, and applying explicit strategies and implicit heuristics to select an option. Two implicit heuristics have received considerable attention: the southern route preference and the initial segment strategy. This study tested a prediction from decision-making theory that increasing cognitive load during route planning will increase reliance on these heuristics. In two experiments, participants planned routes while under conditions of minimal (0-back) or high (2-back) working memory load. In Experiment 1, we examined how memory load impacts the southern route heuristic. In Experiment 2, we examined how memory load impacts the initial segment heuristic. Results replicated earlier findings demonstrating a southern route preference (Experiment 1) and an initial segment strategy (Experiment 2) and further demonstrated that evidence for heuristic reliance is more likely under conditions of concurrent working memory load. Furthermore, the extent to which participants maintained efficient route selection latencies in the 2-back condition predicted the magnitude of this effect. Together, the results demonstrate that working memory load increases the application of heuristics during spatial decision making, particularly when participants attempt to maintain quick decisions while managing concurrent task demands.

  19. Latency of TCP applications over the ATM-WAN using the GFR service category

    NASA Astrophysics Data System (ADS)

    Chen, Kuo-Hsien; Siliquini, John F.; Budrikis, Zigmantas

    1998-10-01

    The GFR service category has been proposed for data services in ATM networks. Since users are ultimately interested in data services that provide high efficiency and low latency, it is important to study the latency performance of data traffic under the GFR service category in an ATM network. Today much of the data traffic utilizes the TCP/IP protocol suite, and in this paper we study through simulation the latency of TCP applications running over a wide-area ATM network utilizing the GFR service category, using a realistic TCP traffic model. From this study, we find that during congestion periods the reserved bandwidth in GFR can improve the latency performance of TCP applications. However, due to TCP 'Slow Start' data segment generation dynamics, we show that a large proportion of TCP segments are discarded under network congestion even when the reserved bandwidth is equal to the average generated rate of user data. Therefore, a user experiences worse-than-expected latency performance when the network is congested. In this study we also examine the effects of segment size on the latency performance of TCP applications using the GFR service category.
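    The 'Slow Start' dynamics blamed above for segment discards are easy to reproduce: the congestion window doubles every round-trip time, so the instantaneous sending rate quickly overshoots any reservation sized to the *average* rate. A minimal sketch counting segments per RTT (no loss or recovery model; the initial window and ssthresh are assumed values):

```python
def slow_start_rtts(total_segments, init_cwnd=1, ssthresh=64):
    """Simulate slow start's exponential burst growth.

    Returns (number of RTTs to send all segments, list of burst sizes),
    where each burst is the number of segments emitted in one RTT.
    """
    cwnd, sent, rtts, bursts = init_cwnd, 0, 0, []
    while sent < total_segments:
        burst = min(cwnd, total_segments - sent)
        bursts.append(burst)
        sent += burst
        rtts += 1
        cwnd = min(cwnd * 2, ssthresh)  # doubling capped at ssthresh
    return rtts, bursts
```

    Even a short transfer ends with a burst several times the per-RTT average, which is why a reservation equal to the mean rate still sees discards during congestion.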

  20. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines

    PubMed Central

    Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming when many image processing parameters are involved. An expert can tune parameters sequentially to get the desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels are present in an image or when images vary in their characteristics due to different acquisition conditions; parameters then need to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is evaluated on a benchmark data set that contains challenging image distortions of increasing severity. This enables us to compare different standard image segmentation algorithms in feedback vs. feedforward implementations by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates the robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts. PMID:27764213
