NASA Technical Reports Server (NTRS)
Trenchard, M. H. (Principal Investigator)
1980-01-01
Procedures and techniques for providing analyses of meteorological conditions at segments during the growing season were developed for the U.S./Canada Wheat and Barley Exploratory Experiment. The main product and analysis tool is the segment-level climagraph, which depicts meteorological variables over time for the current year compared with climatological normals. The variable values for the segment are estimates derived through objective analysis of values obtained at first-order stations in the region. The procedures and products documented represent a baseline for future Foreign Commodity Production Forecasting experiments.
Engineering flight and guest pilot evaluation report, phase 2 [DC-8 aircraft]
NASA Technical Reports Server (NTRS)
Morrison, J. A.; Anderson, E. B.; Brown, G. W.; Schwind, G. K.
1974-01-01
Prior to the flight evaluation, the two-segment profile capabilities of the DC-8-61 were evaluated and flight procedures were developed in a flight simulator at the UA Flight Training Center in Denver, Colorado. The flight evaluation reported was conducted to determine the validity of the simulation results, further develop the procedures and use of the area navigation system in the terminal area, certify the system for line operation, and obtain evaluations of the system and procedures by a number of pilots from the industry. The full area navigation capabilities of the special equipment installed were developed to provide terminal area guidance for two-segment approaches. The objectives of this evaluation were: (1) perform an engineering flight evaluation sufficient to certify the two-segment system for the six-month in-service evaluation; (2) evaluate the suitability of a modified RNAV system for flying two-segment approaches; and (3) provide evaluation of the two-segment approach by management and line pilots.
Howell, Peter; Sackin, Stevie; Glenn, Kazan
2007-01-01
This program of work is intended to develop automatic recognition procedures to locate and assess stuttered dysfluencies. This article and the following one together develop and test recognizers for repetitions and prolongations. The automatic recognizers classify the speech in two stages: in the first, the speech is segmented, and in the second the segments are categorized. The units that are segmented are words. Here, assessments by human judges on the speech of 12 children who stutter are described using a corresponding procedure. The accuracy of word boundary placement across judges, categorization of the words as fluent, repetition, or prolongation, and duration of the different fluency categories are reported. These measures allow reliable instances of repetitions and prolongations to be selected for training and assessing the recognizers in the subsequent paper. PMID:9328878
Operational flight evaluation of the two-segment approach for use in airline service
NASA Technical Reports Server (NTRS)
Schwind, G. K.; Morrison, J. A.; Nylen, W. E.; Anderson, E. B.
1975-01-01
United Airlines has developed and evaluated a two-segment noise abatement approach procedure for use on Boeing 727 aircraft in air carrier service. In a flight simulator, the two-segment approach was studied in detail and a profile and procedures were developed. Equipment adaptable to contemporary avionics and navigation systems was designed and manufactured by Collins Radio Company and was installed and evaluated in B-727-200 aircraft. The equipment, profile, and procedures were evaluated out of revenue service by pilots representing government agencies, airlines, airframe manufacturers, and professional pilot associations. A system was then placed into scheduled airline service for six months during which 555 two-segment approaches were flown at three airports by 55 airline pilots. The system was determined to be safe, easy to fly, and compatible with the airline operational environment.
Site conditions related to erosion on logging roads
R. M. Rice; J. D. McCashion
1985-01-01
Synopsis - Data collected from 299 road segments in northwestern California were used to develop and test a procedure for estimating and managing road-related erosion. Site conditions and the design of each segment were described by 30 variables. Equations developed using 149 of the road segments were tested on the other 150. The best multiple regression equation...
NASA Technical Reports Server (NTRS)
Nylen, W. E.
1974-01-01
Guest pilot evaluation results of an approach profile modification for reducing ground-level noise under the approach paths to jet aircraft runways are reported. The evaluation results were used to develop a two-segment landing approach procedure and the equipment necessary to obtain pilot, airline, and FAA acceptance of the two-segment approach as a routine way of operating aircraft during approach and landing. Data are given on pilot workload and acceptance of the procedure.
Loarie, Thomas M; Applegate, David; Kuenne, Christopher B; Choi, Lawrence J; Horowitz, Diane P
2003-01-01
Market segmentation analysis identifies discrete segments of the population whose beliefs are consistent with exhibited behaviors such as purchase choice. This study applies market segmentation analysis to low myopes (-1 to -3 D with less than 1 D cylinder) in their consideration and choice of a refractive surgery procedure to discover opportunities within the market. A quantitative survey based on focus group research was sent to a demographically balanced sample of myopes using contact lenses and/or glasses. A variable reduction process followed by a clustering analysis was used to discover discrete belief-based segments. The resulting segments were validated both analytically and through in-market testing. Discontented individuals who wear contact lenses are the primary target for vision correction surgery. However, 81% of the target group is apprehensive about laser in situ keratomileusis (LASIK). They are nervous about the procedure and strongly desire reversibility and exchangeability. There exists a large untapped opportunity for vision correction surgery within the low myope population. Market segmentation analysis helped determine how to best meet this opportunity through repositioning existing procedures or developing new vision correction technology, and could also be applied to identify opportunities in other vision correction populations.
Flight evaluation of two-segment approaches using area navigation guidance equipment
NASA Technical Reports Server (NTRS)
Schwind, G. K.; Morrison, J. A.; Nylen, W. E.; Anderson, E. B.
1976-01-01
A two-segment noise abatement approach procedure for use on DC-8-61 aircraft in air carrier service was developed and evaluated. The approach profile and procedures were developed in a flight simulator. Full guidance is provided throughout the approach by a Collins Radio Company three-dimensional area navigation (RNAV) system which was modified to provide the two-segment approach capabilities. Modifications to the basic RNAV software included safety protection logic considered necessary for an operationally acceptable two-segment system. With an aircraft out of revenue service, the system was refined and extensively flight tested, and the profile and procedures were evaluated by representatives of the airlines, airframe manufacturers, the Air Line Pilots Association, and the Federal Aviation Administration. The system was determined to be safe and operationally acceptable. It was then placed into scheduled airline service for an evaluation during which 180 approaches were flown by 48 airline pilots. The approach was determined to be compatible with the airline operational environment, although operation of the RNAV system in the existing terminal area air traffic control environment was difficult.
Adaptive segmentation of nuclei in H&E stained tendon microscopy
NASA Astrophysics Data System (ADS)
Chuang, Bo-I.; Wu, Po-Ting; Hsu, Jian-Han; Jou, I.-Ming; Su, Fong-Chin; Sun, Yung-Nien
2015-12-01
Tendinopathy has become a common clinical issue in recent years. In most cases, such as trigger finger or tennis elbow, the pathological changes can be observed in H&E stained tendon microscopy. However, qualitative analysis is subjective, and the results therefore depend heavily on the observer. We developed an automatic segmentation procedure which segments and counts the nuclei in H&E stained tendon microscopy quickly and precisely. The procedure first determines the complexity of the image and then segments the nuclei. For complex images, the proposed method adopts sampling-based thresholding to segment the nuclei, while for simple images, Laplacian-based thresholding is employed to re-segment the nuclei more accurately. In the experiments, the proposed method is compared with results outlined by experts. The nucleus count produced by the proposed method is close to the experts' counts, and its processing time is much shorter than the experts'.
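The two-branch control flow described above (a complexity test followed by either sampling-based or Laplacian-based thresholding) can be illustrated with a minimal Python sketch. Otsu thresholding and a Laplacian filter stand in for the paper's specific thresholding schemes, and the complexity criterion and all parameter values are assumptions, not the authors' settings.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.measure import label

def segment_nuclei(gray, complexity_cutoff=0.05):
    """Two-branch nuclei segmentation sketch.

    The image is first scored for "complexity" (here, simply the fraction
    of strong-gradient pixels -- a stand-in for the paper's criterion).
    Complex images get a global Otsu threshold; simple images are
    re-segmented after Laplacian sharpening.
    """
    gray = gray.astype(float)
    gx, gy = ndi.sobel(gray, axis=0), ndi.sobel(gray, axis=1)
    grad = np.hypot(gx, gy)
    complexity = np.mean(grad > 0.25 * grad.max())

    if complexity > complexity_cutoff:            # "complex" image branch
        mask = gray < threshold_otsu(gray)        # nuclei assumed darker
    else:                                         # "simple" image branch
        sharpened = gray - ndi.laplace(gray)
        mask = sharpened < threshold_otsu(sharpened)

    labels = label(mask)
    return labels, labels.max()                   # label image, nucleus count
```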
Duncan, James R; Kline, Benjamin; Glaiberman, Craig B
2007-04-01
To create and test methods of extracting efficiency data from recordings of simulated renal stent procedures. Task analysis was performed and used to design a standardized testing protocol. Five experienced angiographers then performed 16 renal stent simulations using the Simbionix AngioMentor angiographic simulator. Audio and video recordings of these simulations were captured from multiple vantage points. The recordings were synchronized and compiled. A series of efficiency metrics (procedure time, contrast volume, and tool use) were then extracted from the recordings. The intraobserver and interobserver variability of these individual metrics was also assessed. The metrics were converted to costs and aggregated to determine the fixed and variable costs of a procedure segment or the entire procedure. Task analysis and pilot testing led to a standardized testing protocol suitable for performance assessment. Task analysis also identified seven checkpoints that divided the renal stent simulations into six segments. Efficiency metrics for these different segments were extracted from the recordings and showed excellent intra- and interobserver correlations. Analysis of the individual and aggregated efficiency metrics demonstrated large differences between segments as well as between different angiographers. These differences persisted when efficiency was expressed as either total or variable costs. Task analysis facilitated both protocol development and data analysis. Efficiency metrics were readily extracted from recordings of simulated procedures. Aggregating the metrics and dividing the procedure into segments revealed potential insights that could be easily overlooked because the simulator currently does not attempt to aggregate the metrics and only provides data derived from the entire procedure. The data indicate that analysis of simulated angiographic procedures will be a powerful method of assessing performance in interventional radiology.
Modification to area navigation equipment for instrument two-segment approaches
NASA Technical Reports Server (NTRS)
1975-01-01
A two-segment aircraft landing approach concept utilizing an area navigation (RNAV) system to execute the two-segment approach and eliminate the requirement for co-located distance measuring equipment (DME) was investigated. This concept permits non-precision approaches to be made, down to appropriate minima, to runways not equipped with ILS systems. A hardware and software retrofit kit for the concept was designed, built, and tested on a DC-8-61 aircraft for flight evaluation. A two-segment approach profile and piloting procedure for that aircraft were also developed to provide an adequate safety margin under adverse weather, in the presence of system failures, and with the occurrence of an abused approach. The two-segment approach procedure and equipment were demonstrated to line pilots under conditions representative of those encountered in air carrier service.
NASA Technical Reports Server (NTRS)
Colwell, R. N. (Principal Investigator); Hay, C. M.; Thomas, R. W.; Benson, A. S.
1977-01-01
Progress in the evaluation of the static stratification procedure and the development of alternative photointerpretive techniques to the present LACIE procedure for the identification of training fields is reported. Statistically significant signature controlling variables were defined for use in refining the stratification procedure. A subset of the 1973-74 Kansas LACIE segments for wheat was analyzed.
Three-dimensional illumination procedure for photodynamic therapy of dermatology
NASA Astrophysics Data System (ADS)
Hu, Xiao-ming; Zhang, Feng-juan; Dong, Fei; Zhou, Ya
2014-09-01
Light dosimetry is an important parameter that affects the efficacy of photodynamic therapy (PDT). However, the irregular morphologies of lesions complicate lesion segmentation and light irradiance adjustment. Therefore, this study developed an illumination demo system comprising a camera, a digital projector, and a computing unit to solve these problems. A three-dimensional model of a lesion was reconstructed using the developed system. Hierarchical segmentation was achieved with the superpixel algorithm. The expected light dosimetry on the targeted lesion was achieved with the proposed illumination procedure. Accurate control and optimization of light delivery can improve the efficacy of PDT.
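The hierarchical superpixel step mentioned above can be sketched with an off-the-shelf SLIC implementation; the parameter values and the dose-scaling rule shown here are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

# Toy lesion photograph (H x W x 3); in practice this is the camera frame
# registered to the projector geometry.
image = np.random.rand(240, 320, 3)

superpixels = slic(image, n_segments=400, compactness=10, start_label=1)

# Per-superpixel mean intensity could then drive the projected light dose,
# e.g. more absorbing (darker) regions receive a longer exposure.
for region in regionprops(superpixels, intensity_image=image.mean(axis=2)):
    dose_scale = 1.0 / max(region.mean_intensity, 1e-3)
```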
Iris recognition: on the segmentation of degraded images acquired in the visible wavelength.
Proença, Hugo
2010-08-01
Iris recognition imaging constraints are receiving increasing attention. There are several proposals to develop systems that operate in the visible wavelength and in less constrained environments. These imaging conditions introduce noisy artifacts into the acquired data that lead to severely degraded images, making iris segmentation a major issue. Having observed that existing iris segmentation methods tend to fail in these challenging conditions, we present a segmentation method that can handle degraded images acquired in less constrained conditions. We offer the following contributions: 1) to consider the sclera the most easily distinguishable part of the eye in degraded images, 2) to propose a new type of feature that measures the proportion of sclera in each direction and is fundamental in segmenting the iris, and 3) to run the entire procedure in deterministic linear time with respect to the size of the image, making the procedure suitable for real-time applications.
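The directional sclera-proportion feature can be pictured with a short sketch that assumes a binary sclera map is already available from an earlier detection stage; the four-direction formulation below is an illustrative reading of the abstract, not the author's exact definition.

```python
import numpy as np

def sclera_proportion_features(sclera_mask, row, col):
    """Proportion of sclera pixels seen in four directions from (row, col).

    `sclera_mask` is a binary map (1 = sclera) assumed to come from an
    earlier sclera-detection stage; the four ratios form a directional
    feature vector for the pixel being classified.
    """
    up    = sclera_mask[:row, col]
    down  = sclera_mask[row + 1:, col]
    left  = sclera_mask[row, :col]
    right = sclera_mask[row, col + 1:]
    return np.array([seg.mean() if seg.size else 0.0
                     for seg in (up, down, left, right)])

# Example: feature vector for a pixel near the image centre
mask = np.zeros((200, 300), dtype=np.uint8)
mask[80:120, 40:100] = 1          # toy sclera region left of the iris
print(sclera_proportion_features(mask, 100, 150))
```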
Modeling of market segmentation for new IT product development
NASA Astrophysics Data System (ADS)
Nasiopoulos, Dimitrios K.; Sakas, Damianos P.; Vlachos, D. S.; Mavrogianni, Amanda
2015-02-01
Businesses from all Information Technology sectors use market segmentation[1] in their product development[2] and strategic planning[3]. Many studies have concluded that market segmentation is the norm of modern marketing. With the rapid development of technology, customer needs are becoming increasingly diverse; these needs can no longer be satisfied by a one-size-fits-all mass marketing approach. IT businesses can cope with this diversity by pooling customers[4] with similar requirements, buying behavior, and purchasing strength into segments. Better choices about which segments are the most appropriate to serve can then be made, thus making the best use of finite resources. Despite the attention that segmentation attracts and the resources that are invested in it, growing evidence suggests that businesses have problems operationalizing segmentation[5]. These problems take various forms. It is often assumed that the segmentation process necessarily results in homogeneous groups of customers for whom appropriate marketing programs and procedures can be developed; when this assumption does not hold, the segmentation process that a company follows can fail. This raises concerns about what causes segmentation failure and how it might be overcome. To prevent such failure, we created a dynamic simulation model of market segmentation[6] based on the basic factors leading to segmentation.
Gazder, Azdiar A; Al-Harbi, Fayez; Spanke, Hendrik Th; Mitchell, David R G; Pereloma, Elena V
2014-12-01
Using a combination of electron back-scattering diffraction and energy dispersive X-ray spectroscopy data, a segmentation procedure was developed to comprehensively distinguish austenite, martensite, polygonal ferrite, ferrite in granular bainite and bainitic ferrite laths in a thermo-mechanically processed low-Si, high-Al transformation-induced plasticity steel. The efficacy of the ferrite morphologies segmentation procedure was verified by transmission electron microscopy. The variation in carbon content between the ferrite in granular bainite and bainitic ferrite laths was explained on the basis of carbon partitioning during their growth. Copyright © 2014 Elsevier B.V. All rights reserved.
Automated detection of videotaped neonatal seizures of epileptic origin.
Karayiannis, Nicolaos B; Xiong, Yaohua; Tao, Guozhi; Frost, James D; Wise, Merrill S; Hrachovy, Richard A; Mizrahi, Eli M
2006-06-01
This study aimed at the development of a seizure-detection system by training neural networks with quantitative motion information extracted from short video segments of neonatal seizures of the myoclonic and focal clonic types and random infant movements. The motion of the infants' body parts was quantified by temporal motion-strength signals extracted from video segments by motion-segmentation methods based on optical flow computation. The area of each frame occupied by the infants' moving body parts was segmented by clustering the motion parameters obtained by fitting an affine model to the pixel velocities. The motion of the infants' body parts also was quantified by temporal motion-trajectory signals extracted from video recordings by robust motion trackers based on block-motion models. These motion trackers were developed to adjust autonomously to illumination and contrast changes that may occur during the video-frame sequence. Video segments were represented by quantitative features obtained by analyzing motion-strength and motion-trajectory signals in both the time and frequency domains. Seizure recognition was performed by conventional feed-forward neural networks, quantum neural networks, and cosine radial basis function neural networks, which were trained to detect neonatal seizures of the myoclonic and focal clonic types and to distinguish them from random infant movements. The computational tools and procedures developed for automated seizure detection were evaluated on a set of 240 video segments of 54 patients exhibiting myoclonic seizures (80 segments), focal clonic seizures (80 segments), and random infant movements (80 segments). Regardless of the decision scheme used for interpreting the responses of the trained neural networks, all the neural network models exhibited sensitivity and specificity>90%. For one of the decision schemes proposed for interpreting the responses of the trained neural networks, the majority of the trained neural-network models exhibited sensitivity>90% and specificity>95%. In particular, cosine radial basis function neural networks achieved the performance targets of this phase of the project (i.e., sensitivity>95% and specificity>95%). The best among the motion segmentation and tracking methods developed in this study produced quantitative features that constitute a reliable basis for detecting neonatal seizures. The performance targets of this phase of the project were achieved by combining the quantitative features obtained by analyzing motion-strength signals with those produced by analyzing motion-trajectory signals. The computational procedures and tools developed in this study to perform off-line analysis of short video segments will be used in the next phase of this project, which involves the integration of these procedures and tools into a system that can process and analyze long video recordings of infants monitored for seizures in real time.
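A minimal sketch of a motion-strength signal of the kind described above, using dense optical flow from OpenCV as a stand-in for the paper's motion-segmentation pipeline (the affine-model clustering and body-part tracking are omitted); the function name and parameter values are illustrative.

```python
import cv2
import numpy as np

def motion_strength_signal(video_path):
    """Temporal motion-strength signal from a video segment.

    Per-frame mean optical-flow magnitude serves as a simple stand-in for
    the motion-strength signals described in the abstract.
    """
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)
        signal.append(magnitude.mean())
        prev_gray = gray
    cap.release()
    # Time- and frequency-domain features of this signal would then feed
    # the neural-network classifiers.
    return np.asarray(signal)
```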
Segmented Domain Decomposition Multigrid For 3-D Turbomachinery Flows
NASA Technical Reports Server (NTRS)
Celestina, M. L.; Adamczyk, J. J.; Rubin, S. G.
2001-01-01
A Segmented Domain Decomposition Multigrid (SDDMG) procedure was developed for three-dimensional viscous flow problems as they apply to turbomachinery flows. The procedure divides the computational domain into a coarse mesh comprised of uniformly spaced cells. To resolve smaller length scales such as the viscous layer near a surface, segments of the coarse mesh are subdivided into a finer mesh. This is repeated until adequate resolution of the smallest relevant length scale is obtained. Multigrid is used to communicate information between the different grid levels. To test the procedure, simulation results will be presented for a compressor and turbine cascade. These simulations are intended to show the ability of the present method to generate grid independent solutions. Comparisons with data will also be presented. These comparisons will further demonstrate the usefulness of the present work for they allow an estimate of the accuracy of the flow modeling equations independent of error attributed to numerical discretization.
Identification of Matra Region and Overlapping Characters for OCR of Printed Bengali Scripts
NASA Astrophysics Data System (ADS)
Goswami, Subhra Sundar
One of the important reasons for a poor recognition rate in an optical character recognition (OCR) system is error in character segmentation. In the case of Bangla scripts, these errors occur for several reasons, including incorrect detection of the matra (headline), over-segmentation, and under-segmentation. We have proposed a robust method for detecting the headline region. The existence of overlapping characters (in under-segmented parts) in scanned printed documents is a major problem in designing an effective character segmentation procedure for OCR systems. In this paper, a predictive algorithm is developed for effectively identifying overlapping characters and then selecting the cut borders for segmentation. Our method can be successfully used to achieve a high recognition rate.
NASA Technical Reports Server (NTRS)
Nylen, W. E.
1974-01-01
Profile modification as a means of reducing ground level noise from jet aircraft in the landing approach is evaluated. A flight simulator was modified to incorporate the cockpit hardware which would be in the prototype airplane installation. The two-segment system operational and aircraft interface logic was accurately emulated in software. Programs were developed to permit data to be recorded in real time on the line printer, a 14-channel oscillograph, and an x-y plotter. The two-segment profile and procedures which were developed are described with emphasis on operational concepts and constraints. The two-segment system operational logic and the flight simulator capabilities are described. The findings influenced the ultimate system design and aircraft interface.
de Siqueira, Alexandre Fioravante; Cabrera, Flávio Camargo; Nakasuga, Wagner Massayuki; Pagamisse, Aylton; Job, Aldo Eloizo
2018-01-01
Image segmentation, the process of separating the elements within a picture, is frequently used for obtaining information from photomicrographs. Segmentation methods should be used with caution, since incorrect results can be misleading when interpreting regions of interest (ROI), decreasing the success rate of subsequent procedures. Multi-Level Starlet Segmentation (MLSS) and Multi-Level Starlet Optimal Segmentation (MLSOS) were developed as alternatives to general segmentation tools. These methods gave rise to Jansen-MIDAS, an open-source software package that scientists can use to obtain several segmentations of their photomicrographs. It is a reliable alternative for processing different types of photomicrographs: previous versions of Jansen-MIDAS were used to segment ROIs in photomicrographs of two different materials with an accuracy superior to 89%. © 2017 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Colwell, R. N. (Principal Investigator)
1977-01-01
The results and progress of work conducted in support of the Large Area Crop Inventory Experiment (LACIE) are documented. Research was conducted for two tasks. These tasks include: (1) evaluation of the UCB static stratification procedure and modification of that procedure if warranted; and (2) the development of alternative photointerpretive techniques to the present LACIE procedure for the identification and selection of training areas for machine-processing of LACIE segments.
Crew procedures and workload of retrofit concepts for microwave landing system
NASA Technical Reports Server (NTRS)
Summers, Leland G.; Jonsson, Jon E.
1989-01-01
Crew procedures and workload for Microwave Landing Systems (MLS) that could be retrofitted into existing transport aircraft were evaluated. Two MLS receiver concepts were developed. One is capable of capturing a runway centerline and the other is capable of capturing a segmented approach path. Crew procedures were identified and crew task analyses were performed using each concept. Crew workload comparisons were made between the MLS concepts and an ILS baseline using a task-timeline workload model. Workload indexes were obtained for each scenario. The results showed that workload was comparable to the ILS baseline for the MLS centerline capture concept, but significantly higher for the segmented path capture concept.
Development of OCR system for portable passport and visa reader
NASA Astrophysics Data System (ADS)
Visilter, Yury V.; Zheltov, Sergey Y.; Lukin, Anton A.
1999-01-01
Modern passport and visa documents include special machine-readable zones that satisfy the ICAO standards, which makes it possible to develop automatic passport and visa readers. However, such OCR systems face some special problems: low resolution of the character images captured by the CCD camera (down to 150 dpi), significant shifts and slopes (up to 10 degrees), rich paper texture under the character symbols, and non-homogeneous illumination. This paper presents the structure and some special aspects of an OCR system for a portable passport and visa reader. In our approach the binarization procedure is performed after the segmentation step and is applied to each character site separately. The character recognition procedure uses the structural information of the machine-readable zone. Special algorithms are developed for machine-readable zone extraction and character segmentation.
NASA Technical Reports Server (NTRS)
Magness, E. R. (Principal Investigator)
1980-01-01
The success of the Transition Year procedure to separate and label barley and the other small grains was assessed. It was decided that developers of the procedure would carry out the exercise in order to prevent compounding procedural problems with implementation problems. The evaluation proceeded by labeling the spring small grains first. The accuracy of this labeling was, on the average, somewhat better than that in the Transition Year operations. Other departures from the original procedure included a regionalization of the labeling process, the use of trend analysis, and the removal of time constraints from the actual processing. Segment selection, ground truth derivation, and data available for each segment in the analysis are discussed. Labeling accuracy is examined for North Dakota, South Dakota, Minnesota, and Montana as well as for the entire four-state area. Errors are characterized.
Radio Frequency Ablation Registration, Segmentation, and Fusion Tool
McCreedy, Evan S.; Cheng, Ruida; Hemler, Paul F.; Viswanathan, Anand; Wood, Bradford J.; McAuliffe, Matthew J.
2008-01-01
The Radio Frequency Ablation Segmentation Tool (RFAST) is a software application developed using NIH's Medical Image Processing Analysis and Visualization (MIPAV) API for the specific purpose of assisting physicians in the planning of radio frequency ablation (RFA) procedures. The RFAST application sequentially leads the physician through the steps necessary to register, fuse, segment, visualize and plan the RFA treatment. Three-dimensional volume visualization of the CT dataset with segmented 3D surface models enables the physician to interactively position the ablation probe to simulate burns and to semi-manually simulate sphere packing in an attempt to optimize probe placement. PMID:16871716
NASA Astrophysics Data System (ADS)
Lisitsa, Y. V.; Yatskou, M. M.; Apanasovich, V. V.; Apanasovich, T. V.
2015-09-01
We have developed an algorithm for segmentation of cancer cell nuclei in three-channel luminescent images of microbiological specimens. The algorithm is based on using a correlation between fluorescence signals in the detection channels for object segmentation, which permits complete automation of the data analysis procedure. We have carried out a comparative analysis of the proposed method and conventional algorithms implemented in the CellProfiler and ImageJ software packages. Our algorithm has an object localization uncertainty which is 2-3 times smaller than for the conventional algorithms, with comparable segmentation accuracy.
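A minimal sketch of the correlation idea: a sliding-window Pearson correlation between two of the detection channels, thresholded to keep nucleus candidates. This is an illustrative reading of the abstract rather than the authors' exact algorithm; the window size and threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_channel_correlation(ch1, ch2, window=9):
    """Pixel-wise Pearson correlation between two fluorescence channels,
    computed over a sliding window. Pixels where the channels are highly
    correlated are kept as nucleus candidates."""
    ch1, ch2 = ch1.astype(float), ch2.astype(float)
    m1, m2 = uniform_filter(ch1, window), uniform_filter(ch2, window)
    cov = uniform_filter(ch1 * ch2, window) - m1 * m2
    v1 = uniform_filter(ch1 ** 2, window) - m1 ** 2
    v2 = uniform_filter(ch2 ** 2, window) - m2 ** 2
    return cov / np.sqrt(np.clip(v1 * v2, 1e-12, None))

# Threshold the correlation map to obtain a candidate nucleus mask:
# mask = local_channel_correlation(channel_a, channel_b) > 0.7
```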
Calibration of 3D ultrasound to an electromagnetic tracking system
NASA Astrophysics Data System (ADS)
Lang, Andrew; Parthasarathy, Vijay; Jain, Ameet
2011-03-01
The use of electromagnetic (EM) tracking is an important guidance tool that can aid procedures requiring accurate localization, such as needle injections or catheter guidance. Using EM tracking, information from different modalities can be easily combined using pre-procedural calibration information. These calibrations are performed individually, per modality, allowing different imaging systems to be mixed and matched according to the procedure at hand. In this work, a framework for the calibration of a 3D transesophageal echocardiography probe to EM tracking is developed. The complete calibration framework includes three required steps: data acquisition, needle segmentation, and calibration. Ultrasound (US) images of an EM-tracked needle must be acquired, with the position of the needle in each volume subsequently extracted by segmentation. The calibration transformation is determined through a registration between the segmented points and the recorded EM needle positions. Additionally, the speed of sound is compensated for, since calibration is performed in water, which has a different speed of sound than is assumed by the US machine. A statistical validation framework has also been developed to provide further information related to the accuracy and consistency of the calibration. Further validation of the calibration showed an accuracy of 1.39 mm.
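The registration between segmented needle points and recorded EM positions is, in the simplest case, a least-squares rigid transform. The sketch below uses the classic SVD (Kabsch/Arun) solution and assumes point correspondences are known; the authors' implementation may differ.

```python
import numpy as np

def rigid_register(us_points, em_points):
    """Least-squares rigid transform (rotation R, translation t) mapping
    needle points segmented in the ultrasound volume onto the EM-tracked
    needle positions. Classic SVD solution with known correspondences."""
    P = np.asarray(us_points, float)   # N x 3, ultrasound frame
    Q = np.asarray(em_points, float)   # N x 3, EM tracker frame
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                 # reflection-safe rotation
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

# A quick accuracy check is the residual RMS error:
# rms = np.sqrt(np.mean(np.sum((em_points - (us_points @ R.T + t))**2, axis=1)))
```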
NASA Technical Reports Server (NTRS)
Whyte, W. A.; Heyward, A. O.; Ponchak, D. S.; Spence, R. L.; Zuzek, J. E.
1988-01-01
The Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) provides a method of generating predetermined arc segments for use in the development of an allotment planning procedure to be carried out at the 1988 World Administrative Radio Conference (WARC) on the Use of the Geostationary Satellite Orbit and the Planning of Space Services Utilizing It. Through careful selection of the predetermined arc (PDA) for each administration, flexibility can be increased in terms of choice of system technical characteristics and specific orbit location while reducing the need for coordination among administrations. The NASARC software determines pairwise compatibility between all possible service areas at discrete arc locations. NASARC then exhaustively enumerates groups of administrations whose satellites can be closely located in orbit, and finds the arc segment over which each such compatible group exists. From the set of all possible compatible groupings, groups and their associated arc segments are selected using a heuristic procedure such that a PDA is identified for each administration. Various aspects of the NASARC concept and how the software accomplishes specific features of allotment planning are discussed.
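The grouping step can be pictured as building sets of administrations that are pairwise compatible over some arc; the greedy sketch below is a simplified illustration of that idea, not the NASARC heuristic itself, and the compatibility matrix is a toy example.

```python
import numpy as np

def greedy_groups(compatible):
    """Greedily build groups of administrations whose satellites can share
    a common arc segment, given a symmetric boolean pairwise-compatibility
    matrix."""
    n = compatible.shape[0]
    unassigned = set(range(n))
    groups = []
    while unassigned:
        seed = min(unassigned)
        group = {seed}
        for j in sorted(unassigned - {seed}):
            if all(compatible[j, k] for k in group):
                group.add(j)
        groups.append(sorted(group))
        unassigned -= group
    return groups

# Example with 4 administrations:
C = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1]], bool)
print(greedy_groups(C))   # -> [[0, 1], [2], [3]] for this matrix
```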
Kaneko, Hidekazu; Tamura, Hiroshi; Tate, Shunta; Kawashima, Takahiro; Suzuki, Shinya S; Fujita, Ichiro
2010-08-01
In order for patients with disabilities to control assistive devices with their own neural activity, multineuronal spike trains must be efficiently decoded because only limited computational resources can be used to generate prosthetic control signals in portable real-time applications. In this study, we compare the abilities of two vectorizing procedures (multineuronal and time-segmental) to extract information from spike trains during the same total neuron-seconds. In the multineuronal vectorizing procedure, we defined a response vector whose components represented the spike counts of one to five neurons. In the time-segmental vectorizing procedure, a response vector consisted of components representing a neuron's spike counts for one to five time-segment(s) of a response period of 1 s. Spike trains were recorded from neurons in the inferior temporal cortex of monkeys presented with visual stimuli. We examined whether the amount of information of the visual stimuli carried by these neurons differed between the two vectorizing procedures. The amount of information calculated with the multineuronal vectorizing procedure, but not the time-segmental vectorizing procedure, significantly increased with the dimensions of the response vector. We conclude that the multineuronal vectorizing procedure is superior to the time-segmental vectorizing procedure in efficiently extracting information from neuronal signals. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
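A small sketch of the two vectorizing procedures, assuming spike times are available per neuron over a 1-s response period; the toy data and function names are illustrative, not part of the study.

```python
import numpy as np

def multineuronal_vector(spike_times, n_neurons, period=1.0):
    """Response vector whose components are the spike counts of the first
    `n_neurons` neurons over the whole response period."""
    return np.array([np.sum((t >= 0) & (t < period))
                     for t in spike_times[:n_neurons]])

def time_segmental_vector(spike_times_one_neuron, n_segments, period=1.0):
    """Response vector whose components are one neuron's spike counts in
    `n_segments` equal subdivisions of the response period."""
    edges = np.linspace(0.0, period, n_segments + 1)
    counts, _ = np.histogram(spike_times_one_neuron, bins=edges)
    return counts

# Toy spike trains (seconds) for three neurons:
trains = [np.array([0.05, 0.2, 0.81]), np.array([0.4]), np.array([0.1, 0.9])]
print(multineuronal_vector(trains, 3))        # -> [3 1 2]
print(time_segmental_vector(trains[0], 5))    # -> [1 1 0 0 1]
```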
Scale-space for empty catheter segmentation in PCI fluoroscopic images.
Bacchuwar, Ketan; Cousty, Jean; Vaillant, Régis; Najman, Laurent
2017-07-01
In this article, we present a method for empty guiding catheter segmentation in fluoroscopic X-ray images. Because the guiding catheter is a commonly visible landmark, its segmentation is an important and difficult building block for Percutaneous Coronary Intervention (PCI) procedure modeling. In a number of clinical situations, the catheter is empty and appears as a low-contrast structure with two parallel and partially disconnected edges. To segment it, we work on the level-set scale-space of the image, the min tree, to extract curve blobs. We then propose a novel structural scale-space, a hierarchy built on these curve blobs. The deep connected component, i.e. the cluster of curve blobs on this hierarchy, that maximizes the likelihood of being an empty catheter is retained as the final segmentation. We evaluate the performance of the algorithm on a database of 1250 fluoroscopic images from 6 patients. As a result, we obtain very good qualitative and quantitative segmentation performance, with mean precision and recall of 80.48% and 63.04%, respectively. We develop a novel structural scale-space to segment a structured object, the empty catheter, in challenging situations where the information content is very sparse in the images. Fully automatic empty catheter segmentation in X-ray fluoroscopic images is an important preliminary step in PCI procedure modeling, as it aids in tagging the arrival and removal location of other interventional tools.
Automatic Nuclei Segmentation in H&E Stained Breast Cancer Histopathology Images
Veta, Mitko; van Diest, Paul J.; Kornegoor, Robert; Huisman, André; Viergever, Max A.; Pluim, Josien P. W.
2013-01-01
The introduction of fast digital slide scanners that provide whole slide images has led to a revival of interest in image analysis applications in pathology. Segmentation of cells and nuclei is an important first step towards automatic analysis of digitized microscopy images. We therefore developed an automated nuclei segmentation method that works with hematoxylin and eosin (H&E) stained breast cancer histopathology images, which represent regions of whole digital slides. The procedure can be divided into four main steps: 1) pre-processing with color unmixing and morphological operators, 2) marker-controlled watershed segmentation at multiple scales and with different markers, 3) post-processing for rejection of false regions and 4) merging of the results from multiple scales. The procedure was developed on a set of 21 breast cancer cases (subset A) and tested on a separate validation set of 18 cases (subset B). The evaluation was done in terms of both detection accuracy (sensitivity and positive predictive value) and segmentation accuracy (Dice coefficient). The mean estimated sensitivity for subset A was 0.875 (±0.092) and for subset B 0.853 (±0.077). The mean estimated positive predictive value was 0.904 (±0.075) and 0.886 (±0.069) for subsets A and B, respectively. For both subsets, the distribution of the Dice coefficients had a high peak around 0.9, with the vast majority of segmentations having values larger than 0.8. PMID:23922958
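Steps 1 and 2 of the procedure (colour unmixing followed by marker-controlled watershed) can be condensed into a single-scale scikit-image sketch; all parameter values are assumptions, and the multi-scale runs and merging of step 4 are omitted.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.color import rgb2hed
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, remove_small_objects, disk
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_nuclei_he(rgb):
    """Condensed nuclei segmentation for an H&E image: colour unmixing,
    morphological clean-up and marker-controlled watershed (single scale)."""
    hematoxylin = rgb2hed(rgb)[..., 0]                 # nuclei-rich channel
    mask = hematoxylin > threshold_otsu(hematoxylin)
    mask = remove_small_objects(binary_opening(mask, disk(2)), 30)

    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=7,
                           labels=ndi.label(mask)[0])
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

    return watershed(-distance, markers, mask=mask)    # labelled nuclei
```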
Automatic nuclei segmentation in H&E stained breast cancer histopathology images.
Veta, Mitko; van Diest, Paul J; Kornegoor, Robert; Huisman, André; Viergever, Max A; Pluim, Josien P W
2013-01-01
The introduction of fast digital slide scanners that provide whole slide images has led to a revival of interest in image analysis applications in pathology. Segmentation of cells and nuclei is an important first step towards automatic analysis of digitized microscopy images. We therefore developed an automated nuclei segmentation method that works with hematoxylin and eosin (H&E) stained breast cancer histopathology images, which represent regions of whole digital slides. The procedure can be divided into four main steps: 1) pre-processing with color unmixing and morphological operators, 2) marker-controlled watershed segmentation at multiple scales and with different markers, 3) post-processing for rejection of false regions and 4) merging of the results from multiple scales. The procedure was developed on a set of 21 breast cancer cases (subset A) and tested on a separate validation set of 18 cases (subset B). The evaluation was done in terms of both detection accuracy (sensitivity and positive predictive value) and segmentation accuracy (Dice coefficient). The mean estimated sensitivity for subset A was 0.875 (±0.092) and for subset B 0.853 (±0.077). The mean estimated positive predictive value was 0.904 (±0.075) and 0.886 (±0.069) for subsets A and B, respectively. For both subsets, the distribution of the Dice coefficients had a high peak around 0.9, with the vast majority of segmentations having values larger than 0.8.
Object Segmentation and Ground Truth in 3D Embryonic Imaging.
Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C
2016-01-01
Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.
Object Segmentation and Ground Truth in 3D Embryonic Imaging
Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C.
2016-01-01
Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets. PMID:27332860
NASA Astrophysics Data System (ADS)
Daryanani, Aditya; Dangi, Shusil; Ben-Zikri, Yehuda Kfir; Linte, Cristian A.
2016-03-01
Magnetic Resonance Imaging (MRI) is a standard-of-care imaging modality for cardiac function assessment and guidance of cardiac interventions thanks to its high image quality and lack of exposure to ionizing radiation. Cardiac health parameters such as left ventricular volume, ejection fraction, myocardial mass, thickness, and strain can be assessed by segmenting the heart from cardiac MRI images. Furthermore, the segmented pre-operative anatomical heart models can be used to precisely identify regions of interest to be treated during minimally invasive therapy. Hence, the use of accurate and computationally efficient segmentation techniques is critical, especially for intra-procedural guidance applications that rely on the peri-operative segmentation of subject-specific datasets without delaying the procedure workflow. Atlas-based segmentation incorporates prior knowledge of the anatomy of interest from expertly annotated image datasets. Typically, the ground truth atlas label is propagated to a test image using a combination of global and local registration. The high computational cost of non-rigid registration motivated us to obtain an initial segmentation using global transformations based on an atlas of the left ventricle from a population of patient MRI images and refine it using a well-developed technique based on graph cuts. Here we quantitatively compare the segmentations obtained from the global and global-plus-local atlases and refined using graph cut-based techniques with the expert segmentations according to several similarity metrics, including the Dice correlation coefficient, Jaccard coefficient, Hausdorff distance, and mean absolute distance error.
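The similarity metrics used for the comparison are standard; a short sketch of how Dice, Jaccard, and Hausdorff values are typically computed for binary masks is given below (the mean absolute distance error is omitted for brevity), assuming hypothetical mask variables `auto` and `expert`.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard (intersection-over-union) coefficient."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground point sets
    of two binary masks (in pixels)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# auto = graph-cut refined segmentation, expert = manual ground truth:
# print(dice(auto, expert), jaccard(auto, expert), hausdorff(auto, expert))
```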
A JOINT FRAMEWORK FOR 4D SEGMENTATION AND ESTIMATION OF SMOOTH TEMPORAL APPEARANCE CHANGES.
Gao, Yang; Prastawa, Marcel; Styner, Martin; Piven, Joseph; Gerig, Guido
2014-04-01
Medical imaging studies increasingly use longitudinal images of individual subjects in order to follow up changes due to development, degeneration, disease progression, or efficacy of therapeutic intervention. Repeated image data of individuals are highly correlated, and the strong causality of information over time has led to the development of procedures for joint segmentation of the series of scans, called 4D segmentation. A main aim was improved consistency of quantitative analysis, most often achieved via patient-specific atlases. Challenging open problems are contrast changes and the occurrence of subclasses within tissue, as observed in multimodal MRI of infant development, neurodegeneration, and disease. This paper proposes a new 4D segmentation framework that enforces continuous dynamic changes of tissue contrast patterns over time as observed in such data. Moreover, our model includes the capability to segment different contrast patterns within a specific tissue class, for example as seen in myelinated and unmyelinated white matter regions in early brain development. Proof of concept is shown with validation on synthetic image data and with 4D segmentation of longitudinal, multimodal pediatric MRI taken at 6, 12 and 24 months of age, but the methodology is generic with respect to different application domains using serial imaging.
Evaluation of a segment-based LANDSAT full-frame approach to crop area estimation
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Hixson, M. M.; Davis, S. M.
1981-01-01
As the registration of LANDSAT full frames enters the realm of current technology, sampling methods that utilize data other than the segment data used for LACIE should be examined, along with the effect of separating the functions of sampling for training and sampling for area estimation. The frame selected for analysis was acquired over north central Iowa on August 9, 1978. A stratification of the full frame was defined. Training data came from segments within the frame. Two classification and estimation procedures were compared: statistics developed on one segment were used to classify that segment, and pooled statistics from the segments were used to classify a systematic sample of pixels. Comparisons to USDA/ESCS estimates illustrate that the full-frame sampling approach can provide accurate and precise area estimates.
Results of Large Area Crop Inventory Experiment (LACIE) drought analysis (South Dakota drought 1976)
NASA Technical Reports Server (NTRS)
Thompson, D. R.
1976-01-01
LACIE, using techniques developed from the southern Great Plains drought analysis, indicated the potential for drought damage in South Dakota. This potential was monitored, and as it became apparent that a drought was developing, LACIE implemented some of the procedures used in the southern Great Plains drought analysis. The technical approach used in South Dakota involved the normal use of LACIE sample segments (5 x 6 nm) every 18 days. Full-frame color transparencies (100 x 100 nm) were used at 9-day intervals to identify the drought area and to track it over time. The green index number (GIN), developed using the Kauth transformation, was computed for all South Dakota segments and selected North Dakota segments. A scheme for classifying segments as drought affected or not affected was devised and tested on all available 1976 South Dakota data. Yield model simulations were run for all CRDs (Crop Reporting Districts) in South Dakota.
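A sketch of a segment-level greenness computation in the spirit of the GIN: a linear (tasseled-cap style) combination of the four MSS bands averaged over the segment. The coefficients shown are placeholders and the segment-mean definition is an assumption; the published Kauth-Thomas values and the actual LACIE GIN definition should be substituted before use.

```python
import numpy as np

# Placeholder greenness coefficients for the four Landsat MSS bands --
# substitute the published Kauth-Thomas values before use.
GREENNESS_COEFFS = np.array([-0.29, -0.56, 0.60, 0.49])   # assumption

def green_index_number(mss_bands):
    """Mean greenness over a LACIE segment.

    `mss_bands` has shape (4, H, W), holding the MSS band values for one
    5 x 6 nm segment; the GIN is taken here as the segment-mean greenness,
    an illustrative simplification of the LACIE metric."""
    greenness = np.tensordot(GREENNESS_COEFFS, mss_bands, axes=1)
    return float(greenness.mean())
```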
A semi-automated image analysis procedure for in situ plankton imaging systems.
Bi, Hongsheng; Guo, Zhenhua; Benfield, Mark C; Fan, Chunlei; Ford, Michael; Shahrestani, Suzan; Sieracki, Jeffery M
2015-01-01
Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle for their employment, and existing approaches are designed either for images acquired under laboratory controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. When compared to images under laboratory controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large amount of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike the existing approaches for images acquired from laboratory, controlled conditions or clear waters, the target objects are often the majority class, and the classification can be treated as a multi-class classification problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects (< 5%), and remove the non-target objects (> 95%). First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was passed to a group-specific classifier to remove most non-target objects. After the object was classified, an expert or non-expert then manually removed the non-target objects that could not be removed by the procedure. The procedure was tested on 89,419 images collected in Chesapeake Bay, and results were consistent with visual counts with >80% accuracy for all three groups.
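A compressed sketch of the segmentation and classification stages described above (MSER proposals for large gelatinous targets, adaptive thresholding for small organisms, HOG descriptors feeding a two-level SVM hierarchy); all parameter values are illustrative and the classifiers are untrained placeholders, not the study's configuration.

```python
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

# Stage 1a: MSER proposals for large gelatinous zooplankton.
mser = cv2.MSER_create()
mser.setMinArea(500)
mser.setMaxArea(50000)

def segment_candidates(gray):
    """Return large-object patches (MSER) and a small-organism mask."""
    regions, boxes = mser.detectRegions(gray)
    large = [gray[y:y + h, x:x + w] for (x, y, w, h) in boxes]
    # Stage 1b: adaptive threshold for small organisms such as copepods.
    small_mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                       cv2.THRESH_BINARY_INV, 51, 5)
    return large, small_mask

# Stage 2: HOG descriptors feeding a two-level SVM hierarchy.
def hog_features(patches, size=(64, 64)):
    return np.array([hog(cv2.resize(p, size)) for p in patches])

group_clf  = SVC(kernel="rbf")   # arrow-like / copepod-like / gelatinous
target_clf = SVC(kernel="rbf")   # per-group target vs. non-target
# group_clf.fit(hog_features(train_patches), group_labels), etc.
```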
A Semi-Automated Image Analysis Procedure for In Situ Plankton Imaging Systems
Bi, Hongsheng; Guo, Zhenhua; Benfield, Mark C.; Fan, Chunlei; Ford, Michael; Shahrestani, Suzan; Sieracki, Jeffery M.
2015-01-01
Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle for their employment, and existing approaches are designed either for images acquired under laboratory controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. When compared to images under laboratory controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large amount of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike the existing approaches for images acquired from laboratory, controlled conditions or clear waters, the target objects are often the majority class, and the classification can be treated as a multi-class classification problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects (< 5%), and remove the non-target objects (> 95%). First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was passed to a group-specific classifier to remove most non-target objects. After the object was classified, an expert or non-expert then manually removed the non-target objects that could not be removed by the procedure. The procedure was tested on 89,419 images collected in Chesapeake Bay, and results were consistent with visual counts with >80% accuracy for all three groups. PMID:26010260
A web-based procedure for liver segmentation in CT images
NASA Astrophysics Data System (ADS)
Yuan, Rong; Luo, Ming; Wang, Luyao; Xie, Qingguo
2015-03-01
Liver segmentation in CT images has been acknowledged as a basic and indispensable part of computer-aided liver surgery systems for operation design and risk evaluation. In this paper, we introduce and implement a web-based procedure for liver segmentation to help radiologists and surgeons obtain an accurate result efficiently and conveniently. Several clinical datasets are used to evaluate the accessibility and the accuracy. The procedure appears to be a promising approach for extracting liver volumes of various shapes. Moreover, users can access the segmentation wherever the Internet is available, without any specific machine.
Integrating shape into an interactive segmentation framework
NASA Astrophysics Data System (ADS)
Kamalakannan, S.; Bryant, B.; Sari-Sarraf, H.; Long, R.; Antani, S.; Thoma, G.
2013-02-01
This paper presents a novel interactive annotation toolbox which extends a well-known user-steered segmentation framework, namely Intelligent Scissors (IS). IS, posed as a shortest path problem, is essentially driven by lower-level, image-based features. All the higher-level knowledge about the problem domain is obtained from the user through mouse clicks. The proposed work integrates one higher-level feature, namely shape up to a rigid transform, into the IS framework, thus reducing the burden on the user and the subjectivity involved in the annotation procedure, especially during instances of occlusions, broken edges, noise, and spurious boundaries. The above-mentioned scenarios are commonplace in medical image annotation applications and, hence, such a tool will be of immense help to the medical community. As a first step, an offline training procedure is performed in which a mean shape and the corresponding shape variance are computed by registering training shapes up to a rigid transform in a level-set framework. The user starts the interactive segmentation procedure by providing a training segment, which is a part of the target boundary. A partial shape matching scheme based on a scale-invariant curvature signature is employed in order to extract shape correspondences and subsequently predict the shape of the unsegmented target boundary. A 'zone of confidence' is generated for the predicted boundary to accommodate shape variations. The method is evaluated on the segmentation of digital chest x-ray images for lung annotation, which is a crucial step in developing algorithms for tuberculosis screening.
Surgical motion characterization in simulated needle insertion procedures
NASA Astrophysics Data System (ADS)
Holden, Matthew S.; Ungi, Tamas; Sargent, Derek; McGraw, Robert C.; Fichtinger, Gabor
2012-02-01
PURPOSE: Evaluation of surgical performance in image-guided needle insertions is of emerging interest, to both promote patient safety and improve the efficiency and effectiveness of training. The purpose of this study was to determine if a Markov model-based algorithm can more accurately segment a needle-based surgical procedure into its five constituent tasks than a simple threshold-based algorithm. METHODS: Simulated needle trajectories were generated with known ground truth segmentation by a synthetic procedural data generator, with random noise added to each degree of freedom of motion. The respective learning algorithms were trained, and then tested on different procedures to determine task segmentation accuracy. In the threshold-based algorithm, a change in tasks was detected when the needle crossed a position/velocity threshold. In the Markov model-based algorithm, task segmentation was performed by identifying the sequence of Markov models most likely to have produced the series of observations. RESULTS: For amplitudes of translational noise greater than 0.01mm, the Markov model-based algorithm was significantly more accurate in task segmentation than the threshold-based algorithm (82.3% vs. 49.9%, p<0.001 for amplitude 10.0mm). For amplitudes less than 0.01mm, the two algorithms produced insignificantly different results. CONCLUSION: Task segmentation of simulated needle insertion procedures was improved by using a Markov model-based algorithm as opposed to a threshold-based algorithm for procedures involving translational noise.
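A toy contrast between the two segmentation strategies compared above: a fixed-threshold rule versus Viterbi decoding of a discrete hidden Markov model over quantized needle motion. The discrete-HMM formulation and all variable names are assumptions; the cited work trains a Markov model per task on continuous motion data.

```python
import numpy as np

def viterbi(obs, log_A, log_B, log_pi):
    """Most likely task sequence for a discretized needle-motion recording.

    obs    : observation symbol index per time step
    log_A  : (n_tasks, n_tasks) log transition matrix between tasks
    log_B  : (n_tasks, n_symbols) log emission probabilities
    log_pi : (n_tasks,) log initial task probabilities
    """
    n_tasks, T = log_A.shape[0], len(obs)
    delta = np.empty((T, n_tasks))
    psi = np.zeros((T, n_tasks), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A      # [from, to]
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path                                     # task label per time step

def threshold_segmentation(depth, boundary=0.0):
    """Baseline: a task change is declared whenever the needle crosses a
    fixed insertion-depth threshold."""
    return (np.asarray(depth) > boundary).astype(int)
```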
Automated segmentation and tracking for large-scale analysis of focal adhesion dynamics.
Würflinger, T; Gamper, I; Aach, T; Sechi, A S
2011-01-01
Cell adhesion, a process mediated by the formation of discrete structures known as focal adhesions (FAs), is pivotal to many biological events including cell motility. Much is known about the molecular composition of FAs, although our knowledge of the spatio-temporal recruitment and the relative occupancy of the individual components present in the FAs is still incomplete. To fill this gap, an essential prerequisite is a highly reliable procedure for the recognition, segmentation and tracking of FAs. Although manual segmentation and tracking may provide some advantages when done by an expert, its performance is usually hampered by subjective judgement and the long time required in analysing large data sets. Here, we developed a model-based segmentation and tracking algorithm that overcomes these problems. In addition, we developed a dedicated computational approach to correct segmentation errors that may arise from the analysis of poorly defined FAs. Thus, by achieving accurate and consistent FA segmentation and tracking, our work establishes the basis for a comprehensive analysis of FA dynamics under various experimental regimes and the future development of mathematical models that simulate FA behaviour. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
Boccardi, Marina; Bocchetta, Martina; Apostolova, Liana G.; Barnes, Josephine; Bartzokis, George; Corbetta, Gabriele; DeCarli, Charles; deToledo-Morrell, Leyla; Firbank, Michael; Ganzola, Rossana; Gerritsen, Lotte; Henneman, Wouter; Killiany, Ronald J.; Malykhin, Nikolai; Pasqualetti, Patrizio; Pruessner, Jens C.; Redolfi, Alberto; Robitaille, Nicolas; Soininen, Hilkka; Tolomeo, Daniele; Wang, Lei; Watson, Craig; Wolf, Henrike; Duvernoy, Henri; Duchesne, Simon; Jack, Clifford R.; Frisoni, Giovanni B.
2015-01-01
Background This study aimed to have international experts converge on a harmonized definition of whole hippocampus boundaries and segmentation procedures, to define standard operating procedures for magnetic resonance (MR)-based manual hippocampal segmentation. Methods The panel received a questionnaire regarding whole hippocampus boundaries and segmentation procedures. Quantitative information was supplied to allow evidence-based answers. A recursive and anonymous Delphi procedure was used to achieve convergence. Significance of agreement among panelists was assessed by exact probability on Fisher’s and binomial tests. Results Agreement was significant on the inclusion of alveus/fimbria (P =.021), whole hippocampal tail (P =.013), medial border of the body according to visible morphology (P =.0006), and on this combined set of features (P =.001). This definition captures 100% of hippocampal tissue, 100% of Alzheimer’s disease-related atrophy, and demonstrated good reliability on preliminary intrarater (0.98) and inter-rater (0.94) estimates. Discussion Consensus was achieved among international experts with respect to hippocampal segmentation using MR resulting in a harmonized segmentation protocol. PMID:25130658
Boccardi, Marina; Bocchetta, Martina; Apostolova, Liana G; Barnes, Josephine; Bartzokis, George; Corbetta, Gabriele; DeCarli, Charles; deToledo-Morrell, Leyla; Firbank, Michael; Ganzola, Rossana; Gerritsen, Lotte; Henneman, Wouter; Killiany, Ronald J; Malykhin, Nikolai; Pasqualetti, Patrizio; Pruessner, Jens C; Redolfi, Alberto; Robitaille, Nicolas; Soininen, Hilkka; Tolomeo, Daniele; Wang, Lei; Watson, Craig; Wolf, Henrike; Duvernoy, Henri; Duchesne, Simon; Jack, Clifford R; Frisoni, Giovanni B
2015-02-01
This study aimed to have international experts converge on a harmonized definition of whole hippocampus boundaries and segmentation procedures, to define standard operating procedures for magnetic resonance (MR)-based manual hippocampal segmentation. The panel received a questionnaire regarding whole hippocampus boundaries and segmentation procedures. Quantitative information was supplied to allow evidence-based answers. A recursive and anonymous Delphi procedure was used to achieve convergence. Significance of agreement among panelists was assessed by exact probability on Fisher's and binomial tests. Agreement was significant on the inclusion of alveus/fimbria (P = .021), whole hippocampal tail (P = .013), medial border of the body according to visible morphology (P = .0006), and on this combined set of features (P = .001). This definition captures 100% of hippocampal tissue, 100% of Alzheimer's disease-related atrophy, and demonstrated good reliability on preliminary intrarater (0.98) and inter-rater (0.94) estimates. Consensus was achieved among international experts with respect to hippocampal segmentation using MR resulting in a harmonized segmentation protocol. Copyright © 2015 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
Developing a Procedure for Segmenting Meshed Heat Networks of Heat Supply Systems without Outflows
NASA Astrophysics Data System (ADS)
Tokarev, V. V.
2018-06-01
The heat supply systems of cities have, as a rule, a ring structure with the possibility of redistributing the flows. Although a ring structure is more reliable than a radial one, the operators of heat networks prefer to run them in normal modes according to a scheme without overflows of the heat carrier between the heat mains. With such a scheme, it is easier to adjust the networks and to detect and locate faults in them. The article proposes a formulation of the heat network segmenting problem. The problem is set in terms of optimization, with the heat supply system's excessive hydraulic power used as the optimization criterion. The heat supply system computer model has a hierarchically interconnected multilevel structure. Since iterative calculations are only carried out for the level of trunk heat networks, decomposing the entire system into levels allows the dimensionality of the solved subproblems to be reduced by an order of magnitude. Solving the problem by fully enumerating possible segmentation versions is not feasible for systems of realistic size. The article suggests a procedure for searching for a rational segmentation of heat supply networks that limits the search to versions dividing the system into segments near the flow convergence nodes, with subsequent refinement of the solution. The refinement is performed in two stages according to the total excess hydraulic power criterion. At the first stage, the loads are redistributed among the sources. After that, the heat networks are divided into independent fragments, and the possibility of increasing the excess hydraulic power in the obtained fragments is checked by shifting the division places inside a fragment. The proposed procedure was tested on a municipal heat supply system involving six heat mains fed from a common source, 24 loops within the feeding mains plane, and more than 5000 consumers. Application of the proposed segmentation procedure made it possible to find a version requiring 3% less hydraulic power in the heat supply system than the one found using the simultaneous segmentation method.
14 CFR 97.3 - Symbols and terms used in procedures.
Code of Federal Regulations, 2010 CFR
2010-01-01
... established on the intermediate course or final approach course. (2) Initial approach altitude is the altitude (or altitudes, in high altitude procedure) prescribed for the initial approach segment of an...: Speed 166 knots or more. Approach procedure segments for which altitudes (minimum altitudes, unless...
14 CFR 97.3 - Symbols and terms used in procedures.
Code of Federal Regulations, 2011 CFR
2011-01-01
... established on the intermediate course or final approach course. (2) Initial approach altitude is the altitude (or altitudes, in high altitude procedure) prescribed for the initial approach segment of an...: Speed 166 knots or more. Approach procedure segments for which altitudes (minimum altitudes, unless...
Comparison of in vivo 3D cone-beam computed tomography tooth volume measurement protocols.
Forst, Darren; Nijjar, Simrit; Flores-Mir, Carlos; Carey, Jason; Secanell, Marc; Lagravere, Manuel
2014-12-23
The objective of this study is to analyze a set of previously developed and proposed image segmentation protocols for precision in both intra- and inter-rater reliability for in vivo tooth volume measurements using cone-beam computed tomography (CBCT) images. Six 3D volume segmentation procedures were proposed and tested for intra- and inter-rater reliability to quantify maxillary first molar volumes. Ten randomly selected maxillary first molars were measured in vivo in random order three times with 10 days separation between measurements. Intra- and inter-rater agreement for all segmentation procedures was attained using intra-class correlation coefficient (ICC). The highest precision was for automated thresholding with manual refinements. A tooth volume measurement protocol for CBCT images employing automated segmentation with manual human refinement on a 2D slice-by-slice basis in all three planes of space possessed excellent intra- and inter-rater reliability. Three-dimensional volume measurements of the entire tooth structure are more precise than 3D volume measurements of only the dental roots apical to the cemento-enamel junction (CEJ).
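The reliability figures above rest on the intra-class correlation coefficient. The abstract does not state which ICC form was used, so the sketch below shows one common choice (Shrout and Fleiss ICC(2,1), two-way random effects, absolute agreement) as an assumption rather than the authors' exact statistic.

```python
# Hedged sketch: a common intra-class correlation used for rater agreement,
# Shrout & Fleiss ICC(2,1) (two-way random effects, absolute agreement,
# single measures). The abstract does not specify which ICC form was used.
import numpy as np

def icc_2_1(scores):
    """scores: (n_subjects, k_raters) array of volume measurements."""
    n, k = scores.shape
    grand = scores.mean()
    ms_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
    ms_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2) / (k - 1)   # raters
    sse = np.sum((scores - scores.mean(axis=1, keepdims=True)
                  - scores.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_err = sse / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)
```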
New software tools for enhanced precision in robot-assisted laser phonomicrosurgery.
Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G
2012-01-01
This paper describes a new software package created to enhance precision during robot-assisted laser phonomicrosurgery procedures. The new software is composed of three tools for camera calibration, automatic tumor segmentation, and laser tracking. These were designed and developed to improve the outcome of this demanding microsurgical technique, and were tested herein to produce quantitative performance data. The experimental setup was based on the motorized laser micromanipulator created by Istituto Italiano di Tecnologia and the experimental protocols followed are fully described in this paper. The results show the new tools are robust and effective: The camera calibration tool reduced residual errors (RMSE) to 0.009 ± 0.002 mm under 40× microscope magnification; the automatic tumor segmentation tool resulted in deep lesion segmentations comparable to manual segmentations (RMSE= 0.160 ± 0.028 mm under 40× magnification); and the laser tracker tool proved to be reliable even during cutting procedures (RMSE= 0.073 ± 0.023 mm under 40× magnification). These results demonstrate the new software package can provide excellent improvements to the previous microsurgical system, leading to important enhancements in surgical outcome.
Development of a novel 2D color map for interactive segmentation of histological images.
Chaudry, Qaiser; Sharma, Yachna; Raza, Syed H; Wang, May D
2012-05-01
We present a color segmentation approach based on a two-dimensional color map derived from the input image. Pathologists stain tissue biopsies with various colored dyes to see the expression of biomarkers. In these images, because of color variation due to inconsistencies in experimental procedures and lighting conditions, the segmentation used to analyze biological features is usually ad-hoc. Many algorithms like K-means use a single metric to segment the image into different color classes and rarely provide users with powerful color control. Our 2D color map interactive segmentation technique, based on human color perception information and the color distribution of the input image, enables user control without noticeable delay. Our methodology works for different staining types and different types of cancer tissue images. Our proposed method's results show good accuracy with low response and computational time, making it a feasible method for user interactive applications involving segmentation of histological images.
Brun, E; Grandl, S; Sztrókay-Gaul, A; Barbone, G; Mittone, A; Gasilov, S; Bravin, A; Coan, P
2014-11-01
Phase contrast computed tomography has emerged as an imaging method, which is able to outperform present day clinical mammography in breast tumor visualization while maintaining an equivalent average dose. To this day, no segmentation technique takes into account the specificity of the phase contrast signal. In this study, the authors propose a new mathematical framework for human-guided breast tumor segmentation. This method has been applied to high-resolution images of excised human organs, each of several gigabytes. The authors present a segmentation procedure based on the viscous watershed transform and demonstrate the efficacy of this method on analyzer based phase contrast images. The segmentation of tumors inside two full human breasts is then shown as an example of this procedure's possible applications. A correct and precise identification of the tumor boundaries was obtained and confirmed by manual contouring performed independently by four experienced radiologists. The authors demonstrate that applying the watershed viscous transform allows them to perform the segmentation of tumors in high-resolution x-ray analyzer based phase contrast breast computed tomography images. Combining the additional information provided by the segmentation procedure with the already high definition of morphological details and tissue boundaries offered by phase contrast imaging techniques, will represent a valuable multistep procedure to be used in future medical diagnostic applications.
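The viscous watershed transform used by the authors is a specialized variant; as a rough illustration of the human-guided idea, the sketch below uses the standard marker-controlled watershed from scikit-image, with seed points standing in for operator guidance. The function name and the gradient-based elevation map are assumptions, not the paper's method.

```python
# Illustrative marker-controlled watershed (scikit-image); the paper's viscous
# watershed variant and phase-contrast-specific handling are not reproduced.
# User-provided seed points stand in for the "human-guided" interaction.
import numpy as np
from scipy import ndimage
from skimage.filters import sobel
from skimage.segmentation import watershed

def guided_watershed(image, tumor_seeds, background_seeds):
    """Segment a grayscale slice from seed coordinates given as (row, col) lists."""
    elevation = sobel(image)                 # flood the gradient "landscape"
    markers = np.zeros(image.shape, dtype=int)
    for r, c in background_seeds:
        markers[r, c] = 1
    for r, c in tumor_seeds:
        markers[r, c] = 2
    labels = watershed(elevation, markers)
    tumor_mask = labels == 2
    return ndimage.binary_fill_holes(tumor_mask)
```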
Markov models of genome segmentation
NASA Astrophysics Data System (ADS)
Thakur, Vivek; Azad, Rajeev K.; Ramaswamy, Ram
2007-01-01
We introduce Markov models for segmentation of symbolic sequences, extending a segmentation procedure based on the Jensen-Shannon divergence that has been introduced earlier. Higher-order Markov models are more sensitive to the details of local patterns and in application to genome analysis, this makes it possible to segment a sequence at positions that are biologically meaningful. We show the advantage of higher-order Markov-model-based segmentation procedures in detecting compositional inhomogeneity in chimeric DNA sequences constructed from genomes of diverse species, and in application to the E. coli K12 genome, boundaries of genomic islands, cryptic prophages, and horizontally acquired regions are accurately identified.
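The segmentation step that the Markov extension builds on can be illustrated with the order-0 Jensen-Shannon criterion: choose the cut that maximizes the divergence between the symbol compositions of the two halves. The sketch below shows only this base case; the higher-order Markov generalization, recursive splitting, and significance testing are omitted, and the minimum segment length is an arbitrary choice.

```python
# Sketch of the order-0 Jensen-Shannon segmentation step the abstract builds on:
# pick the cut maximizing the divergence between the compositions of the two halves.
# The paper's higher-order Markov extension is not shown; this loop is unoptimized.
import numpy as np

def composition(seq, alphabet="ACGT"):
    counts = np.array([seq.count(s) for s in alphabet], dtype=float)
    return counts / counts.sum()

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def best_cut(seq, min_len=50):
    """Return (position, JS divergence) of the strongest compositional boundary."""
    best_pos, best_js = None, -1.0
    n = len(seq)
    for i in range(min_len, n - min_len):
        p, q = composition(seq[:i]), composition(seq[i:])
        w1, w2 = i / n, (n - i) / n
        js = entropy(w1 * p + w2 * q) - w1 * entropy(p) - w2 * entropy(q)
        if js > best_js:
            best_pos, best_js = i, js
    return best_pos, best_js
```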
A robust hidden Markov Gauss mixture vector quantizer for a noisy source.
Pyun, Kyungsuk Peter; Lim, Johan; Gray, Robert M
2009-07-01
Noise is ubiquitous in real life and changes image acquisition, communication, and processing characteristics in an uncontrolled manner. Gaussian noise and Salt and Pepper noise, in particular, are prevalent in noisy communication channels, camera and scanner sensors, and medical MRI images. It is not unusual for highly sophisticated image processing algorithms developed for clean images to malfunction when used on noisy images. For example, hidden Markov Gauss mixture models (HMGMM) have been shown to perform well in image segmentation applications, but they are quite sensitive to image noise. We propose a modified HMGMM procedure specifically designed to improve performance in the presence of noise. The key feature of the proposed procedure is the adjustment of covariance matrices in Gauss mixture vector quantizer codebooks to minimize an overall minimum discrimination information distortion (MDI). In adjusting covariance matrices, we expand or shrink their elements based on the noisy image. While most results reported in the literature assume a particular noise type, we propose a framework without assuming particular noise characteristics. Without denoising the corrupted source, we apply our method directly to the segmentation of noisy sources. We apply the proposed procedure to the segmentation of aerial images with Salt and Pepper noise and with independent Gaussian noise, and we compare our results with those of the median filter restoration method and the blind deconvolution-based method, respectively. We show that our procedure has better performance than image restoration-based techniques and closely matches the performance of HMGMM for clean images in terms of both visual segmentation results and error rate.
Byrne, N; Velasco Forte, M; Tandon, A; Valverde, I; Hussain, T
2016-01-01
Shortcomings in existing methods of image segmentation preclude the widespread adoption of patient-specific 3D printing as a routine decision-making tool in the care of those with congenital heart disease. We sought to determine the range of cardiovascular segmentation methods and how long each of these methods takes. A systematic review of the literature was undertaken. Medical imaging modality, segmentation methods, segmentation time, segmentation descriptive quality (SDQ) and segmentation software were recorded. In total, 136 studies met the inclusion criteria (1 clinical trial; 80 journal articles; 55 conference, technical and case reports). The most frequently used image segmentation methods were brightness thresholding, region growing and manual editing, as supported by the most popular piece of proprietary software: Mimics (Materialise NV, Leuven, Belgium, 1992-2015). The use of bespoke software developed by individual authors was not uncommon. SDQ indicated that reporting of image segmentation methods was generally poor, with only one in three accounts providing sufficient detail for their procedure to be reproduced. Predominantly anecdotal and case reporting precluded rigorous assessment of risk of bias and strength of evidence. This review finds a reliance on manual and semi-automated segmentation methods which demand a high level of expertise and a significant time commitment on the part of the operator. In light of the findings, we have made recommendations regarding reporting of 3D printing studies. We anticipate that these findings will encourage the development of advanced image segmentation methods.
Blurry-frame detection and shot segmentation in colonoscopy videos
NASA Astrophysics Data System (ADS)
Oh, JungHwan; Hwang, Sae; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny
2003-12-01
Colonoscopy is an important screening procedure for colorectal cancer. During this procedure, the endoscopist visually inspects the colon. Human inspection, however, is not without error. We hypothesize that colonoscopy videos may contain additional valuable information missed by the endoscopist. Video segmentation is the first necessary step for the content-based video analysis and retrieval to provide efficient access to the important images and video segments from a large colonoscopy video database. Based on the unique characteristics of colonoscopy videos, we introduce a new scheme to detect and remove blurry frames, and segment the videos into shots based on the contents. Our experimental results show that the average precision and recall of the proposed scheme are over 90% for the detection of non-blurry images. The proposed method of blurry frame detection and shot segmentation is extensible to the videos captured from other endoscopic procedures such as upper gastrointestinal endoscopy, enteroscopy, cystoscopy, and laparoscopy.
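The abstract does not specify the blur measure used; as a hedged illustration, the sketch below applies a common heuristic (variance of the Laplacian) to flag blurry frames, which may differ from the paper's detector. The threshold value is an assumption.

```python
# Hedged sketch: a common blur metric (variance of the Laplacian) used here to
# flag blurry colonoscopy frames; the paper's own detector may differ.
import cv2

def is_blurry(frame_bgr, threshold=100.0):
    """Return True if the frame's Laplacian variance falls below the threshold."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold

def drop_blurry_frames(frames, threshold=100.0):
    """Keep only the frames judged sharp enough for content-based analysis."""
    return [f for f in frames if not is_blurry(f, threshold)]
```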
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tewari, Sanjit O., E-mail: tewaris@mskcc.org; Petre, Elena N., E-mail: petree@mskcc.org; Osborne, Joseph, E-mail: osbornej@mskcc.org
2013-12-15
A 68-year-old female with colorectal cancer developed a metachronous isolated fluorodeoxyglucose-avid (FDG-avid) segment 5/6 gallbladder fossa hepatic lesion and was referred for percutaneous ablation. Pre-procedure computed tomography (CT) images demonstrated a distended gallbladder abutting the segment 5/6 hepatic metastasis. In order to perform ablation with clear margins and avoid direct puncture and aspiration of the gallbladder, cholecystokinin was administered intravenously to stimulate gallbladder contraction before hydrodissection. Subsequently, the lesion was ablated successfully with sufficient margins, of greater than 1.0 cm, using microwave with ultrasound and FDG PET/CT guidance. The patient tolerated the procedure very well and was discharged home the next day.
Statistical Signal Models and Algorithms for Image Analysis
1984-10-25
In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction
A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.
Guo, Shengwen; Fei, Baowei
2009-03-27
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
A minimal path searching approach for active shape model (ASM)-based segmentation of the lung
NASA Astrophysics Data System (ADS)
Guo, Shengwen; Fei, Baowei
2009-02-01
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 +/- 0.33 pixels, while the error is 1.99 +/- 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung
Guo, Shengwen; Fei, Baowei
2013-01-01
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs. PMID:24386531
From 2D to 3D Supervised Segmentation and Classification for Cultural Heritage Applications
NASA Astrophysics Data System (ADS)
Grilli, E.; Dininno, D.; Petrucci, G.; Remondino, F.
2018-05-01
The digital management of architectural heritage information is still a complex problem, as a heritage object requires an integrated representation of various types of information in order to develop appropriate restoration or conservation strategies. Currently, there is extensive research focused on automatic procedures of segmentation and classification of 3D point clouds or meshes, which can accelerate the study of a monument and integrate it with heterogeneous information and attributes, useful to characterize and describe the surveyed object. The aim of this study is to propose an optimal, repeatable and reliable procedure to manage various types of 3D surveying data and associate them with heterogeneous information and attributes to characterize and describe the surveyed object. In particular, this paper presents an approach for classifying 3D heritage models, starting from the segmentation of their textures based on supervised machine learning methods. Experimental results run on three different case studies demonstrate that the proposed approach is effective and with many further potentials.
An Image Segmentation Based on a Genetic Algorithm for Determining Soil Coverage by Crop Residues
Ribeiro, Angela; Ranz, Juan; Burgos-Artizzu, Xavier P.; Pajares, Gonzalo; Sanchez del Arco, Maria J.; Navarrete, Luis
2011-01-01
Determination of the soil coverage by crop residues after ploughing is a fundamental element of Conservation Agriculture. This paper presents the application of genetic algorithms employed during the fine tuning of the segmentation process of a digital image with the aim of automatically quantifying the residue coverage. In other words, the objective is to achieve a segmentation that would permit the discrimination of the texture of the residue so that the output of the segmentation process is a binary image in which residue zones are isolated from the rest. The RGB images used come from a sample of images in which sections of terrain were photographed with a conventional camera positioned in zenith orientation atop a tripod. The images were taken outdoors under uncontrolled lighting conditions. Up to 92% similarity was achieved between the images obtained by the segmentation process proposed in this paper and the templates made by an elaborate manual tracing process. In addition to the proposed segmentation procedure and the fine tuning procedure that was developed, a global quantification of the soil coverage by residues for the sampled area was achieved that differed by only 0.85% from the quantification obtained using template images. Moreover, the proposed method does not depend on the type of residue present in the image. The study was conducted at the experimental farm “El Encín” in Alcalá de Henares (Madrid, Spain). PMID:22163966
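As a rough illustration of the tuning idea, the sketch below evolves a single hypothetical excess-green threshold with a toy genetic algorithm so that the binary output agrees with manually traced templates; the authors' actual chromosome encoding, operators, and segmentation features are not reproduced.

```python
# Minimal genetic-algorithm sketch in the spirit of the abstract: evolve a
# segmentation parameter (here a single hypothetical excess-green threshold) so
# the binary output matches manually traced templates. Not the authors' exact GA.
import numpy as np

def segment(rgb, thresh):
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    excess_green = 2 * g - r - b              # hypothetical residue/soil feature
    return excess_green > thresh

def fitness(thresh, images, templates):
    # pixel-wise agreement between the binary output and the hand-traced templates
    return np.mean([np.mean(segment(img, thresh) == tmpl)
                    for img, tmpl in zip(images, templates)])

def evolve_threshold(images, templates, pop_size=20, generations=40, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-50, 150, size=pop_size)
    for _ in range(generations):
        scores = np.array([fitness(t, images, templates) for t in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]          # keep the fittest half
        children = parents + rng.normal(0, 5, size=parents.shape)   # Gaussian mutation
        pop = np.concatenate([parents, children])
    return pop[np.argmax([fitness(t, images, templates) for t in pop])]
```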
Brodic, Darko; Milivojevic, Dragan R.; Milivojevic, Zoran N.
2011-01-01
The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates. Because of this mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-line text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency based on the obtained error type classification are proposed. The first is based on the segmentation line error description, while the second one incorporates well-known signal detection theory. Each of them has different capabilities and convenience, but they can be used as supplements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe measurement procedures. PMID:22164106
Brodic, Darko; Milivojevic, Dragan R; Milivojevic, Zoran N
2011-01-01
The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates. Because of this mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-line text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency based on the obtained error type classification are proposed. The first is based on the segmentation line error description, while the second one incorporates well-known signal detection theory. Each of them has different capabilities and convenience, but they can be used as supplements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation line error description has some advantages, characterized by five measures that describe measurement procedures.
Sanjuán, Ana; Price, Cathy J.; Mancini, Laura; Josse, Goulven; Grogan, Alice; Yamamoto, Adam K.; Geva, Sharon; Leff, Alex P.; Yousry, Tarek A.; Seghier, Mohamed L.
2013-01-01
Brain tumors can have different shapes or locations, making their identification very challenging. In functional MRI, it is not unusual that patients have only one anatomical image due to time and financial constraints. Here, we provide a modified automatic lesion identification (ALI) procedure which enables brain tumor identification from single MR images. Our method rests on (A) a modified segmentation-normalization procedure with an explicit “extra prior” for the tumor and (B) an outlier detection procedure for abnormal voxel (i.e., tumor) classification. To minimize tissue misclassification, the segmentation-normalization procedure requires prior information of the tumor location and extent. We therefore propose that ALI is run iteratively so that the output of Step B is used as a patient-specific prior in Step A. We test this procedure on real T1-weighted images from 18 patients, and the results were validated in comparison to two independent observers' manual tracings. The automated procedure identified the tumors successfully with an excellent agreement with the manual segmentation (area under the ROC curve = 0.97 ± 0.03). The proposed procedure increases the flexibility and robustness of the ALI tool and will be particularly useful for lesion-behavior mapping studies, or when lesion identification and/or spatial normalization are problematic. PMID:24381535
Nishiuchi, Yuji; Inui, Tatsuya; Nishio, Hideki; Bódi, József; Kimura, Terutoshi; Tsuji, Frederick I.; Sakakibara, Shumpei
1998-01-01
The present paper describes the total chemical synthesis of the precursor molecule of the Aequorea green fluorescent protein (GFP). The molecule is made up of 238 amino acid residues in a single polypeptide chain and is nonfluorescent. To carry out the synthesis, a procedure, first described in 1981 for the synthesis of complex peptides, was used. The procedure is based on performing segment condensation reactions in solution while providing maximum protection to the segment. The effectiveness of the procedure has been demonstrated by the synthesis of various biologically active peptides and small proteins, such as human angiogenin, a 123-residue protein analogue of ribonuclease A, human midkine, a 121-residue protein, and pleiotrophin, a 136-residue protein analogue of midkine. The GFP precursor molecule was synthesized from 26 fully protected segments in solution, and the final 238-residue peptide was treated with anhydrous hydrogen fluoride to obtain the precursor molecule of GFP containing two Cys(acetamidomethyl) residues. After removal of the acetamidomethyl groups, the product was dissolved in 0.1 M Tris⋅HCl buffer (pH 8.0) in the presence of DTT. After several hours at room temperature, the solution began to emit a green fluorescence (λmax = 509 nm) under near-UV light. Both fluorescence excitation and fluorescence emission spectra were measured and were found to have the same shape and maxima as those reported for native GFP. The present results demonstrate the utility of the segment condensation procedure in synthesizing large protein molecules such as GFP. The result also provides evidence that the formation of the chromophore in GFP is not dependent on any external cofactor. PMID:9811837
NASA Astrophysics Data System (ADS)
Dahdouh, S.; Varsier, N.; Nunez Ochoa, M. A.; Wiart, J.; Peyman, A.; Bloch, I.
2016-02-01
Numerical dosimetry studies require the development of accurate numerical 3D models of the human body. This paper proposes a novel method for building 3D heterogeneous models of young children, combining results obtained from a semi-automatic multi-organ segmentation algorithm and an anatomy deformation method. The data consist of 3D magnetic resonance images, which are first segmented to obtain a set of initial tissues. A deformation procedure guided by the segmentation results is then developed in order to obtain five young children models ranging from the age of 5 to 37 months. By constraining the deformation of an older child model toward a younger one using segmentation results, we ensure the anatomical realism of the models. Using the proposed framework, five models, containing thirteen tissues, are built. Three of these models are used in a prospective dosimetry study to analyze young child exposure to radiofrequency electromagnetic fields. The results tend to show the existence of a relationship between age and whole body exposure. The results also highlight the necessity to specifically study and measure the dielectric properties of child tissues.
Procedure for curve warning signing, delineation, and advisory speeds for horizontal curves.
DOT National Transportation Integrated Search
2010-09-30
Horizontal curves are relatively dangerous features, with collision rates at least 1.5 times that of comparable tangent : sections on average. To help make these segments safer, this research developed consistent study methods with : which field pers...
Automatic tracking of laparoscopic instruments for autonomous control of a cameraman robot.
Khoiy, Keyvan Amini; Mirbagheri, Alireza; Farahmand, Farzam
2016-01-01
An automated instrument tracking procedure was designed and developed for autonomous control of a cameraman robot during laparoscopic surgery. The procedure was based on an innovative marker-free segmentation algorithm for detecting the tip of the surgical instruments in laparoscopic images. A compound measure of Saturation and Value components of HSV color space was incorporated that was enhanced further using the Hue component and some essential characteristics of the instrument segment, e.g., crossing the image boundaries. The procedure was then integrated into the controlling system of the RoboLens cameraman robot, within a triple-thread parallel processing scheme, such that the tip is always kept at the center of the image. Assessment of the performance of the system on prerecorded real surgery movies revealed an accuracy rate of 97% for high quality images and about 80% for those suffering from poor lighting and/or blood, water and smoke noises. A reasonably satisfying performance was also observed when employing the system for autonomous control of the robot in a laparoscopic surgery phantom, with a mean time delay of 200ms. It was concluded that with further developments, the proposed procedure can provide a practical solution for autonomous control of cameraman robots during laparoscopic surgery operations.
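A hedged sketch of the color-based detection idea follows: metallic instruments tend to have low saturation and high value in HSV, and their segment typically crosses the image border. The specific thresholds, the way S and V are combined, and the tip heuristic are illustrative assumptions rather than the authors' compound measure.

```python
# Hedged sketch of a marker-free, color-based instrument detector: metallic tools
# tend to have low saturation (S) and high value (V) in HSV, and their segment
# touches the image border. Thresholds and the tip heuristic are illustrative.
import cv2
import numpy as np

def find_instrument_tip(frame_bgr, s_max=60, v_min=120):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    s, v = hsv[..., 1], hsv[..., 2]
    mask = ((s < s_max) & (v > v_min)).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    h, w = mask.shape
    best, best_area = None, 0
    for i in range(1, n):                               # skip background label 0
        x, y, bw, bh, area = stats[i]
        touches_border = x == 0 or y == 0 or x + bw == w or y + bh == h
        if touches_border and area > best_area:
            best, best_area = i, area
    if best is None:
        return None
    ys, xs = np.nonzero(labels == best)
    dists = np.hypot(xs - w / 2, ys - h / 2)
    # take the blob pixel closest to the image centre as a crude tip estimate
    return int(xs[np.argmin(dists)]), int(ys[np.argmin(dists)])
```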
3D Modelling and Rapid Prototyping for Cardiovascular Surgical Planning - Two Case Studies
NASA Astrophysics Data System (ADS)
Nocerino, E.; Remondino, F.; Uccheddu, F.; Gallo, M.; Gerosa, G.
2016-06-01
In the last years, cardiovascular diagnosis, surgical planning and intervention have taken advantage of 3D modelling and rapid prototyping techniques. The starting data for the whole process is represented by medical imagery, in particular, but not exclusively, computed tomography (CT) or multi-slice CT (MCT) and magnetic resonance imaging (MRI). On the medical imagery, regions of interest, i.e. heart chambers, valves, aorta, coronary vessels, etc., are segmented and converted into 3D models, which can finally be converted into physical replicas through a 3D printing procedure. In this work, an overview of modern approaches for automatic and semiautomatic segmentation of medical imagery for 3D surface model generation is provided. The issue of accuracy checking of surface models is also addressed, together with the critical aspects of converting digital models into physical replicas through 3D printing techniques. A patient-specific 3D modelling and printing procedure (Figure 1) for surgical planning in cases of complex heart disease was developed. The procedure was applied to two case studies, for which MCT scans of the chest are available. In the article, a detailed description of the implemented patient-specific modelling procedure is provided, along with a general discussion on the potential and future developments of personalized 3D modelling and printing for surgical planning and surgeons' practice.
Optimal reinforcement of training datasets in semi-supervised landmark-based segmentation
NASA Astrophysics Data System (ADS)
Ibragimov, Bulat; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž
2015-03-01
During the last couple of decades, the development of computerized image segmentation shifted from unsupervised to supervised methods, which made segmentation results more accurate and robust. However, the main disadvantage of supervised segmentation is a need for manual image annotation that is time-consuming and subjected to human error. To reduce the need for manual annotation, we propose a novel learning approach for training dataset reinforcement in the area of landmark-based segmentation, where newly detected landmarks are optimally combined with reference landmarks from the training dataset and therefore enriches the training process. The approach is formulated as a nonlinear optimization problem, where the solution is a vector of weighting factors that measures how reliable are the detected landmarks. The detected landmarks that are found to be more reliable are included into the training procedure with higher weighting factors, whereas the detected landmarks that are found to be less reliable are included with lower weighting factors. The approach is integrated into the landmark-based game-theoretic segmentation framework and validated against the problem of lung field segmentation from chest radiographs.
Deep residual networks for automatic segmentation of laparoscopic videos of the liver
NASA Astrophysics Data System (ADS)
Gibson, Eli; Robu, Maria R.; Thompson, Stephen; Edwards, P. Eddie; Schneider, Crispin; Gurusamy, Kurinchi; Davidson, Brian; Hawkes, David J.; Barratt, Dean C.; Clarkson, Matthew J.
2017-03-01
Motivation: For primary and metastatic liver cancer patients undergoing liver resection, a laparoscopic approach can reduce recovery times and morbidity while offering equivalent curative results; however, only about 10% of tumours reside in anatomical locations that are currently accessible for laparoscopic resection. Augmenting laparoscopic video with registered vascular anatomical models from pre-procedure imaging could support using laparoscopy in a wider population. Segmentation of liver tissue on laparoscopic video supports the robust registration of anatomical liver models by filtering out false anatomical correspondences between pre-procedure and intra-procedure images. In this paper, we present a convolutional neural network (CNN) approach to liver segmentation in laparoscopic liver procedure videos. Method: We defined a CNN architecture comprising fully-convolutional deep residual networks with multi-resolution loss functions. The CNN was trained in a leave-one-patient-out cross-validation on 2050 video frames from 6 liver resections and 7 laparoscopic staging procedures, and evaluated using the Dice score. Results: The CNN yielded segmentations with Dice scores >=0.95 for the majority of images; however, the inter-patient variability in median Dice score was substantial. Four failure modes were identified from low scoring segmentations: minimal visible liver tissue, inter-patient variability in liver appearance, automatic exposure correction, and pathological liver tissue that mimics non-liver tissue appearance. Conclusion: CNNs offer a feasible approach for accurately segmenting liver from other anatomy on laparoscopic video, but additional data or computational advances are necessary to address challenges due to the high inter-patient variability in liver appearance.
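For reference, the Dice score used to evaluate the segmentations has the standard definition shown below; this is a generic implementation, not code from the study.

```python
# Standard Dice similarity coefficient between a predicted mask and ground truth;
# shown for clarity, not taken from the paper's code.
import numpy as np

def dice(pred, truth, eps=1e-7):
    """pred, truth: boolean masks of the same shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)
```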
Applicability of NASA (ARC) two-segment approach procedures to Boeing Aircraft
NASA Technical Reports Server (NTRS)
Allison, R. L.
1974-01-01
An engineering study to determine the feasibility of applying the NASA (ARC) two-segment approach procedures and avionics to the Boeing fleet of commercial jet transports is presented. This feasibility study is concerned with the speed/path control and systems compatibility aspects of the procedures. Path performance data are provided for representative Boeing 707/727/737/747 passenger models. Thrust margin requirements for speed/path control are analyzed for still air and shearing tailwind conditions. Certification of the two-segment equipment and possible effects on existing airplane certification are discussed. Operational restrictions on use of the procedures with current autothrottles and in icing or reported tailwind conditions are recommended. Using the NASA/UAL 727 procedures as a baseline, maximum upper glide slopes for representative 707/727/737/747 models are defined as a starting point for further study and/or flight evaluation programs.
Segmenting Bone Parts for Bone Age Assessment using Point Distribution Model and Contour Modelling
NASA Astrophysics Data System (ADS)
Kaur, Amandeep; Singh Mann, Kulwinder, Dr.
2018-01-01
Bone age assessment (BAA) is a task performed on radiographs by pediatricians in hospitals to predict final adult height and to diagnose growth disorders by monitoring skeletal development. In building an automatic bone age assessment system, the first routine step is to pre-process the bone X-ray images so that a feature row can be constructed. In this research paper, an enhanced point distribution algorithm using contours has been implemented for segmenting bone parts according to the well-established bone age assessment procedure; this segmentation supports construction of the feature row and, later, of an automatic bone age assessment system. Implementation of the segmentation algorithm shows a high degree of accuracy, in terms of recall and precision, in segmenting bone parts from left-hand X-rays.
Van't Hof, Arnoud; Giannini, Francesco; Ten Berg, Jurrien; Tolsma, Rudolf; Clemmensen, Peter; Bernstein, Debra; Coste, Pierre; Goldstein, Patrick; Zeymer, Uwe; Hamm, Christian; Deliargyris, Efthymios; Steg, Philippe G
2017-08-01
Myocardial reperfusion after primary percutaneous coronary intervention (PCI) can be assessed by the extent of post-procedural ST-segment resolution. The European Ambulance Acute Coronary Syndrome Angiography (EUROMAX) trial compared pre-hospital bivalirudin and pre-hospital heparin or enoxaparin with or without GPIIb/IIIa inhibitors (GPIs) in primary PCI. This nested substudy was performed in centres routinely using pre-hospital GPI in order to compare the impact of randomized treatments on ST-resolution after primary PCI. Residual cumulative ST-segment deviation on the single one hour post-procedure electrocardiogram (ECG) was assessed by an independent core laboratory and was the primary endpoint. It was calculated that 762 evaluable patients were needed to show non-inferiority (85% power, alpha 2.5%) between randomized treatments. A total of 871 patients participated, with electrocardiographic data available in 824 patients (95%). Residual ST-segment deviation one hour after PCI was 3.8±4.9 mm versus 3.9±5.2 mm for bivalirudin and heparin+GPI, respectively (p=0.0019 for non-inferiority). Overall, there were no differences between randomized treatments in any measures of ST-segment resolution either before or after the index procedure. Pre-hospital treatment with bivalirudin is non-inferior to pre-hospital heparin + GPI with regard to residual ST-segment deviation or ST-segment resolution, reflecting comparable myocardial reperfusion with the two strategies.
Recuperator assembly and procedures
Kang, Yungmo; McKeirnan, Jr., Robert D.
2008-08-26
A construction of recuperator core segments is provided which insures proper assembly of the components of the recuperator core segment, and of a plurality of recuperator core segments. Each recuperator core segment must be constructed so as to prevent nesting of fin folds of the adjacent heat exchanger foils of the recuperator core segment. A plurality of recuperator core segments must be assembled together so as to prevent nesting of adjacent fin folds of adjacent recuperator core segments.
Gupta, T C
2007-08-01
A 15-degrees-of-freedom lumped parameter vibratory model of the human body is developed, for vertical mode vibrations, using anthropometric data of the 50th percentile US male. The mass and stiffness of various segments are determined from the elastic moduli of bones and tissues and from the anthropometric data available, assuming the shape of all the segments is ellipsoidal. The damping ratio of each segment is estimated on the basis of the physical structure of the body in a particular posture. Damping constants of various segments are calculated from these damping ratios. The human body is modeled as a linear spring-mass-damper system. The optimal values of the damping ratios of the body segments are estimated, for the 15 degrees of freedom model of the 50th percentile US male, by comparing the response of the model with the experimental response. Formulating a similar vibratory model of the 50th percentile Indian male and comparing the frequency response of the model with the experimental response of the same group of subjects validate the modeling procedure. A range of damping ratios has been considered to develop a vibratory model, which can predict the vertical harmonic response of the human body.
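The assembly of such a lumped model can be illustrated with a much smaller chain; the sketch below builds mass, stiffness, and damping matrices for a hypothetical 3-degree-of-freedom vertical chain and computes its undamped natural frequencies. All numeric values are placeholders, not the paper's anthropometric data.

```python
# Illustrative 3-DOF chain (not the paper's 15-DOF model): assemble lumped mass,
# stiffness, and damping matrices for segments stacked vertically and compute the
# undamped natural frequencies. All numeric values are placeholder assumptions.
import numpy as np
from scipy.linalg import eigh

masses = np.array([10.0, 30.0, 40.0])        # kg, hypothetical segment masses
k = np.array([5e4, 8e4, 1.2e5])              # N/m, hypothetical segment stiffnesses
zeta = 0.3                                   # assumed per-segment damping ratio

def assemble(values):
    """Chain assembly: element i couples node i to node i-1 (or to ground for i=0)."""
    A = np.zeros((3, 3))
    for i, v in enumerate(values):
        A[i, i] += v
        if i > 0:
            A[i - 1, i - 1] += v
            A[i, i - 1] -= v
            A[i - 1, i] -= v
    return A

K = assemble(k)                                        # stiffness matrix
C = assemble(2 * zeta * np.sqrt(k * masses))           # damping constants from ratios
w2 = eigh(K, np.diag(masses), eigvals_only=True)       # generalized eigenproblem K x = w^2 M x
natural_freqs_hz = np.sqrt(w2) / (2 * np.pi)
print(natural_freqs_hz)
```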
Coskunseven, Efekan; Jankov, Mirko R; Grentzelos, Michael A; Plaka, Argyro D; Limnopoulou, Aliki N; Kymionis, George D
2013-01-01
To present the results of topography-guided transepithelial photorefractive keratectomy (PRK) after intracorneal ring segments implantation followed by corneal collagen cross-linking (CXL) for keratoconus. In this prospective case series, 10 patients (16 eyes) with progressive keratoconus were included. All patients underwent topography-guided transepithelial PRK after Keraring intracorneal ring segments (Mediphacos Ltda) implantation, followed by CXL treatment. The follow-up period was 6 months after the last procedure for all patients. Time interval between both intracorneal ring segments implantation and CXL and between CXL and topography-guided transepithelial PRK was 6 months. LogMAR mean uncorrected distance visual acuity and mean corrected distance visual acuity were significantly improved (P<.05) from 1.14±0.36 and 0.75±0.24 preoperatively to 0.25±0.13 and 0.13±0.06 after the completion of the three-step procedure, respectively. Mean spherical equivalent refraction was significantly reduced (P<.05) from -5.66±5.63 diopters (D) preoperatively to -0.98±2.21 D after the three-step procedure. Mean steep and flat keratometry values were significantly reduced (P<.05) from 54.65±5.80 D and 47.80±3.97 D preoperatively to 45.99±3.12 D and 44.69±3.19 D after the three-step procedure, respectively. Combined topography-guided transepithelial PRK with intracorneal ring segments implantation and CXL in a three-step procedure seems to be an effective, promising treatment sequence offering patients a functional visual acuity and ceasing progression of the ectatic disorder. A longer follow-up and larger case series are necessary to thoroughly evaluate safety, stability, and efficacy of this innovative procedure. Copyright 2013, SLACK Incorporated.
NASA Astrophysics Data System (ADS)
Hamraz, Hamid; Contreras, Marco A.; Zhang, Jun
2017-08-01
Airborne LiDAR point cloud representing a forest contains 3D data, from which vertical stand structure even of understory layers can be derived. This paper presents a tree segmentation approach for multi-story stands that stratifies the point cloud to canopy layers and segments individual tree crowns within each layer using a digital surface model based tree segmentation method. The novelty of the approach is the stratification procedure that separates the point cloud to an overstory and multiple understory tree canopy layers by analyzing vertical distributions of LiDAR points within overlapping locales. The procedure does not make a priori assumptions about the shape and size of the tree crowns and can, independent of the tree segmentation method, be utilized to vertically stratify tree crowns of forest canopies. We applied the proposed approach to the University of Kentucky Robinson Forest - a natural deciduous forest with complex and highly variable terrain and vegetation structure. The segmentation results showed that using the stratification procedure strongly improved detecting understory trees (from 46% to 68%) at the cost of introducing a fair number of over-segmented understory trees (increased from 1% to 16%), while barely affecting the overall segmentation quality of overstory trees. Results of vertical stratification of the canopy showed that the point density of understory canopy layers were suboptimal for performing a reasonable tree segmentation, suggesting that acquiring denser LiDAR point clouds would allow more improvements in segmenting understory trees. As shown by inspecting correlations of the results with forest structure, the segmentation approach is applicable to a variety of forest types.
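A hedged sketch of the stratification idea follows: within one locale, look for a persistent gap in the vertical histogram of point heights and split the points above and below it into overstory and understory layers. The bin size, gap criterion, and function name are assumptions; the paper's locale analysis is more elaborate.

```python
# Hedged sketch of vertical stratification inside one locale: find the highest
# persistent gap in the height histogram and label points above it as overstory.
# Bin size and gap length are illustrative choices, not the paper's parameters.
import numpy as np

def stratify_locale(heights, bin_size=1.0, min_gap_bins=2):
    """heights: 1D array of normalized point heights (m) inside one locale."""
    bins = np.arange(0.0, heights.max() + bin_size, bin_size)
    counts, edges = np.histogram(heights, bins=bins)
    empty = counts == 0
    run, split = 0, None
    for i in range(len(counts)):                 # keep the top of the highest gap found
        run = run + 1 if empty[i] else 0
        if run >= min_gap_bins:
            split = edges[i + 1]
    if split is None:
        return np.zeros(len(heights), dtype=int)  # single canopy layer
    return (heights > split).astype(int)           # 1 = overstory, 0 = understory
```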
Monitoring Statistics Which Have Increased Power over a Reduced Time Range.
ERIC Educational Resources Information Center
Tang, S. M.; MacNeill, I. B.
1992-01-01
The problem of monitoring trends for changes at unknown times is considered. Statistics that permit one to focus high power on a segment of the monitored period are studied. Numerical procedures are developed to compute the null distribution of these statistics. (Author)
NASA Technical Reports Server (NTRS)
Koppen, Sandra V.; Nguyen, Truong X.; Mielnik, John J.
2010-01-01
The NASA Langley Research Center's High Intensity Radiated Fields Laboratory has developed a capability based on the RTCA/DO-160F Section 20 guidelines for radiated electromagnetic susceptibility testing in reverberation chambers. Phase 1 of the test procedure utilizes mode-tuned stirrer techniques and E-field probe measurements to validate chamber uniformity, determines chamber loading effects, and defines a radiated susceptibility test process. The test procedure is segmented into numbered operations that are largely software controlled. This document is intended as a laboratory test reference and includes diagrams of test setups, equipment lists, as well as test results and analysis. Phase 2 of development is discussed.
Computer Aided Segmentation Analysis: New Software for College Admissions Marketing.
ERIC Educational Resources Information Center
Lay, Robert S.; Maguire, John J.
1983-01-01
Compares segmentation solutions obtained using a binary segmentation algorithm (THAID) and a new chi-square-based procedure (CHAID) that segments the prospective pool of college applicants using application and matriculation as criteria. Results showed a higher number of estimated qualified inquiries and more accurate estimates with CHAID. (JAC)
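The chi-square splitting idea behind CHAID can be sketched as follows: cross-tabulate each candidate predictor against the outcome (for example, matriculation) and keep the predictor with the most significant association. Category merging and the stopping rules of full CHAID are omitted, and the column names are hypothetical.

```python
# Hedged sketch of the chi-square splitting idea behind CHAID: choose the candidate
# predictor whose association with the outcome is most significant. Full CHAID's
# category merging and stopping rules are omitted; column names are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

def best_split(df, predictors, outcome="matriculated"):
    best_var, best_p = None, 1.0
    for var in predictors:
        table = pd.crosstab(df[var], df[outcome])     # contingency table
        _, p, _, _ = chi2_contingency(table)
        if p < best_p:
            best_var, best_p = var, p
    return best_var, best_p
```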
Statistical Segmentation of Surgical Instruments in 3D Ultrasound Images
Linguraru, Marius George; Vasilyev, Nikolay V.; Del Nido, Pedro J.; Howe, Robert D.
2008-01-01
The recent development of real-time 3D ultrasound enables intracardiac beating heart procedures, but the distorted appearance of surgical instruments is a major challenge to surgeons. In addition, tissue and instruments have similar gray levels in US images and the interface between instruments and tissue is poorly defined. We present an algorithm that automatically estimates instrument location in intracardiac procedures. Expert-segmented images are used to initialize the statistical distributions of blood, tissue and instruments. Voxels are labeled through an iterative expectation-maximization algorithm using information from the neighboring voxels through a smoothing kernel. Once the three classes of voxels are separated, additional neighboring information is combined with the known shape characteristics of instruments in order to correct for misclassifications. We analyze the major axis of segmented data through their principal components and refine the results by a watershed transform, which corrects the results at the contact between instrument and tissue. We present results on 3D in-vitro data from a tank trial, and 3D in-vivo data from cardiac interventions on porcine beating hearts, using instruments of four types of materials. The comparison of algorithm results to expert-annotated images shows the correct segmentation and position of the instrument shaft. PMID:17521802
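A simplified sketch of the EM labelling step is shown below: a three-component Gaussian mixture is fit to voxel intensities, with component means initialized from a few expert-labelled voxels. The neighboring-voxel smoothing kernel and the shape-based correction described above are not reproduced, and the use of intensity alone is an assumption.

```python
# Simplified sketch of the EM labelling step: fit a three-class Gaussian mixture to
# voxel intensities (blood / tissue / instrument), with means initialized from a few
# expert-labelled voxels. The neighborhood smoothing and shape-based correction
# described in the abstract are not reproduced here.
import numpy as np
from sklearn.mixture import GaussianMixture

def label_voxels(volume, blood_samples, tissue_samples, instrument_samples):
    """volume: 3D intensity array; *_samples: 1D arrays of expert-picked intensities."""
    init_means = np.array([[blood_samples.mean()],
                           [tissue_samples.mean()],
                           [instrument_samples.mean()]])
    gmm = GaussianMixture(n_components=3, means_init=init_means, random_state=0)
    intensities = volume.reshape(-1, 1).astype(float)
    labels = gmm.fit_predict(intensities)          # EM runs under the hood
    # component order generally follows the initial means: 0=blood, 1=tissue, 2=instrument
    return labels.reshape(volume.shape)
```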
NASA Technical Reports Server (NTRS)
Hague, D. S.; Rozendaal, H. L.
1977-01-01
A rapid mission analysis code based on the use of approximate flight path equations of motion is presented. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed characteristics were specified in tabular form. The code also contains extensive flight envelope performance mapping capabilities. Approximate take off and landing analyses were performed. At high speeds, centrifugal lift effects were accounted for. Extensive turbojet and ramjet engine scaling procedures were incorporated in the code.
NASA Technical Reports Server (NTRS)
Bergan, Andrew C.; Garcea, Serafina C.
2017-01-01
The role of longitudinal compressive failure mechanisms in notched cross-ply laminates is studied experimentally with in-situ synchrotron radiation based computed tomography. Carbon/epoxy specimens loaded monotonically in uniaxial compression exhibited a quasi-stable failure process, which was captured with computed tomography scans recorded continuously with a temporal resolution of 2.4 seconds and a spatial resolution of 1.1 microns per voxel. A detailed chronology of the initiation and propagation of longitudinal matrix splitting cracks, in-plane and out-of-plane kink bands, shear-driven fiber failure, delamination, and transverse matrix cracks is provided with a focus on kink bands as the dominant failure mechanism. An automatic segmentation procedure is developed to identify the boundary surfaces of a kink band. The segmentation procedure enables 3-dimensional visualization of the kink band and conveys the orientation, inclination, and spatial variation of the kink band. The kink band inclination and length are examined using the segmented data, revealing tunneling and spatial variations not apparent from studying the 2-dimensional section data.
Baca, A
1996-04-01
A method has been developed for the precise determination of anthropometric dimensions from the video images of four different body configurations. High precision is achieved by incorporating techniques for finding the location of object boundaries with sub-pixel accuracy, the implementation of calibration algorithms, and by taking into account the varying distances of the body segments from the recording camera. The system allows automatic segment boundary identification from the video image, if the boundaries are marked on the subject by black ribbons. In connection with the mathematical finite-mass-element segment model of Hatze, body segment parameters (volumes, masses, the three principal moments of inertia, the three local coordinates of the segmental mass centers etc.) can be computed by using the anthropometric data determined videometrically as input data. Compared to other, recently published video-based systems for the estimation of the inertial properties of body segments, the present algorithms reduce errors originating from optical distortions, inaccurate edge-detection procedures, and user-specified upper and lower segment boundaries or threshold levels for the edge-detection. The video-based estimation of human body segment parameters is especially useful in situations where ease of application and rapid availability of comparatively precise parameter values are of importance.
Segmentation of lung fields using Chan-Vese active contour model in chest radiographs
NASA Astrophysics Data System (ADS)
Sohn, Kiwon
2011-03-01
A CAD tool for chest radiographs consists of several procedures, and the very first step is segmentation of the lung fields. We develop a novel methodology for segmentation of lung fields in chest radiographs that satisfies the following two requirements. First, we aim to develop a segmentation method that does not need a training stage with manual estimation of anatomical features in a large training dataset of images. Second, for ease of implementation, it is desirable to apply a well-established model that is widely used for various image-partitioning tasks. The Chan-Vese active contour model, which is based on the Mumford-Shah functional in the level set framework, is applied for segmentation of the lung fields. With this model, segmentation of lung fields can be carried out without detailed prior knowledge of the radiographic anatomy of the chest, yet in some chest radiographs the trachea regions are unfavorably segmented out in addition to the lung field contours. To eliminate artifacts from the trachea, we locate the upper end of the trachea, find and delineate a vertical center line of the trachea, and then brighten the trachea region to make it less distinctive. The segmentation process is finalized by subsequent morphological operations. We randomly select 30 images from the Japanese Society of Radiological Technology image database to test the proposed methodology, and the results are shown. We hope our segmentation technique can help to promote the development of CAD tools, especially for emerging chest radiographic imaging techniques such as dual-energy radiography and chest tomosynthesis.
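As a hedged illustration of this pipeline (not the authors' code), scikit-image ships a Chan-Vese implementation that can be applied directly to a grayscale chest radiograph; the trachea-suppression step is only indicated by a comment, and the file name and parameter values are assumptions.

```python
# Chan-Vese lung field segmentation sketch using scikit-image as a stand-in.
import numpy as np
from skimage import io, img_as_float
from skimage.segmentation import chan_vese
from skimage.morphology import remove_small_objects, binary_opening, disk

image = img_as_float(io.imread("chest_xray.png", as_gray=True))  # assumed input

# In the paper, the trachea is first located and brightened so that it is not
# segmented together with the lung fields; that step is omitted here.

# Chan-Vese active contour (Mumford-Shah based, level set framework).
lung_mask = chan_vese(image, mu=0.25, lambda1=1.0, lambda2=1.0,
                      init_level_set="checkerboard")

# Keep the darker phase (the lung fields), then finish with morphology.
if image[lung_mask].mean() > image[~lung_mask].mean():
    lung_mask = ~lung_mask
lung_mask = binary_opening(lung_mask, disk(3))
lung_mask = remove_small_objects(lung_mask, min_size=2000)
```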
Segmentation by fusion of histogram-based k-means clusters in different color spaces.
Mignotte, Max
2008-05-01
This paper presents a new, simple, and efficient segmentation approach based on a fusion procedure which aims at combining several segmentation maps associated with simpler partition models in order to finally obtain a more reliable and accurate segmentation result. The different label fields to be fused in our application are given by the same simple (K-means based) clustering technique applied to an input image expressed in different color spaces. Our fusion strategy aims at combining these segmentation maps with a final clustering procedure that uses, as input features, the local histograms of the class labels previously estimated and associated with each site for all these initial partitions. This fusion framework remains simple to implement, fast, and general enough to be applied to various computer vision applications (e.g., motion detection and segmentation), and it has been successfully applied on the Berkeley image database. The experiments reported in this paper illustrate the potential of this approach compared to the state-of-the-art segmentation methods recently proposed in the literature.
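A minimal sketch of the fusion idea follows: the same K-means clustering is run on the image expressed in several color spaces, local label histograms are computed for each resulting label map, and a final K-means on these histogram features produces the fused segmentation. The number of classes, the window size and the choice of color spaces are illustrative assumptions, not the paper's settings.

```python
# Fusion of histogram-based K-means clusters computed in different color spaces.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.cluster import KMeans
from skimage import io, img_as_float
from skimage.color import rgb2hsv, rgb2lab, rgb2ycbcr

def local_label_histograms(labels, n_classes, window=7):
    """Per-pixel histogram of class labels over a square neighborhood."""
    feats = [uniform_filter((labels == c).astype(float), size=window)
             for c in range(n_classes)]
    return np.stack(feats, axis=-1)

rgb = img_as_float(io.imread("image.png"))[..., :3]   # assumed input image
spaces = [rgb, rgb2hsv(rgb), rgb2lab(rgb), rgb2ycbcr(rgb)]
K = 6  # number of classes per initial partition (assumption)

features = []
for img in spaces:
    pixels = img.reshape(-1, 3)
    labels = KMeans(n_clusters=K, n_init=10).fit_predict(pixels).reshape(rgb.shape[:2])
    features.append(local_label_histograms(labels, K))

# Final clustering on the concatenated label-histogram features gives the fusion.
fused = KMeans(n_clusters=K, n_init=10).fit_predict(
    np.concatenate(features, axis=-1).reshape(-1, 4 * K)).reshape(rgb.shape[:2])
```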
Hart, Nicolas H.; Nimphius, Sophia; Spiteri, Tania; Cochrane, Jodie L.; Newton, Robert U.
2015-01-01
Musculoskeletal examinations provide informative and valuable quantitative insight into muscle and bone health. DXA is one mainstream tool used to accurately and reliably determine body composition components and bone mass characteristics in-vivo. Presently, whole body scan models separate the body into axial and appendicular regions; however, there is a need for localised appendicular segmentation models to further examine regions of interest within the upper and lower extremities. Similarly, inconsistencies pertaining to patient positioning exist in the literature which influence measurement precision and analysis outcomes, highlighting a need for a standardised procedure. This paper provides standardised and reproducible: 1) positioning and analysis procedures using DXA and 2) reliable segmental examinations through descriptive appendicular boundaries. Whole-body scans were performed on forty-six (n = 46) football athletes (age: 22.9 ± 4.3 yrs; height: 1.85 ± 0.07 m; weight: 87.4 ± 10.3 kg; body fat: 11.4 ± 4.5 %) using DXA. All segments across all scans were analysed three times by the main investigator on three separate days, and by three independent investigators a week following the original analysis. To examine intra-rater and inter-rater reliability between days and between researchers, coefficients of variation (CV) and intraclass correlation coefficients (ICC) were determined. Positioning and segmental analysis procedures presented in this study produced very high, nearly perfect intra-tester (CV ≤ 2.0%; ICC ≥ 0.988) and inter-tester (CV ≤ 2.4%; ICC ≥ 0.980) reliability, demonstrating excellent reproducibility within and between practitioners. Standardised examinations of axial and appendicular segments are necessary. Future studies aiming to quantify and report segmental analyses of the upper- and lower-body musculoskeletal properties using whole-body DXA scans are encouraged to use the patient positioning and image analysis procedures outlined in this paper. Key points: Musculoskeletal examinations using DXA technology require highly standardised and reproducible patient positioning and image analysis procedures to accurately measure and monitor axial, appendicular and segmental regions of interest. Internal rotation and fixation of the lower-limbs is strongly recommended during whole-body DXA scans to prevent undesired movement, improve frontal mass accessibility and enhance ankle joint visibility during scan performance and analysis. Appendicular segmental analyses using whole-body DXA scans are highly reliable for all regional upper-body and lower-body segmentations, with hard-tissue (CV ≤ 1.5%; R ≥ 0.990) achieving greater reliability and lower error than soft-tissue (CV ≤ 2.4%; R ≥ 0.980) masses when using our appendicular segmental boundaries. PMID:26336349
A Modular Hierarchical Approach to 3D Electron Microscopy Image Segmentation
Liu, Ting; Jones, Cory; Seyedhosseini, Mojtaba; Tasdizen, Tolga
2014-01-01
The study of neural circuit reconstruction, i.e., connectomics, is a challenging problem in neuroscience. Automated and semi-automated electron microscopy (EM) image analysis can be tremendously helpful for connectomics research. In this paper, we propose a fully automatic approach for intra-section segmentation and inter-section reconstruction of neurons using EM images. A hierarchical merge tree structure is built to represent multiple region hypotheses and supervised classification techniques are used to evaluate their potentials, based on which we resolve the merge tree with consistency constraints to acquire final intra-section segmentation. Then, we use a supervised learning based linking procedure for the inter-section neuron reconstruction. Also, we develop a semi-automatic method that utilizes the intermediate outputs of our automatic algorithm and achieves intra-segmentation with minimal user intervention. The experimental results show that our automatic method can achieve close-to-human intra-segmentation accuracy and state-of-the-art inter-section reconstruction accuracy. We also show that our semi-automatic method can further improve the intra-segmentation accuracy. PMID:24491638
Automated MRI segmentation for individualized modeling of current flow in the human head.
Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C
2013-12-01
High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.
Assessing the Robustness of Complete Bacterial Genome Segmentations
NASA Astrophysics Data System (ADS)
Devillers, Hugo; Chiapello, Hélène; Schbath, Sophie; El Karoui, Meriem
Comparison of closely related bacterial genomes has revealed the presence of highly conserved sequences forming a "backbone" that is interrupted by numerous, less conserved, DNA fragments. Segmentation of bacterial genomes into backbone and variable regions is particularly useful to investigate bacterial genome evolution. Several software tools have been designed to compare complete bacterial chromosomes and a few online databases store pre-computed genome comparisons. However, very few statistical methods are available to evaluate the reliability of these software tools and to compare the results obtained with them. To fill this gap, we have developed two local scores to measure the robustness of bacterial genome segmentations. Our method uses a simulation procedure based on random perturbations of the compared genomes. The scores presented in this paper are simple to implement, and our results show that they make it easy to discriminate between robust and non-robust bacterial genome segmentations when using aligners such as MAUVE and MGA.
Bonding Thin Mirror Segments Without Distortion for the International X-Ray Observatory
NASA Technical Reports Server (NTRS)
Evans, Tyler C.; Chan, Kai-Wing; Saha, Timo T.
2011-01-01
The International X-Ray Observatory (IXO) uses thin glass optics to achieve a large effective area and fine angular resolution. The thin glass mirror segments must be transferred from their fabricated state to a permanent structure without imparting distortion. IXO will incorporate about fourteen thousand thin mirror segments to achieve the mission goal of 3.0 square meters of effective area at 1.25 keV with an angular resolution of five arcseconds. To preserve figure and alignment, each mirror segment must be bonded with sub-micron movement at each corner. Recent advances in technology development have produced significant x-ray test results for a bonded pair of mirrors. Three specific bonding cycles will be described, highlighting the improvements in procedure, temperature control, and precision bonding. This paper will highlight the recent advances in alignment and permanent bonding as well as the results they have produced.
Machine learning in soil classification.
Bhattacharya, B; Solomatine, D P
2006-03-01
In a number of engineering problems, e.g. in geotechnics and petroleum engineering, intervals of measured series data (signals) must be assigned a class while maintaining the constraint of contiguity, and standard classification methods can be inadequate. Classification in this case requires the involvement of an expert who observes the magnitude and trends of the signals in addition to any a priori information that might be available. In this paper, an approach for automating this classification procedure is presented. First, a segmentation algorithm is developed and applied to segment the measured signals. Second, the salient features of these segments are extracted using the boundary energy method. Classifiers employing Decision Trees, ANN and Support Vector Machines are then built on the measured data and extracted features to assign classes to the segments. The methodology was tested in classifying sub-surface soil using measured data from Cone Penetration Testing, and satisfactory results were obtained.
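A hedged sketch of the classification stage follows: each signal segment is summarized by a feature vector and a classifier assigns it a soil class. The features used here (segment mean, spread and slope) are illustrative stand-ins for the boundary-energy features of the paper, and the data files and classifier settings are assumptions.

```python
# Segment-wise soil classification with decision tree, ANN and SVM classifiers.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def segment_features(signal, boundaries):
    """Simple per-segment statistics (mean, std, slope) as stand-in features."""
    feats = []
    for start, end in zip(boundaries[:-1], boundaries[1:]):
        seg = signal[start:end]
        slope = np.polyfit(np.arange(len(seg)), seg, 1)[0]
        feats.append([seg.mean(), seg.std(), slope])
    return np.array(feats)

# X: feature vectors of labelled segments, y: expert-assigned soil classes (assumed files).
X, y = np.load("segment_features.npy"), np.load("segment_classes.npy")

for name, clf in [("Decision tree", DecisionTreeClassifier()),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: accuracy {scores.mean():.2f} +/- {scores.std():.2f}")
```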
Tingelhoff, K; Moral, A I; Kunkel, M E; Rilk, M; Wagner, I; Eichhorn, K G; Wahl, F M; Bootz, F
2007-01-01
Segmentation of medical image data has become increasingly important in recent years. The results are used for diagnosis, surgical planning or workspace definition of robot-assisted systems. The purpose of this paper is to find out whether manual or semi-automatic segmentation is adequate for the ENT surgical workflow or whether fully automatic segmentation of the paranasal sinuses and nasal cavity is needed. We present a comparison of manual and semi-automatic segmentation of the paranasal sinuses and the nasal cavity. Manual segmentation is performed with custom software, whereas semi-automatic segmentation is realized with a commercial product (Amira). For this study we used a CT dataset of the paranasal sinuses which consists of 98 transversal slices, each 1.0 mm thick, with a resolution of 512 x 512 pixels. For the analysis of both segmentation procedures we used volume, extension (width, length and height), segmentation time and 3D reconstruction. The segmentation time was reduced from 960 minutes with manual segmentation to 215 minutes with semi-automatic segmentation. We found the highest variances when segmenting the nasal cavity. For the paranasal sinuses, the manual and semi-automatic volume differences are not significant. Depending on the required segmentation accuracy, both approaches deliver useful results and could be used, e.g., for robot-assisted systems. Nevertheless, neither procedure is suitable for the everyday surgical workflow, because both take too much time. Fully automatic and reproducible segmentation algorithms are needed for segmentation of the paranasal sinuses and nasal cavity.
Review of manual control methods for handheld maneuverable instruments.
Fan, Chunman; Dodou, Dimitra; Breedveld, Paul
2013-06-01
With the introduction of new technologies, surgical procedures have shifted from free access in open surgery towards limited access in minimal access surgery. Improving access to difficult-to-reach anatomic sites, e.g. in neurosurgery or percutaneous interventions, requires advanced maneuverable instrumentation. Advances in maneuverable technology require the development of dedicated methods enabling surgeons to stay in direct, manual control of these complex instruments. This article gives an overview of the state of the art in the development of manual control methods for handheld maneuverable instruments. It categorizes the manual control methods at three levels: a) number of steerable segments, b) number of Degrees Of Freedom (DOF), and c) coupling between control motion of the handle and steering motion of the tip. The literature search was performed using Web of Science, Scopus and PubMed. The study shows that in controlling single steerable segments, direct as well as indirect control methods have been developed, whereas in controlling multiple steerable segments, a gradual shift can be noticed from parallel and serial control to integrated control. The development of multi-segmented maneuverable instruments is still at an early stage, and an intuitive and effective method to control them has to become a primary focus in the domain of minimal access surgery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brun, E., E-mail: emmanuel.brun@esrf.fr; Grandl, S.; Sztrókay-Gaul, A.
Purpose: Phase contrast computed tomography has emerged as an imaging method which is able to outperform present day clinical mammography in breast tumor visualization while maintaining an equivalent average dose. To this day, no segmentation technique takes into account the specificity of the phase contrast signal. In this study, the authors propose a new mathematical framework for human-guided breast tumor segmentation. This method has been applied to high-resolution images of excised human organs, each of several gigabytes. Methods: The authors present a segmentation procedure based on the viscous watershed transform and demonstrate the efficacy of this method on analyzer-based phase contrast images. The segmentation of tumors inside two full human breasts is then shown as an example of this procedure's possible applications. Results: A correct and precise identification of the tumor boundaries was obtained and confirmed by manual contouring performed independently by four experienced radiologists. Conclusions: The authors demonstrate that applying the viscous watershed transform allows them to perform the segmentation of tumors in high-resolution x-ray analyzer-based phase contrast breast computed tomography images. Combining the additional information provided by the segmentation procedure with the already high definition of morphological details and tissue boundaries offered by phase contrast imaging techniques will represent a valuable multistep procedure to be used in future medical diagnostic applications.
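As a hedged sketch of human-guided, watershed-based tumor delineation: the viscous watershed variant used in the paper is not available in standard libraries, so a marker-controlled watershed from scikit-image is used here as a stand-in, with smoothing of the gradient relief playing a role loosely analogous to the viscosity. The file name, smoothing radius and seed coordinates are placeholders.

```python
# Marker-controlled watershed segmentation of a tumor in a phase-contrast CT volume.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

volume = np.load("breast_pcct_volume.npy")  # assumed phase-contrast CT volume

# Smoothed gradient magnitude as the flooding relief; the smoothing regularizes
# the relief so the flooding does not leak through thin, noisy boundaries.
relief = ndi.gaussian_gradient_magnitude(volume.astype(float), sigma=2)

# Human guidance: one marker inside the tumor, one in surrounding tissue
# (voxel coordinates are placeholders for the user-selected seeds).
markers = np.zeros(volume.shape, dtype=np.int32)
markers[120, 240, 240] = 1   # tumor seed
markers[40, 60, 60] = 2      # background / healthy tissue seed

labels = watershed(relief, markers)
tumor_mask = labels == 1
print("Tumor volume (voxels):", int(tumor_mask.sum()))
```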
Drobinski, G; Thomas, D; Funck, F; Metzger, J P; Canny, M; Grosgogeat, Y
1986-08-01
Certain surgical techniques may make it difficult to catheterize the coronary ostia and perform percutaneous coronary angioplasty. We report the case of a 48 year old patient who developed unstable angina four years after a Bentall's procedure with reimplantation of the coronary arteries on a Dacron coronary prosthesis. The anginal pain was related to very severe stenosis of the proximal segment of the left anterior descending artery. The difficulties encountered during the dilatation procedure were due to: (a) the ectopic position of the ostium of the prosthesis on the anterior aortic wall; (b) the forces exerted on the aortic prosthesis wall and on the valvular prosthesis during positioning of the guiding catheter which were poorly tolerated and induced a vagal reaction; (c) the direction taken by the distal tip of the guiding catheter, perpendicular to the wall of the aortic prosthesis; (d) the sinuosity of the arterial trajectory: the left coronary segment of the coronary prosthesis was directed towards the left circumflex artery rather than towards the left anterior descending artery. Coronary angioplasty succeeded after relatively complex technical procedures: special guiding catheter, unusual intra-aortic manoeuvres for positioning the guiding catheter, dilatation catheter change on a 3-metre long guide wire in order to cross the stenotic segment; this was performed with a super low-profiled dilatation catheter. There were no complications and anginal pain disappeared.
NASA Technical Reports Server (NTRS)
Sielken, R. L., Jr. (Principal Investigator)
1981-01-01
Several methods of estimating individual crop acreages using a mixture of completely identified and partially identified (generic) segments from a single growing year are derived and discussed. A small Monte Carlo study of eight estimators is presented. The relative empirical behavior of these estimators is discussed, as are the effects of segment sample size and amount of partial identification. The principal recommendations are (1) not to exclude, but rather to incorporate, partially identified sample segments into the estimation procedure, (2) to avoid having a large percentage (say 80%) of only partially identified segments in the sample, and (3) to use the maximum likelihood estimator, although the weighted least squares estimator and the least squares ratio estimator both perform almost as well. Sets of spring small grains (North Dakota) data were used.
W. M. Keck Observatory primary mirror segment repair project: overview and status
NASA Astrophysics Data System (ADS)
Meeks, Robert L.; Doyle, Steve; Higginson, Jamie; Hudek, John S.; Irace, William; McBride, Dennis; Pollard, Mike; Tai, Kuochou; Von Boeckmann, Tod; Wold, Leslie; Wold, Truman
2016-07-01
The W. M. Keck Observatory Segment Repair Project is repairing stress-induced fractures near the support points in the primary mirror segments. The cracks are believed to result from deficiencies in the original design and implementation of the adhesive joints connecting the Invar support components to the ZERODUR mirror. Stresses caused by temperature cycling over 20 years of service drove cracks that developed at the glass-metal interfaces. Over the last few years the extent and cause of the cracks have been studied, and new supports have been designed. Repair of the damaged glass required development of specialized tools and procedures for: (1) transport of the segments; (2) pre-repair metrology to establish the initial condition; (3) removal of support hardware assemblies; (4) removal of the original supports; (5) grinding and re-surfacing the damaged glass areas; (6) etching to remove sub-surface damage; (7) bonding new supports; (8) re-installation of support assemblies; and (9) post-repair metrology. Repair of the first segment demonstrated the new tools and processes. On-sky measurements before and after repair verified compliance with the requirements. This paper summarizes the repair process, on-sky results, and transportation system, and also provides an update on the project status and schedule for repairing all 84 mirror segments. Strategies for maintaining quality and ensuring that repairs are done consistently are also presented.
Segmenting patients and physicians using preferences from discrete choice experiments.
Deal, Ken
2014-01-01
People often form groups or segments that have similar interests and needs and seek similar benefits from health providers. Health organizations need to understand whether the same health treatments, prevention programs, services, and products should be applied to everyone in the relevant population or whether different treatments need to be provided to each of several segments that are relatively homogeneous internally but heterogeneous among segments. Our objective was to explain the purposes, benefits, and methods of segmentation for health organizations, and to illustrate the process of segmenting health populations based on preference coefficients from a discrete choice conjoint experiment (DCE) using an example study of prevention of cyberbullying among university students. We followed a two-level procedure for investigating segmentation, incorporating several methods for forming segments in Level 1 using DCE preference coefficients and testing their quality, reproducibility, and usability by health decision makers. Covariates (demographic, behavioral, lifestyle, and health state variables) were included in Level 2 to further evaluate quality and to support the scoring of large databases and developing typing tools for assigning those in the relevant population, but not in the sample, to the segments. Several segmentation solution candidates were found during the Level 1 analysis, and the relationship of the preference coefficients to the segments was investigated using predictive methods. Those segmentations were tested for their quality and reproducibility, and three were found to be very close in quality. While one seemed better than others in the Level 1 analysis, another was very similar in quality and proved ultimately better in predicting segment membership using covariates in Level 2. The two segments in the final solution were profiled for attributes that would support the development and acceptance of cyberbullying prevention programs among university students. Those segments were very different: one wanted substantial penalties against cyberbullies and was willing to devote time to a prevention program, while the other felt no need to be involved in prevention and wanted only minor penalties. Segmentation recognizes key differences in why patients and physicians prefer different health programs and treatments. A viable segmentation solution may lead to adapting prevention programs and treatments for each targeted segment and/or to educating and communicating to better inform those in each segment of the program/treatment benefits. Segment members' revealed preferences showing behavioral changes provide the ultimate basis for evaluating the segmentation benefits to the health organization.
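The two-level workflow can be illustrated with a short sketch (an assumption-laden stand-in, not the study's analysis): Level 1 clusters respondents on their DCE preference coefficients and checks internal quality, and Level 2 tests how well covariates recover segment membership so that people outside the sample can be scored. The data files, candidate segment counts and choice of Ward clustering are placeholders.

```python
# Two-level segmentation sketch on DCE preference coefficients.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

prefs = np.load("dce_preference_coefficients.npy")   # respondents x attribute utilities
covars = np.load("respondent_covariates.npy")        # demographics, behaviors, ...

# Level 1: compare candidate numbers of segments on internal quality.
for k in (2, 3, 4):
    labels = AgglomerativeClustering(n_clusters=k, linkage="ward").fit_predict(prefs)
    print(f"k={k}: silhouette {silhouette_score(prefs, labels):.3f}")

# Suppose the two-segment solution is retained (as in the cyberbullying study).
segments = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(prefs)

# Level 2: how well do covariates recover segment membership (typing tool)?
acc = cross_val_score(RandomForestClassifier(n_estimators=200), covars, segments, cv=5)
print("Covariate-based segment prediction accuracy:", acc.mean().round(2))
```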
NASA Technical Reports Server (NTRS)
Lewis, John F.; Cole, Harold; Cronin, Gary; Gazda, Daniel B.; Steele, John
2006-01-01
Following the Columbia accident, the Extravehicular Mobility Units (EMU) onboard ISS were unused for several months. Upon startup, the units experienced a failure in the coolant system. This failure resulted in the loss of Extravehicular Activity (EVA) capability from the US segment of ISS. With limited on-orbit evidence, a team of chemists, engineers, metallurgists, and microbiologists was able to identify the cause of the failure and develop recovery hardware and procedures. As a result of this work, the ISS crew regained the capability to perform EVAs from the US segment of the ISS.
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Udupa, Jayaram K.; Moonis, Gul; Schwartz, Eric; Balcer, Laura
2005-04-01
Based on Fuzzy Connectedness (FC) object delineation principles and algorithms, a hierarchical brain tissue segmentation technique has been developed for MR images. After MR image background intensity inhomogeneity correction and intensity standardization, three FC objects for cerebrospinal fluid (CSF), gray matter (GM), and white matter (WM) are generated via FC object delineation, and an intracranial (IC) mask is created via morphological operations. Then, the IC mask is decomposed into parenchymal (BP) and CSF masks, while the BP mask is separated into WM and GM masks. The WM mask is further divided into pure and dirty white matter masks (PWM and DWM). In Multiple Sclerosis (MS) studies, a severe white matter lesion (LS) mask is defined from the DWM mask. Based on the segmented brain tissue images, a histogram-based method has been developed to find disease-specific, image-based quantitative markers for characterizing the macromolecular manifestation of the two diseases. These same procedures have been applied to 65 MS (46 patients and 19 normal subjects) and 25 AD (15 patients and 10 normal subjects) data sets, each of which consists of FSE PD- and T2-weighted MR images. Histograms representing standardized PD and T2 intensity distributions and their numerical parameters provide an effective means for characterizing the two diseases. The procedures are systematic, nearly automated, robust, and the results are reproducible.
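The histogram-based characterization step lends itself to a simple sketch: once the brain has been partitioned into tissue masks, standardized PD and T2 intensity histograms are computed within each mask and summarized by a few numerical parameters. The file names, bin range and chosen summary statistics below are assumptions for illustration, not the study's settings.

```python
# Histogram-based markers from segmented brain tissue masks.
import numpy as np

pd_img = np.load("pd_standardized.npy")   # intensity-standardized PD volume
t2_img = np.load("t2_standardized.npy")   # intensity-standardized T2 volume
masks = {name: np.load(f"{name}_mask.npy").astype(bool)
         for name in ("csf", "gm", "pwm", "dwm", "lesion")}

bins = np.linspace(0, 4095, 256)  # assumed standardized intensity scale
for name, mask in masks.items():
    for label, img in (("PD", pd_img), ("T2", t2_img)):
        values = img[mask]
        hist, _ = np.histogram(values, bins=bins, density=True)
        print(f"{name} {label}: mode bin {bins[hist.argmax()]:.0f}, "
              f"mean {values.mean():.1f}, std {values.std():.1f}")
```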
A volumetric pulmonary CT segmentation method with applications in emphysema assessment
NASA Astrophysics Data System (ADS)
Silva, José Silvestre; Silva, Augusto; Santos, Beatriz S.
2006-03-01
A segmentation method is a mandatory pre-processing step in many automated or semi-automated analysis tasks such as region identification and densitometric analysis, or even for 3D visualization purposes. In this work we present a fully automated volumetric pulmonary segmentation algorithm based on intensity discrimination and morphologic procedures. Our method first identifies the trachea and primary bronchi, and then the pulmonary region is identified by applying a threshold and morphologic operations. When both lungs are in contact, additional procedures are performed to obtain two separated lung volumes. To evaluate the performance of the method, we compared contours extracted from 3D lung surfaces with reference contours, using several figures of merit. Results show that the worst case generally occurs at the middle sections of high-resolution CT exams, due to the presence of airway and vascular structures. Nevertheless, the average error is lower than the average error associated with radiologist inter-observer variability, which suggests that our method produces lung contours similar to those drawn by radiologists. The information created by our segmentation algorithm is used by an identification and representation method for pulmonary emphysema that also classifies emphysema according to its severity. Two clinically validated thresholds are applied to identify regions with severe emphysema and with highly severe emphysema. Based on this thresholding strategy, an application for volumetric emphysema assessment was developed, offering new display paradigms for the visualization of classification results. This framework is easily extendable to accommodate other classifiers, namely those related to texture-based segmentation, as is often the case with interstitial diseases.
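A hedged sketch of this threshold-and-morphology pipeline follows, with a simple two-level emphysema grading inside the lung mask. The Hounsfield thresholds (-400 HU for air-like voxels, -950 HU and -960 HU for the two severity levels) and the structuring-element sizes are placeholders, not the paper's values, and the trachea/bronchi identification and lung separation steps are omitted.

```python
# Volumetric lung segmentation by thresholding and morphology, with emphysema grading.
import numpy as np
from scipy import ndimage as ndi

ct = np.load("chest_ct_hu.npy")  # assumed CT volume already in Hounsfield units

# Air-like voxels; drop connected components that touch the volume border
# (the air surrounding the patient).
air = ct < -400
labels, _ = ndi.label(air)
edge_labels = np.unique(np.concatenate([labels[0], labels[-1], labels[:, 0],
                                        labels[:, -1], labels[:, :, 0],
                                        labels[:, :, -1]], axis=None))
lungs = air & ~np.isin(labels, edge_labels)

# Morphological cleanup: close small vascular gaps, remove tiny components.
lungs = ndi.binary_closing(lungs, structure=np.ones((3, 3, 3)))
lungs = ndi.binary_opening(lungs, structure=np.ones((3, 3, 3)))

# Two-level emphysema grading inside the lung mask.
severe = lungs & (ct < -950)
highly_severe = lungs & (ct < -960)
lung_volume = lungs.sum()
print("Severe emphysema fraction:       ", severe.sum() / lung_volume)
print("Highly severe emphysema fraction:", highly_severe.sum() / lung_volume)
```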
The virtual craniofacial patient: 3D jaw modeling and animation.
Enciso, Reyes; Memon, Ahmed; Fidaleo, Douglas A; Neumann, Ulrich; Mah, James
2003-01-01
In this paper, we present new developments in the area of 3D human jaw modeling and animation. CT (Computed Tomography) scans have traditionally been used to evaluate patients with dental implants, assess tumors, cysts, fractures and surgical procedures. More recently this data has been utilized to generate models. Researchers have reported semi-automatic techniques to segment and model the human jaw from CT images and manually segment the jaw from MRI images. Recently opto-electronic and ultrasonic-based systems (JMA from Zebris) have been developed to record mandibular position and movement. In this research project we introduce: (1) automatic patient-specific three-dimensional jaw modeling from CT data and (2) three-dimensional jaw motion simulation using jaw tracking data from the JMA system (Zebris).
Management of Long-Segment and Panurethral Stricture Disease.
Martins, Francisco E; Kulkarni, Sanjay B; Joshi, Pankaj; Warner, Jonathan; Martins, Natalia
2015-01-01
Long-segment urethral stricture or panurethral stricture disease, involving the different anatomic segments of anterior urethra, is a relatively less common lesion of the anterior urethra compared to bulbar stricture. However, it is a particularly difficult surgical challenge for the reconstructive urologist. The etiology varies according to age and geographic location, lichen sclerosus being the most prevalent in some regions of the globe. Other common and significant causes are previous endoscopic urethral manipulations (urethral catheterization, cystourethroscopy, and transurethral resection), previous urethral surgery, trauma, inflammation, and idiopathic. The iatrogenic causes are the most predominant in the Western or industrialized countries, and lichen sclerosus is the most common in India. Several surgical procedures and their modifications, including those performed in one or more stages and with the use of adjunct tissue transfer maneuvers, have been developed and used worldwide, with varying long-term success. A one-stage, minimally invasive technique approached through a single perineal incision has gained widespread popularity for its effectiveness and reproducibility. Nonetheless, for a successful result, the reconstructive urologist should be experienced and familiar with the different treatment modalities currently available and select the best procedure for the individual patient.
Mode calculations in unstable resonators with flowing saturable gain. 1: Hermite-Gaussian expansion.
Siegman, A E; Sziklas, E A
1974-12-01
We present a procedure for calculating the three-dimensional mode pattern, the output beam characteristics, and the power output of an oscillating high-power laser taking into account a nonuniform, transversely flowing, saturable gain medium; index inhomogeneities inside the laser resonator; and arbitrary mirror distortion and misalignment. The laser is divided into a number of axial segments. The saturated gain-and-index variation across each short segment is lumped into a complex gain profile across the midplane of that segment. The circulating optical wave within the resonator is propagated from midplane to midplane in free-space fashion and is multiplied by the lumped complex gain profile upon passing through each midplane. After each complete round trip of the optical wave inside the resonator, the saturated gain profiles are recalculated based upon the circulating fields in the cavity. The procedure, when applied to typical unstable-resonator flowing-gain lasers, shows convergence to a single distorted steady-state mode of oscillation. Typical near-field and far-field results are presented. Several empirical rules of thumb for finite truncated Hermite-Gaussian expansions, including an approximate sampling theorem, have been developed as part of the calculations.
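A minimal numerical sketch of this round-trip iteration (Fox-Li style) is shown below: the resonator is cut into axial segments, the field is advanced segment to segment with a free-space angular-spectrum step, and a lumped saturable-gain sheet is applied at each segment midplane until the circulating field settles. The grid, gain and aperture values are arbitrary placeholders, and the transverse gain flow, index inhomogeneities and unstable-resonator magnification of the paper are not modeled.

```python
# Fox-Li style round-trip iteration with lumped saturable-gain sheets.
import numpy as np

N, dx, wavelength = 256, 50e-6, 10.6e-6          # grid, sampling, CO2-like wavelength
g0, I_sat, n_segments, seg_len = 0.5, 1.0, 4, 0.25

x = (np.arange(N) - N / 2) * dx
X, Y = np.meshgrid(x, x)
aperture = (np.abs(X) < 4e-3) & (np.abs(Y) < 4e-3)  # hard resonator-edge aperture

fx = np.fft.fftfreq(N, d=dx)
FX, FY = np.meshgrid(fx, fx)
kz = 2 * np.pi * np.sqrt(np.maximum(0.0, wavelength**-2 - FX**2 - FY**2))

def propagate(field, distance):
    """Free-space angular-spectrum propagation over `distance`."""
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))

field = np.exp(-(X**2 + Y**2) / (2e-3) ** 2).astype(complex)  # initial guess
for round_trip in range(200):
    for _ in range(n_segments):                  # one pass through the gain medium
        field = propagate(field, seg_len)
        intensity = np.abs(field) ** 2
        field *= np.exp(g0 * seg_len / (1.0 + intensity / I_sat))  # saturable gain sheet
    field *= aperture                            # feedback through the resonator edge

power = (np.abs(field) ** 2).sum() * dx * dx
print(f"Steady-state intracavity power (arb. units): {power:.3e}")
```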
Criteria for Labelling Prosodic Aspects of English Speech.
ERIC Educational Resources Information Center
Bagshaw, Paul C.; Williams, Briony J.
A study reports a set of labelling criteria developed to label prosodic events in clear, continuous speech, and proposes a scheme whereby this information can be transcribed in a machine-readable format. Prosody was annotated in a syllabic domain synchronized with a phonemic segmentation. A procedural definition of…
A Course Offering in Jewish Law
ERIC Educational Resources Information Center
Dorff, Elliott N.; Rosett, Arthur L.
1976-01-01
Describes a course taught at the University of California, Los Angeles, School of Law. Content was divided into four segments: an introduction to the literature and a brief history of its development, court procedures, marriage and family law, and commercial law. Origins of the course and problems, especially that of comparing two legal systems…
NASA Technical Reports Server (NTRS)
Nalepka, R. F. (Principal Investigator); Malila, W. A.; Gleason, J. M.
1977-01-01
The author has identified the following significant results. LANDSAT data from seven 5 by 6 segments having crop type information were analyzed to determine the potential for spectral separation of spring wheat from other small grains as an alternative to the primary LACIE procedure for estimating spring wheat acreage. Within-segment, field-center classification accuracies for spring wheat vs. barley tended to be best in mid-July, when crop color changes were in progress. When corrections were made for differences in atmospheric haze, data from several segments could be aggregated, and results that approached within-segment accuracies were obtained for selected dates. LACIE field measurement spectral reflectance data provided information on both wheat development patterns and the influence of various agronomic factors on wheat reflectance, the most important being the availability of soil moisture. To investigate early season detection of winter wheat, the reflectance of developing wheat was simulated through reflectance modeling and analyzed along with field-measured reflectance from a Kansas site. The green component development of the wheat fields was analyzed as a function of date throughout the season. A selected threshold was not crossed by all fields until mid-April. These reflectance data were shown to be consistent with actual LANDSAT data.
Semi-automated brain tumor and edema segmentation using MRI.
Xie, Kai; Yang, Jie; Zhang, Z G; Zhu, Y M
2005-10-01
Manual segmentation of brain tumors from magnetic resonance images is a challenging and time-consuming task. A semi-automated method has been developed for brain tumor and edema segmentation that will provide objective, reproducible segmentations that are close to the manual results. Additionally, the method segments non-enhancing brain tumor and edema from healthy tissues in magnetic resonance images. In this study, a semi-automated method was developed for brain tumor and edema segmentation and volume measurement using magnetic resonance imaging (MRI). Some novel algorithms for tumor segmentation from MRI were integrated in this medical diagnosis system. We exploit a hybrid level set (HLS) segmentation method driven by region and boundary information simultaneously: region information serves as a propagation force, which is robust, and boundary information serves as a stopping functional, which is accurate. Ten patients with brain tumors of different size, shape and location were selected; a total of 246 axial tumor-containing slices obtained from these 10 patients were used to evaluate the effectiveness of the segmentation methods. The method was applied to 10 non-enhancing brain tumors and satisfactory results were achieved. Two quantitative measures for tumor segmentation quality estimation, namely the correspondence ratio (CR) and percent matching (PM), were computed. For the segmentation of brain tumor, the volume total PM varies from 79.12 to 93.25% with a mean of 85.67+/-4.38%, while the volume total CR varies from 0.74 to 0.91 with a mean of 0.84+/-0.07. For the segmentation of edema, the volume total PM varies from 72.86 to 87.29% with a mean of 79.54+/-4.18%, while the volume total CR varies from 0.69 to 0.85 with a mean of 0.79+/-0.08. The HLS segmentation method performs better than the classical level sets (LS) segmentation method in PM and CR. The results of this research may have potential applications, both as a staging procedure and as a method of evaluating tumor response during treatment; the method can be used as a clinical image analysis tool for doctors and radiologists.
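The two quality measures named above can be sketched directly from binary masks. The formulas used here (PM as the fraction of the reference covered by the segmentation, CR penalizing false positives at half weight) follow a common formulation in the tumor-segmentation literature and are assumptions, not copied from the paper itself.

```python
# Percent matching (PM) and correspondence ratio (CR) between a segmentation and a reference.
import numpy as np

def percent_matching(seg, ref):
    seg, ref = seg.astype(bool), ref.astype(bool)
    return 100.0 * np.logical_and(seg, ref).sum() / ref.sum()

def correspondence_ratio(seg, ref):
    seg, ref = seg.astype(bool), ref.astype(bool)
    tp = np.logical_and(seg, ref).sum()
    fp = np.logical_and(seg, ~ref).sum()
    return (tp - 0.5 * fp) / ref.sum()

# Example with per-patient masks stacked over the tumor-containing slices (file names are placeholders).
seg = np.load("tumor_segmentation.npy")
ref = np.load("tumor_manual_reference.npy")
print(f"PM = {percent_matching(seg, ref):.2f}%  CR = {correspondence_ratio(seg, ref):.2f}")
```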
Chen, Hung-chi; Kuo, Hsin-chih; Chung, Kuo-Piao; Chang, Sophia; Su, Syi; Yang, Ming-chin
2010-01-01
Medical tourism is a new trend in medical service. It is booming not only in Asian countries but also in European and South American countries. Worldwide competition of medical service is expected in the future, and niche service will be a "trademark" for the promotion of global medicine. Niche service also functions for market segmentation. Niche services are usually surgical procedures. A study was carried out to compare different strategies for developing medical tourism in Asian countries. The role of a niche service is evaluated in the initiation and further development of medical tourism for individual countries. From this study, a general classification was proposed in terms of treatment procedures. It can be used as a useful guideline for additional studies in medical tourism. Niche service plays the following roles in the development of medical tourism: (1) It attracts attention in the mass media and helps in subsequent promotion of business, (2) it exerts pressure on the hospital, which must improve the quality of health care provided in treating foreign patients, especially the niche services, and (3) it is a tool for setting up the business model. E-Da Hospital is an example for developing medical tourism in Taiwan. A side effect is that niche service brings additional foreign patients, which will contribute to the benefit of the hospital, but this leaves less room for treating domestic patients. A niche service is a means of introduction for entry into the market of medical tourism. How to create a successful story is important for the development of a niche service. When a good reputation has been established, the information provided on the Internet can last for a long time and can spread internationally to form a distinguished mark for further development. Niche services can be classified into 3 categories: (1) Low-risk procedures with large price differences and long stay after retirement; (2) high-risk procedures with less of a price difference, and (3) banned procedures that are not allowed legally in home countries of foreign patients, such as stem cell therapy. In establishing a niche service, a high-quality, nonmedical segment should be integrated as well.
Colony image acquisition and segmentation
NASA Astrophysics Data System (ADS)
Wang, W. X.
2007-12-01
For counting of both colonies and plaques, there is a large number of applications, including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Recently, many researchers and developers have made efforts to build this kind of system. Our investigation found that some existing systems have problems; the main problems are image acquisition and image segmentation. In order to acquire colony images with good quality, an illumination box was constructed: the box includes front lighting and back lighting, which can be selected by users based on the properties of the colony dishes. With the illumination box, lighting can be uniform and the colony dish can be placed in the same position every time, which makes image processing easier. The developed colony image segmentation algorithm consists of three sub-algorithms: (1) image classification; (2) image processing; and (3) colony delineation. The colony delineation algorithm mainly contains procedures based on grey-level similarity, boundary tracing, shape information and colony exclusion. In addition, a number of algorithms were developed for colony analysis. The system has been tested and gives satisfactory results.
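A hedged sketch of the delineation and exclusion stages follows: grey-level thresholding, connected-component labelling, and simple shape rules to exclude non-colony objects (debris, dish edge). The threshold choice and the size and circularity limits are illustrative assumptions.

```python
# Colony segmentation and counting by thresholding, labelling and shape filtering.
import numpy as np
from skimage import io, img_as_float
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

image = img_as_float(io.imread("colony_dish.png", as_gray=True))  # assumed input
binary = image > threshold_otsu(image)   # invert if colonies are darker than background

count = 0
for region in regionprops(label(binary)):
    circularity = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
    if 20 < region.area < 5000 and circularity > 0.5:   # exclude debris and the dish edge
        count += 1
print("Colony count:", count)
```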
Twelve automated thresholding methods for segmentation of PET images: a phantom study.
Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M
2012-06-21
Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
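Several of the classical automated thresholding algorithms of this kind are available in scikit-image and can be compared against the conventional 42%-of-maximum-uptake threshold with a short script. Ridler's iterative selection corresponds to scikit-image's `threshold_isodata`; Ramesh's method is not available in the library and is omitted; the input file is a placeholder.

```python
# Automated histogram-based thresholding of a PET volume vs. the 42%-of-max reference.
import numpy as np
from skimage.filters import (threshold_isodata, threshold_otsu, threshold_li,
                             threshold_yen, threshold_triangle, threshold_mean)

pet = np.load("pet_sphere_phantom.npy")  # assumed PET volume (activity values)

methods = {"Ridler (isodata)": threshold_isodata, "Otsu": threshold_otsu,
           "Li": threshold_li, "Yen": threshold_yen,
           "Triangle": threshold_triangle, "Mean": threshold_mean}

reference = 0.42 * pet.max()                      # classical 42% threshold
print(f"42% of max: threshold {reference:.2f}, volume {(pet > reference).sum()} voxels")
for name, method in methods.items():
    t = method(pet.ravel())                       # threshold chosen from the histogram only
    print(f"{name:18s} threshold {t:.2f}, volume {(pet > t).sum()} voxels")
```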
Twelve automated thresholding methods for segmentation of PET images: a phantom study
NASA Astrophysics Data System (ADS)
Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M.
2012-06-01
Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
Segmenting Student Markets with a Student Satisfaction and Priorities Survey.
ERIC Educational Resources Information Center
Borden, Victor M. H.
1995-01-01
A market segmentation analysis of 872 university students compared 2 hierarchical clustering procedures for deriving market segments: 1 using matching-type measures and an agglomerative clustering algorithm, and 1 using the chi-square based automatic interaction detection. Results and implications for planning, evaluating, and improving academic…
NASA Astrophysics Data System (ADS)
Wasserthal, Christian; Engel, Karin; Rink, Karsten; Brechmann, André.
We propose an automatic procedure for the correct segmentation of grey and white matter in MR data sets of the human brain. Our method exploits general anatomical knowledge for the initial segmentation and for the subsequent refinement of the estimation of the cortical grey matter. Our results are comparable to manual segmentations.
Wooten, H. Omar; Green, Olga; Li, Harold H.; Liu, Shi; Li, Xiaoling; Rodriguez, Vivian; Mutic, Sasa; Kashani, Rojano
2016-01-01
The aims of this study were to develop a method for automatic and immediate verification of treatment delivery after each treatment fraction in order to detect and correct errors, and to develop a comprehensive daily report which includes delivery verification results, daily image‐guided radiation therapy (IGRT) review, and information for weekly physics reviews. After systematically analyzing the requirements for treatment delivery verification and understanding the available information from a commercial MRI‐guided radiotherapy treatment machine, we designed a procedure to use 1) treatment plan files, 2) delivery log files, and 3) beam output information to verify the accuracy and completeness of each daily treatment delivery. The procedure verifies the correctness of delivered treatment plan parameters including beams, beam segments and, for each segment, the beam‐on time and MLC leaf positions. For each beam, composite primary fluence maps are calculated from the MLC leaf positions and segment beam‐on time. Error statistics are calculated on the fluence difference maps between the plan and the delivery. A daily treatment delivery report is designed to include all required information for IGRT and weekly physics reviews including the plan and treatment fraction information, daily beam output information, and the treatment delivery verification results. A computer program was developed to implement the proposed procedure of the automatic delivery verification and daily report generation for an MRI guided radiation therapy system. The program was clinically commissioned. Sensitivity was measured with simulated errors. The final version has been integrated into the commercial version of the treatment delivery system. The method automatically verifies the EBRT treatment deliveries and generates the daily treatment reports. Already in clinical use for over one year, it is useful to facilitate delivery error detection, and to expedite physician daily IGRT review and physicist weekly chart review. PACS number(s): 87.55.km PMID:27167269
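The per-beam verification step can be illustrated with a conceptual sketch: a composite primary fluence map is accumulated from each segment's MLC leaf openings weighted by its beam-on time, once from the plan and once from the delivery log, and error statistics are taken on the difference map. The data layout (per-segment beam-on time and left/right leaf positions) and file names are assumptions for illustration, not the vendor's file format.

```python
# Composite fluence maps from MLC segments and plan-vs-delivery error statistics.
import numpy as np

def fluence_map(segments, grid_mm=np.arange(-100, 100, 1.0)):
    """segments: iterable of (beam_on_time, left_positions_mm, right_positions_mm)."""
    n_pairs = len(segments[0][1])
    fluence = np.zeros((n_pairs, grid_mm.size))
    for beam_on, left, right in segments:
        for pair in range(n_pairs):
            open_bixels = (grid_mm >= left[pair]) & (grid_mm <= right[pair])
            fluence[pair, open_bixels] += beam_on
    return fluence

plan_segments = np.load("plan_segments.npy", allow_pickle=True)          # from plan file
log_segments = np.load("delivery_log_segments.npy", allow_pickle=True)   # from delivery log

diff = fluence_map(plan_segments) - fluence_map(log_segments)
print("Max |difference| (s):", np.abs(diff).max())
print("RMS difference  (s):", np.sqrt((diff ** 2).mean()))
```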
NASA Astrophysics Data System (ADS)
Li, Shouju; Shangguan, Zichang; Cao, Lijuan
A procedure based on FEM is proposed to simulate the interaction between the concrete segments of tunnel linings and the surrounding soil. The beam element named Beam 3 in the ANSYS software was used to model the segments. The ground loss induced by the shield tunneling and segment installation processes is simulated in the finite element analysis. The distributions of bending moment, axial force and shear force on the segments were computed by FEM. The computed internal forces on the segments are then used to design the reinforcing bars of the shield linings. Numerically simulated ground settlements agree with observed values.
Márquez Neila, Pablo; Baumela, Luis; González-Soriano, Juncal; Rodríguez, Jose-Rodrigo; DeFelipe, Javier; Merchán-Pérez, Ángel
2016-04-01
Recent electron microscopy (EM) imaging techniques permit the automatic acquisition of a large number of serial sections from brain samples. Manual segmentation of these images is tedious, time-consuming and requires a high degree of user expertise. Therefore, there is considerable interest in developing automatic segmentation methods. However, currently available methods are computationally demanding in terms of computer time and memory usage, and to work properly many of them require image stacks to be isotropic, that is, voxels must have the same size in the X, Y and Z axes. We present a method that works with anisotropic voxels and that is computationally efficient allowing the segmentation of large image stacks. Our approach involves anisotropy-aware regularization via conditional random field inference and surface smoothing techniques to improve the segmentation and visualization. We have focused on the segmentation of mitochondria and synaptic junctions in EM stacks from the cerebral cortex, and have compared the results to those obtained by other methods. Our method is faster than other methods with similar segmentation results. Our image regularization procedure introduces high-level knowledge about the structure of labels. We have also reduced memory requirements with the introduction of energy optimization in overlapping partitions, which permits the regularization of very large image stacks. Finally, the surface smoothing step improves the appearance of three-dimensional renderings of the segmented volumes.
VirSSPA- a virtual reality tool for surgical planning workflow.
Suárez, C; Acha, B; Serrano, C; Parra, C; Gómez, T
2009-03-01
A virtual reality tool, called VirSSPA, was developed to optimize the planning of surgical processes. Segmentation algorithms were implemented for Computed Tomography (CT) images: a region growing procedure was used for soft tissues and a thresholding algorithm was used to segment bones. The algorithms operate semiautomatically, since they only need seed selection with the mouse on each tissue to be segmented by the user. The novelty of the paper is the adaptation of an enhancement method based on histogram thresholding applied to CT images for surgical planning, which simplifies subsequent segmentation. A substantial improvement of the virtual reality tool VirSSPA was obtained with these algorithms. VirSSPA was used to optimize surgical planning, to decrease the time spent on surgical planning and to improve operative results. The success rate increases because surgeons are able to see the exact extent of the patient's ailment. This tool can decrease operating room time, thus resulting in reduced costs. Virtual simulation was effective for optimizing surgical planning, which could, consequently, result in improved outcomes with reduced costs.
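The two segmentation steps mentioned above can be sketched in a few lines: a Hounsfield-unit threshold for bone and a seeded region-growing pass for a soft-tissue structure. The threshold value, intensity tolerance and seed coordinates are placeholders; VirSSPA's actual parameters are not given in the abstract.

```python
# Thresholding for bone and seeded region growing for soft tissue on a CT volume.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import flood

ct = np.load("ct_volume_hu.npy")           # assumed CT volume in Hounsfield units

# Bone: simple thresholding followed by small-object cleanup.
bone = ct > 300
bone = ndi.binary_opening(bone, structure=np.ones((3, 3, 3)))

# Soft tissue: region growing from a user-selected seed voxel (mouse click in the
# tool); here the seed and intensity tolerance are hard-coded placeholders.
seed = (60, 256, 256)
soft_tissue = flood(ct, seed, tolerance=60)
```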
Segmenting hospitals for improved management strategy.
Malhotra, N K
1989-09-01
The author presents a conceptual framework for the a priori and clustering-based approaches to segmentation and evaluates them in the context of segmenting institutional health care markets. An empirical study is reported in which the hospital market is segmented on three state-of-being variables. The segmentation approach also takes into account important organizational decision-making variables. The sophisticated Thurstone Case V procedure is employed. Several marketing implications for hospitals, other health care organizations, hospital suppliers, and donor publics are identified.
Automated MRI Segmentation for Individualized Modeling of Current Flow in the Human Head
Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.
2013-01-01
Objective: High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography (HD-EEG) require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images (MRI) requires labor-intensive manual segmentation, even when leveraging available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach: A fully automated segmentation technique based on Statistical Parametric Mapping 8 (SPM8), including an improved tissue probability map (TPM) and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on 4 healthy subjects and 7 stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results: The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view (FOV) extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance: Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials. PMID:24099977
Automated MRI segmentation for individualized modeling of current flow in the human head
NASA Astrophysics Data System (ADS)
Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.
2013-12-01
Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance. Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.
Interactive experimenters' planning procedures and mission control
NASA Technical Reports Server (NTRS)
Desjardins, R. L.
1973-01-01
The computerized mission control and planning system routinely generates a 24-hour schedule in one hour of operator time by incorporating the time dimension into experiment planning procedures. Planning is validated interactively as it is generated, segment by segment, within the frame of specific event times. The planner simply points a light pen at the time mark of interest on the time line to enter specific event times into the schedule.
Laboratory Preparation in the Ocular Therapy Curriculum.
ERIC Educational Resources Information Center
Cummings, Roger W.
1986-01-01
Aspects of laboratory preparation necessary for undergraduate or graduate optometric training in the use of therapeutic drugs are discussed, including glaucoma therapy, anterior segment techniques, posterior segment, and systemic procedures. (MSE)
Uribe, Juan S; Myhre, Sue Lynn; Youssef, Jim A
2016-04-01
A literature review. The purpose of this study was to review lumbar segmental and regional alignment changes following treatment with a variety of minimally invasive surgery (MIS) interbody fusion procedures for short-segment, degenerative conditions. An increasing number of lumbar fusions are being performed with minimally invasive exposures, despite a perception that minimally invasive lumbar interbody fusion procedures are unable to affect segmental and regional lordosis. Through a MEDLINE and Google Scholar search, a total of 23 articles were identified that reported alignment following minimally invasive lumbar fusion for degenerative (nondeformity) lumbar spinal conditions to examine aggregate changes in postoperative alignment. Of the 23 studies identified, 28 study cohorts were included in the analysis. Procedural cohorts included MIS ALIF (two), extreme lateral interbody fusion (XLIF) (16), and MIS posterior/transforaminal lumbar interbody fusion (P/TLIF) (11). Across 19 study cohorts and 720 patients, the weighted average of lumbar lordosis preoperatively for all procedures was 43.5° (range 28.4°-52.5°) and increased 3.4° (9%) (range -2° to 7.4°) postoperatively (P < 0.001). Segmental lordosis increased, on average, by 4° from a weighted average of 8.3° preoperatively (range -0.8° to 15.8°) to 11.2° at postoperative time points (range -0.2° to 22.8°) (P < 0.001) in 1182 patients from 24 study cohorts. Simple linear regression revealed a significant relationship between preoperative lumbar lordosis and change in lumbar lordosis (r = 0.413; P = 0.003), wherein lower preoperative lumbar lordosis predicted a greater increase in postoperative lumbar lordosis. Significant gains in both weighted average lumbar lordosis and segmental lordosis were seen following MIS interbody fusion. None of the segmental lordosis cohorts and only two of the 19 lumbar lordosis cohorts showed decreases in lordosis postoperatively. These results suggest that MIS approaches are able to impact regional and local segmental alignment and that preoperative patient factors can impact the extent of correction gained (preserving vs. restoring alignment). Level of Evidence: 4.
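As an illustrative aside, the two aggregate computations described above (a sample-size-weighted average of lordosis and a simple linear regression of the postoperative change on the preoperative value) can be sketched in a few lines; the cohort values below are hypothetical placeholders, not data from the review.

import numpy as np
from scipy.stats import linregress

n_patients = np.array([42, 18, 65, 30])            # cohort sizes (hypothetical)
preop_ll   = np.array([38.0, 45.5, 41.2, 50.1])    # preoperative lumbar lordosis, degrees
postop_ll  = np.array([42.5, 46.0, 45.8, 51.0])    # postoperative lumbar lordosis, degrees

# Sample-size-weighted averages, as used for the pooled pre/post estimates
weighted_pre  = np.average(preop_ll, weights=n_patients)
weighted_post = np.average(postop_ll, weights=n_patients)
print(f"weighted lumbar lordosis: {weighted_pre:.1f} -> {weighted_post:.1f} degrees")

# Simple linear regression: does lower preoperative lordosis predict a larger gain?
fit = linregress(preop_ll, postop_ll - preop_ll)
print(f"r = {fit.rvalue:.3f}, p = {fit.pvalue:.3f}, slope = {fit.slope:.3f}")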
A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images.
Vázquez, David; Bernal, Jorge; Sánchez, F Javier; Fernández-Esparrach, Gloria; López, Antonio M; Romero, Adriana; Drozdzal, Michal; Courville, Aaron
2017-01-01
Colorectal cancer (CRC) is the third leading cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search of polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset and taking advantage of advances in the semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study to show that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.
Design of multi-body Lambert type orbits with specified departure and arrival positions
NASA Astrophysics Data System (ADS)
Ishii, Nobuaki; Kawaguchi, Jun'ichiro; Matsuo, Hiroki
1991-10-01
A new procedure for designing a multi-body Lambert type orbit comprising a multiple swingby process is developed, aiming at relieving a numerical difficulty inherent to a highly nonlinear swingby mechanism. The proposed algorithm, Recursive Multi-Step Linearization, first divides a whole orbit into several trajectory segments. Then, making maximum use of piecewise transition matrices, the segmented orbit is repeatedly updated until an approximate orbit, initially based on a patched-conics method, eventually converges. In application to the four-body Earth-Moon system with the Sun's gravitation, one of the double lunar swingby orbits including 12 lunar swingbys is successfully designed without any velocity mismatch.
Sample selection in foreign similarity regions for multicrop experiments
NASA Technical Reports Server (NTRS)
Malin, J. T. (Principal Investigator)
1981-01-01
The selection of sample segments in the U.S. foreign similarity regions for development of proportion estimation procedures and error modeling for Argentina, Australia, Brazil, and USSR in AgRISTARS is described. Each sample was chosen to be similar in crop mix to the corresponding indicator region sample. Data sets, methods of selection, and resulting samples are discussed.
Arabic handwritten: pre-processing and segmentation
NASA Astrophysics Data System (ADS)
Maliki, Makki; Jassim, Sabah; Al-Jawad, Naseer; Sellahewa, Harin
2012-06-01
This paper is concerned with pre-processing and segmentation tasks that influence the performance of Optical Character Recognition (OCR) systems and handwritten/printed text recognition. In Arabic, these tasks are adversely affected by the fact that many words are made up of sub-words, that many sub-words have one or more associated diacritics that are not connected to the sub-word's body, and that there can be multiple instances of sub-words overlapping. To overcome these problems we investigate and develop segmentation techniques that first segment a document into sub-words, link the diacritics with their sub-words, and remove possible overlaps between words and sub-words. We also investigate two approaches for pre-processing tasks to estimate sub-word baselines and to determine parameters that yield appropriate slope correction and slant removal. We investigate the use of linear regression on sub-word pixels to determine their central x and y coordinates, as well as their high-density part. We also develop a new incremental rotation procedure, performed on sub-words, that determines the best rotation angle needed to realign baselines. We demonstrate the benefits of these proposals by conducting extensive experiments on publicly available databases and in-house created databases. These algorithms help improve character segmentation accuracy by transforming handwritten Arabic text into a form that can benefit from analysis of printed text.
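As a hedged sketch (an assumption-laden illustration, not the authors' implementation), the baseline-related steps described above can be approximated by fitting a least-squares line to a sub-word's foreground pixels and rotating the image so that this line becomes horizontal; image names and parameters are placeholders.

import numpy as np
from scipy import ndimage

def estimate_baseline(binary_subword):
    # Fit y = a*x + b to the ink pixels of a binary sub-word image
    ys, xs = np.nonzero(binary_subword)
    a, b = np.polyfit(xs, ys, deg=1)
    return a, b

def deskew_subword(gray_subword, binary_subword):
    # Rotate the sub-word so the regression line (approximate baseline) is horizontal
    slope, _ = estimate_baseline(binary_subword)
    angle = np.degrees(np.arctan(slope))
    return ndimage.rotate(gray_subword, angle, reshape=True, order=1, mode='nearest')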
Computer vision based nacre thickness measurement of Tahitian pearls
NASA Astrophysics Data System (ADS)
Loesdau, Martin; Chabrier, Sébastien; Gabillon, Alban
2017-03-01
The Tahitian pearl is the most valuable export product of French Polynesia, contributing over 61 million Euros and more than 50% of the total export income. To maintain its excellent reputation on the international market, an obligatory quality control for every pearl deemed for exportation has been established by the local government. One of the controlled quality parameters is the pearl's nacre thickness. The evaluation is currently done manually by experts who visually analyze X-ray images of the pearls. In this article, a computer vision based approach to automate this procedure is presented. Even though computer vision based approaches for pearl nacre thickness measurement exist in the literature, the very specific features of the Tahitian pearl, namely the large shape variety and the occurrence of cavities, have so far not been considered. The presented work closes this gap. Our method consists of segmenting the pearl from X-ray images with a model-based approach, segmenting the pearl's nucleus with a purpose-developed heuristic circle detection and segmenting possible cavities with region growing. From the obtained boundaries, the 2-dimensional nacre thickness profile can be calculated. A certainty measure accounting for imaging and segmentation imprecision is included in the procedure. The proposed algorithms are tested on 298 manually evaluated Tahitian pearls, showing that it is generally possible to automatically evaluate the nacre thickness of Tahitian pearls with computer vision. Furthermore, the results show that the automatic measurement is more precise and faster than the manual one.
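A minimal sketch, under stated assumptions, of how a 2D nacre thickness profile can be derived once the pearl and nucleus masks are available: rays are cast from the nucleus centre and the thickness is the gap between the nucleus boundary and the outer pearl boundary along each direction. Mask names, the centre estimate and the sampling parameters are illustrative, not the paper's implementation.

import numpy as np

def radial_thickness_profile(pearl_mask, nucleus_mask, center, n_angles=360, r_max=400):
    # Return nacre thickness (in pixels) for n_angles directions around center = (row, col)
    cy, cx = center
    radii = np.arange(1, r_max)
    thickness = np.zeros(n_angles)
    for i, theta in enumerate(np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)):
        rows = np.clip((cy + radii * np.sin(theta)).astype(int), 0, pearl_mask.shape[0] - 1)
        cols = np.clip((cx + radii * np.cos(theta)).astype(int), 0, pearl_mask.shape[1] - 1)
        pearl_hits = radii[pearl_mask[rows, cols] > 0]
        nucleus_hits = radii[nucleus_mask[rows, cols] > 0]
        if pearl_hits.size and nucleus_hits.size:
            thickness[i] = pearl_hits.max() - nucleus_hits.max()  # outer radius minus nucleus radius
    return thickness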
Automated identification of the lung contours in positron emission tomography
NASA Astrophysics Data System (ADS)
Nery, F.; Silvestre Silva, J.; Ferreira, N. C.; Caramelo, F. J.; Faustino, R.
2013-03-01
Positron Emission Tomography (PET) is a nuclear medicine imaging technique that permits three-dimensional analysis of physiological processes in vivo. One of the areas where PET has demonstrated its advantages is the staging of lung cancer, where it offers better sensitivity and specificity than other techniques such as CT. On the other hand, accurate segmentation, an important procedure for Computer Aided Diagnostics (CAD) and automated image analysis, is a challenging task given the low spatial resolution and the high noise that are intrinsic characteristics of PET images. This work presents an algorithm for the segmentation of lungs in PET images, to be used in CAD and group analysis in a large patient database. The lung boundaries are automatically extracted from a PET volume through the application of a marker-driven watershed segmentation procedure which is robust to noise. In order to test the effectiveness of the proposed method, we compared the segmentation results in several slices using our approach with the results obtained from manual delineation. The manual delineation was performed by nuclear medicine physicians using a software routine that we developed specifically for this task. To quantify the similarity between the contours obtained from the two methods, we used figures of merit based on region as well as on contour definitions. Results show that the performance of the algorithm was similar to the performance of the human physicians. Additionally, we found that the algorithm-physician agreement is statistically comparable to the inter-physician agreement.
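The core of a marker-driven watershed of the kind described above can be sketched as follows (an illustration under assumed thresholds and filter sizes, not the authors' exact pipeline).

import numpy as np
from scipy import ndimage
from skimage.filters import sobel
from skimage.segmentation import watershed

def watershed_segment_slice(pet_slice):
    # Smooth to suppress PET noise, derive background/foreground markers, flood the gradient
    smoothed = ndimage.gaussian_filter(pet_slice.astype(float), sigma=2)
    low, high = np.percentile(smoothed, [20, 80])
    markers = np.zeros(smoothed.shape, dtype=np.int32)
    markers[smoothed < low] = 1     # confident background marker
    markers[smoothed > high] = 2    # confident foreground (high-uptake) marker
    gradient = sobel(smoothed)
    return watershed(gradient, markers)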
Line fiducial material and thickness considerations for ultrasound calibration
NASA Astrophysics Data System (ADS)
Ameri, Golafsoun; McLeod, A. J.; Baxter, John S. H.; Chen, Elvis C. S.; Peters, Terry M.
2015-03-01
Ultrasound calibration is a necessary procedure in many image-guided interventions, relating the position of tools and anatomical structures in the ultrasound image to a common coordinate system. This is a necessary component of augmented reality environments in image-guided interventions, as it allows for a 3D visualization in which other surgical tools outside the imaging plane can be found. Accuracy of ultrasound calibration fundamentally affects the total accuracy of this interventional guidance system. Many ultrasound calibration procedures have been proposed based on a variety of phantom materials and geometries. These differences lead to differences in the representation of the phantom in the ultrasound image, which subsequently affect the ability to accurately and automatically segment the phantom. For example, taut wires are commonly used as line fiducials in ultrasound calibration. However, at large depths or oblique angles, the fiducials appear blurred and smeared in ultrasound images, making it hard to localize their cross-section with the ultrasound image plane. Intuitively, larger diameter phantoms with lower echogenicity are more accurately segmented in ultrasound images in comparison to highly reflective thin phantoms. In this work, an evaluation of a variety of calibration phantoms with different geometrical and material properties was performed for the phantomless calibration procedure. The phantoms used in this study include braided wire, plastic straws, and polyvinyl alcohol cryogel tubes with different diameters. Conventional B-mode and synthetic aperture images of the phantoms at different positions were obtained. The phantoms were automatically segmented from the ultrasound images using an ellipse fitting algorithm, the centroid of which is subsequently used as a fiducial for calibration. Calibration accuracy was evaluated for these procedures based on the leave-one-out target registration error. It was shown that larger diameter phantoms with lower echogenicity are more accurately segmented in comparison to highly reflective thin phantoms. This improvement in segmentation accuracy leads to a lower fiducial localization error, which ultimately results in a low target registration error. This would have a profound effect on calibration procedures and on the feasibility of different calibration approaches in the context of image-guided interventions.
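A short sketch (assumed pipeline, not the authors' code) of the fiducial localization step described above: an ellipse is fitted to the segmented cross-section of the line fiducial and its centre is used as the calibration fiducial. The binary mask name is hypothetical.

import numpy as np
from skimage.measure import EllipseModel

def fiducial_centroid(fiducial_mask):
    # Fit an ellipse to the mask's foreground pixels and return its centre (x, y)
    ys, xs = np.nonzero(fiducial_mask)
    points = np.column_stack([xs, ys]).astype(float)
    model = EllipseModel()
    if not model.estimate(points):
        return xs.mean(), ys.mean()   # fall back to the plain centroid if the fit fails
    xc, yc, a, b, theta = model.params
    return xc, yc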
NASA Astrophysics Data System (ADS)
Klein, E.; Masson, F.; Duputel, Z.; Yavasoglu, H.; Agram, P. S.
2016-12-01
Over the last two decades, the densification of GPS networks and the development of new radar satellites have offered an unprecedented opportunity to study crustal deformation due to faulting. Yet, submarine strike-slip fault segments remain a major issue, especially when the landscape appears unfavorable to the use of SAR measurements. This is the case for the North Anatolian fault segments located in the Main Marmara Sea, which have remained unbroken since the Mw7.4 Izmit earthquake of 1999, which ended an eastward-migrating seismic sequence of Mw > 7 earthquakes. Because these segments are located directly offshore Istanbul, evaluation of the seismic hazard is crucial. However, a strong controversy remains over whether these segments are accumulating strain and are likely to experience a major earthquake, or are creeping; this controversy results both from the simplicity of current geodetic models and from the scarcity of geodetic data. We show that 2D infinite fault models cannot account for the complexity of the Marmara fault segments, but current geodetic data in the western region of Istanbul are also insufficient to invert for the coupling using a 3D geometry of the fault. Therefore, we implement a global optimization procedure aimed at identifying the most favorable distribution of GPS stations to explore the strain accumulation. We present here the results of this procedure, which determines both the optimal number and the locations of the new stations. We show that a denser terrestrial survey network can indeed locally improve the resolution on the shallower part of the fault, even more efficiently with permanent stations. However, data closer to the fault, only obtainable through submarine measurements, remain necessary to properly constrain the fault behavior and its potential along-strike coupling variations.
Automated detection of videotaped neonatal seizures based on motion segmentation methods.
Karayiannis, Nicolaos B; Tao, Guozhi; Frost, James D; Wise, Merrill S; Hrachovy, Richard A; Mizrahi, Eli M
2006-07-01
This study was aimed at the development of a seizure detection system by training neural networks using quantitative motion information extracted by motion segmentation methods from short video recordings of infants monitored for seizures. The motion of the infants' body parts was quantified by temporal motion strength signals extracted from video recordings by motion segmentation methods based on optical flow computation. The area of each frame occupied by the infants' moving body parts was segmented by direct thresholding, by clustering of the pixel velocities, and by clustering the motion parameters obtained by fitting an affine model to the pixel velocities. The computational tools and procedures developed for automated seizure detection were tested and evaluated on 240 short video segments selected and labeled by physicians from a set of video recordings of 54 patients exhibiting myoclonic seizures (80 segments), focal clonic seizures (80 segments), and random infant movements (80 segments). The experimental study described in this paper provided the basis for selecting the most effective strategy for training neural networks to detect neonatal seizures as well as the decision scheme used for interpreting the responses of the trained neural networks. Depending on the decision scheme used for interpreting the responses of the trained neural networks, the best neural networks exhibited sensitivity above 90% or specificity above 90%. The best among the motion segmentation methods developed in this study produced quantitative features that constitute a reliable basis for detecting myoclonic and focal clonic neonatal seizures. The performance targets of this phase of the project may be achieved by combining the quantitative features described in this paper with those obtained by analyzing motion trajectory signals produced by motion tracking methods. A video system based upon automated analysis potentially offers a number of advantages. Infants who are at risk for seizures could be monitored continuously using relatively inexpensive and non-invasive video techniques that supplement direct observation by nursery personnel. This would represent a major advance in seizure surveillance and offers the possibility for earlier identification of potential neurological problems and subsequent intervention.
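To illustrate the kind of temporal motion strength signal described above, the following sketch computes one value per frame pair from dense optical flow (OpenCV's Farneback method); it is an assumption-based illustration rather than the authors' implementation, and the video path is a placeholder.

import cv2
import numpy as np

def motion_strength_signal(video_path):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)   # per-pixel speed
        signal.append(magnitude.mean())            # mean motion strength for this frame pair
        prev_gray = gray
    cap.release()
    return np.array(signal)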
ERIC Educational Resources Information Center
Lay, Robert S.
The advantages and disadvantages of new software for market segmentation analysis are discussed, and the application of this new chi-square-based procedure (CHAID) is illustrated. A comparison is presented of an earlier binary segmentation technique (THAID) and a multiple discriminant analysis. It is suggested that CHAID is superior to earlier…
Infant Word Segmentation Revisited: Edge Alignment Facilitates Target Extraction
ERIC Educational Resources Information Center
Seidl, Amanda; Johnson, Elizabeth K.
2006-01-01
In a landmark study, Jusczyk and Aslin (1995) demonstrated that English-learning infants are able to segment words from continuous speech at 7.5 months of age. In the current study, we explored the possibility that infants segment words from the edges of utterances more readily than from the middle of utterances. The same procedure was used as in…
Lu, Zipeng; Yin, Jie; Wei, Jishu; Dai, Cuncai; Wu, Junli; Gao, Wentao; Xu, Qing; Dai, Hao; Li, Qiang; Guo, Feng; Chen, Jianmin; Xi, Chunhua; Wu, Pengfei; Zhang, Kai; Jiang, Kuirong; Miao, Yi
2016-11-01
Middle-segment preserving pancreatectomy (MPP) is a novel procedure for treating multifocal lesions of the pancreas while preserving pancreatic function. However, long-term pancreatic function after this procedure remains unclear. The aims of the current study are to investigate short- and long-term outcomes, especially long-term pancreatic endocrine function, after MPP. From September 2011 to December 2015, 7 patients underwent MPP in our institution, and 5 cases with long-term outcomes were further analyzed in a retrospective manner. The percentage of tissue preservation was calculated using computed tomography volumetry. Serum insulin and C-peptide levels after oral glucose challenge were evaluated in 5 patients. Beta-cell secretory function, including modified homeostasis model assessment of beta-cell function (HOMA2-beta), area under the curve (AUC) for C-peptide, and C-peptide index, was evaluated and compared with that after pancreaticoduodenectomy (PD) and total pancreatectomy. Exocrine function was assessed based on questionnaires. Our case series included 3 women and 2 men, with a median age of 50 (37-81) years. Four patients underwent pylorus-preserving PD together with distal pancreatectomy (DP), including 1 with the spleen preserved. The remaining patient underwent the Beger procedure and spleen-preserving DP. Median operation time and estimated intraoperative blood loss were 330 (250-615) min and 800 (400-5500) mL, respectively. Histological examination revealed 3 cases of metastatic lesions to the pancreas, 1 case of chronic pancreatitis, and 1 neuroendocrine tumor. Major postoperative complications included 3 cases of delayed gastric emptying and 2 cases of postoperative pancreatic fistula. Imaging studies showed that segments representing 18.2% to 39.5% of the pancreas with good blood supply had been preserved. With a median of 35.0 months of follow-up of pancreatic function, only 1 of the 4 preoperatively euglycemic patients developed new-onset diabetes mellitus. Beta-cell function parameters in this group of patients were quite comparable to those after the Whipple procedure, and seemed better than those after total pancreatectomy. No symptoms of hypoglycemia were identified in any patient, although half of the patients reported symptoms of exocrine insufficiency. In conclusion, MPP is a feasible and effective procedure for multicentric pancreatic lesions that spare the middle segment, and patients exhibit satisfactory endocrine function after surgery.
Efficiency Benefits Using the Terminal Area Precision Scheduling and Spacing System
NASA Technical Reports Server (NTRS)
Thipphavong, Jane; Swenson, Harry N.; Lin, Paul; Seo, Anthony Y.; Bagasol, Leonard N.
2011-01-01
NASA has developed a capability for terminal area precision scheduling and spacing (TAPSS) to increase the use of fuel-efficient arrival procedures during periods of traffic congestion at a high-density airport. Sustained use of fuel-efficient procedures throughout the entire arrival phase of flight reduces overall fuel burn, greenhouse gas emissions and noise pollution. The TAPSS system is a 4D trajectory-based strategic planning and control tool that computes schedules and sequences for arrivals to facilitate optimal profile descents. This paper focuses on quantifying the efficiency benefits associated with using the TAPSS system, measured by reduction of level segments during aircraft descent and flight distance and time savings. The TAPSS system was tested in a series of human-in-the-loop simulations and compared to current procedures. Compared to the current use of the TMA system, simulation results indicate a reduction of total level segment distance by 50% and flight distance and time savings by 7% in the arrival portion of flight (200 nm from the airport). The TAPSS system resulted in aircraft maintaining continuous descent operations longer and with more precision, both achieved under heavy traffic demand levels.
Hybrid region merging method for segmentation of high-resolution remote sensing images
NASA Astrophysics Data System (ADS)
Zhang, Xueliang; Xiao, Pengfeng; Feng, Xuezhi; Wang, Jiangeng; Wang, Zuo
2014-12-01
Image segmentation remains a challenging problem for object-based image analysis. In this paper, a hybrid region merging (HRM) method is proposed to segment high-resolution remote sensing images. HRM integrates the advantages of global-oriented and local-oriented region merging strategies into a unified framework. The globally most-similar pair of regions is used to determine the starting point of a growing region, which provides an elegant way to avoid the problem of starting point assignment and to enhance the optimization ability for local-oriented region merging. During the region growing procedure, the merging iterations are constrained within the local vicinity, so that the segmentation is accelerated and can reflect the local context, as compared with the global-oriented method. A set of high-resolution remote sensing images is used to test the effectiveness of the HRM method, and three region-based remote sensing image segmentation methods are adopted for comparison, including the hierarchical stepwise optimization (HSWO) method, the local-mutual best region merging (LMM) method, and the multiresolution segmentation (MRS) method embedded in eCognition Developer software. Both the supervised evaluation and visual assessment show that HRM performs better than HSWO and LMM by combining both their advantages. The segmentation results of HRM and MRS are visually comparable, but HRM can describe objects as single regions better than MRS, and the supervised and unsupervised evaluation results further prove the superiority of HRM.
Software and Algorithms for Biomedical Image Data Processing and Visualization
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Lambert, James; Lam, Raymond
2004-01-01
A new software equipped with novel image processing algorithms and graphical-user-interface (GUI) tools has been designed for automated analysis and processing of large amounts of biomedical image data. The software, called PlaqTrak, has been specifically used for analysis of plaque on teeth of patients. New algorithms have been developed and implemented to segment teeth of interest from surrounding gum, and a real-time image-based morphing procedure is used to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The PlaqTrak system integrates these components into a single software suite with an easy-to-use GUI (see Figure 1) that allows users to do an end-to-end run of a patient's record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image. The automated and accurate processing of the captured images to segment each tooth [see Figure 2(a)] and then detect plaque on a tooth-by-tooth basis is a critical component of the PlaqTrak system to do clinical trials and analysis with minimal human intervention. These features offer distinct advantages over other competing systems that analyze groups of teeth or synthetic teeth. PlaqTrak divides each segmented tooth into eight regions using an advanced graphics morphing procedure [see results on a chipped tooth in Figure 2(b)], and a pattern recognition classifier is then used to locate plaque [red regions in Figure 2(d)] and enamel regions. The morphing allows analysis within regions of teeth, thereby facilitating detailed statistical analysis such as the amount of plaque present on the biting surfaces on teeth. This software system is applicable to a host of biomedical applications, such as cell analysis and life detection, or robotic applications, such as product inspection or assembly of parts in space and industry.
Lee, Major K; Gao, Feng; Strasberg, Steven M
2016-08-01
Liver resections have classically been distinguished as "minor" or "major," based on number of segments removed. This is flawed because the number of segments resected alone does not convey the complexity of a resection. We recently developed a 3-tiered classification for the complexity of liver resections based on utility weighting by experts. This study aims to complete the earlier classification and to illustrate its application. Two surveys were administered to expert liver surgeons. Experts were asked to rate the difficulty of various open liver resections on a scale of 1 to 10. Statistical methods were then used to develop a complexity score for each procedure. Sixty-six of 135 (48.9%) surgeons responded to the earlier survey, and 66 of 122 (54.1%) responded to the current survey. In all, 19 procedures were rated. The lowest mean score of 1.36 (indicating least difficult) was given to peripheral wedge resection. Right hepatectomy with IVC reconstruction was deemed most difficult, with a score of 9.35. Complexity scores were similar for 9 procedures present in both surveys. Caudate resection, hepaticojejunostomy, and vascular reconstruction all increased the complexity of standard resections significantly. These data permit quantitative assessment of the difficulty of a variety of liver resections. The complexity scores generated allow for separation of liver resections into 3 categories of complexity (low complexity, medium complexity, and high complexity) on a quantitative basis. This provides a more accurate representation of the complexity of procedures in comparative studies. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Accurate determination of segmented X-ray detector geometry
Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; White, Thomas A.; Chapman, Henry N.; Barty, Anton
2015-01-01
Recent advances in X-ray detector technology have resulted in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. Accurate determination of the location of detector elements relative to the beam-sample interaction point is critical for many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations predicted from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. We show that the refined detector geometry greatly improves the results of experiments. PMID:26561117
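The refinement idea described above can be illustrated for a single module as a small least-squares problem: find the in-plane shift and rotation that minimise the distance between observed Bragg peaks and the positions predicted from indexing. Real geometry refinement also treats detector distance and all modules jointly; this is a hedged sketch, not the authors' software.

import numpy as np
from scipy.optimize import least_squares

def refine_module(observed_xy, predicted_xy):
    # observed_xy, predicted_xy: (N, 2) arrays of peak positions on one module
    def residuals(params):
        dx, dy, angle = params
        c, s = np.cos(angle), np.sin(angle)
        rot = np.array([[c, -s], [s, c]])
        corrected = predicted_xy @ rot.T + np.array([dx, dy])
        return (corrected - observed_xy).ravel()
    fit = least_squares(residuals, x0=np.zeros(3))
    return fit.x   # (dx, dy, angle) correction for this module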
Automatic extraction of numeric strings in unconstrained handwritten document images
NASA Astrophysics Data System (ADS)
Haji, M. Mehdi; Bui, Tien D.; Suen, Ching Y.
2011-01-01
Numeric strings such as identification numbers carry vital pieces of information in documents. In this paper, we present a novel algorithm for automatic extraction of numeric strings in unconstrained handwritten document images. The algorithm has two main phases: pruning and verification. In the pruning phase, the algorithm first performs a new segment-merge procedure on each text line, and then using a new regularity measure, it prunes all sequences of characters that are unlikely to be numeric strings. The segment-merge procedure is composed of two modules: a new explicit character segmentation algorithm which is based on analysis of skeletal graphs and a merging algorithm which is based on graph partitioning. All the candidate sequences that pass the pruning phase are sent to a recognition-based verification phase for the final decision. The recognition is based on a coarse-to-fine approach using probabilistic RBF networks. We developed our algorithm for the processing of real-world documents where letters and digits may be connected or broken in a document. The effectiveness of the proposed approach is shown by extensive experiments done on a real-world database of 607 documents which contains handwritten, machine-printed and mixed documents with different types of layouts and levels of noise.
Mainela-Arnold, Elina; Evans, Julia L.
2014-01-01
This study tested the predictions of the procedural deficit hypothesis by investigating the relationship between sequential statistical learning and two aspects of lexical ability, lexical-phonological and lexical-semantic, in children with and without specific language impairment (SLI). Participants included 40 children (ages 8;5–12;3), 20 children with SLI and 20 with typical development. Children completed Saffran’s statistical word segmentation task, a lexical-phonological access task (gating task), and a word definition task. Poor statistical learners were also poor at managing lexical-phonological competition during the gating task. However, statistical learning was not a significant predictor of semantic richness in word definitions. The ability to track statistical sequential regularities may be important for learning the inherently sequential structure of lexical-phonology, but not as important for learning lexical-semantic knowledge. Consistent with the procedural/declarative memory distinction, the brain networks associated with the two types of lexical learning are likely to have different learning properties. PMID:23425593
Biomedical sensing and imaging for the anterior segment of the eye
NASA Astrophysics Data System (ADS)
Eom, Tae Joong; Yoo, Young-Sik; Lee, Yong-Eun; Kim, Beop-Min; Joo, Choun-Ki
2015-07-01
The eye is an optical system composed principally of the cornea, lens, and retina. Ophthalmologists can diagnose the status of a patient's eye from information provided by optical sensors or images, as well as from history taking or physical examinations. Recently, we developed a prototype of an optical coherence tomography (OCT) image guided femtosecond laser cataract surgery system. The system combines a swept-source OCT and a femtosecond (fs) laser and affords 2D and 3D structural information to increase the efficiency and safety of the cataract procedure. The OCT imaging range was extended to achieve 3D imaging from the cornea to the lens posterior. With this prototype, surgeons can plan the laser illumination range for nuclear division and segmentation, and monitor the whole cataract surgery procedure using the real-time OCT. The surgery system was demonstrated with an extracted pig eye and an in vivo rabbit eye to verify the system performance and stability.
Hyperspectral image segmentation of the common bile duct
NASA Astrophysics Data System (ADS)
Samarov, Daniel; Wehner, Eleanor; Schwarz, Roderich; Zuzak, Karel; Livingston, Edward
2013-03-01
Over the course of the last several years hyperspectral imaging (HSI) has seen increased usage in biomedicine. Within the medical field in particular HSI has been recognized as having the potential to make an immediate impact by reducing the risks and complications associated with laparotomies (surgical procedures involving large incisions into the abdominal wall) and related procedures. There are several ongoing studies focused on such applications. Hyperspectral images were acquired during pancreatoduodenectomies (commonly referred to as Whipple procedures), a surgical procedure done to remove cancerous tumors involving the pancreas and gallbladder. As a result of the complexity of the local anatomy, identifying where the common bile duct (CBD) is can be difficult, resulting in comparatively high incidents of injury to the CBD and associated complications. It is here that HSI has the potential to help reduce the risk of such events from happening. Because the bile contained within the CBD exhibits a unique spectral signature, we are able to utilize HSI segmentation algorithms to help in identifying where the CBD is. In the work presented here we discuss approaches to this segmentation problem and present the results.
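One common way to exploit a known spectral signature for segmentation is the spectral angle mapper; it is offered here purely as an illustration of signature-based labelling and is not necessarily the algorithm used in the study. The hyperspectral cube and the bile reference spectrum are hypothetical inputs.

import numpy as np

def spectral_angle_mask(cube, reference, angle_threshold=0.10):
    # Label pixels whose spectra lie within angle_threshold radians of the reference spectrum
    flat = cube.reshape(-1, cube.shape[-1]).astype(float)
    dots = flat @ reference
    norms = np.linalg.norm(flat, axis=1) * np.linalg.norm(reference)
    angles = np.arccos(np.clip(dots / (norms + 1e-12), -1.0, 1.0))
    return (angles < angle_threshold).reshape(cube.shape[:2])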
A state-of-the-art review on segmentation algorithms in intravascular ultrasound (IVUS) images.
Katouzian, Amin; Angelini, Elsa D; Carlier, Stéphane G; Suri, Jasjit S; Navab, Nassir; Laine, Andrew F
2012-09-01
Over the past two decades, intravascular ultrasound (IVUS) image segmentation has remained a challenge for researchers while the use of this imaging modality is rapidly growing in catheterization procedures and in research studies. IVUS provides cross-sectional grayscale images of the arterial wall and the extent of atherosclerotic plaques with high spatial resolution in real time. In this paper, we review recently developed image processing methods for the detection of media-adventitia and luminal borders in IVUS images acquired with different transducers operating at frequencies ranging from 20 to 45 MHz. We discuss methodological challenges, lack of diversity in reported datasets, and weaknesses of quantification metrics that make IVUS segmentation still an open problem despite all efforts. In conclusion, we call for a common reference database, validation metrics, and ground-truth definition with which new and existing algorithms could be benchmarked.
TOPEX Microwave Radiometer - Thermal design verification test and analytical model validation
NASA Technical Reports Server (NTRS)
Lin, Edward I.
1992-01-01
The testing of the TOPEX Microwave Radiometer (TMR) is described in terms of hardware development based on the modeling and thermal vacuum testing conducted. The TMR and the vacuum-test facility are described, and the thermal verification test includes a hot steady-state segment, a cold steady-state segment, and a cold survival mode segment, totalling 65 hours. A graphic description is given of the test history as it relates to temperature tracking, and two multinode TMR test-chamber models are compared to the test results. Large discrepancies between the test data and the model predictions are attributed to contact conductance, effective emittance from the multilayer insulation, and heat leaks related to deviations from the flight configuration. The TMR thermal testing/modeling effort is shown to provide technical corrections for the procedure outlined, and the need for validating predictive models is underscored.
Limb lengthening in Turner syndrome.
Noonan, K. J.; Leyes, M.; Forriol, F.
1997-01-01
We report the results and complications of eight consecutive patients who underwent bilateral tibial lengthenings for dwarfism associated with Turner syndrome. Lengthening was performed via distraction osteogenesis with monolateral external fixation. Tibias were lengthened an average distance of 9.2 centimeters, or 33 percent of the original tibial length. The average total treatment time was 268 days. The overall complication rate was 169 percent for each tibia lengthened, and each segment required an average of 1.7 additional procedures. Seven cases (44 percent) required Achilles tendon lengthening and nine cases (56 percent) developed angulation before or after fixator removal; six of these segments required corrective osteotomy for axial malalignment. Two cases (12.5 percent) developed distraction site nonunion and required plating and bone grafting. From this series we conclude that tibial lengthening via distraction osteogenesis can be used to treat disproportionate short stature in patients with Turner syndrome. However, the benefit of a cosmetic increase in height may not compensate for the high complication rate. Efforts to determine the psychosocial and functional benefits of limb lengthening in patients with short stature are necessary to establish the true cost-benefit ratio of this procedure. PMID:9234980
Alkaduhimi, Hassanin; van den Bekerom, Michel P J; van Deurzen, Derek F P
2017-06-01
Posterior shoulder dislocations are accompanied by high forces and can result in an anteromedial humeral head impression fracture called a reverse Hill-Sachs lesion. This reverse Hill-Sachs lesion can result in serious complications including posttraumatic osteoarthritis, posterior dislocations, osteonecrosis, persistent joint stiffness, and loss of shoulder function. Treatment is challenging and depends on the amount of bone loss. Several techniques have been reported to describe the surgical treatment of lesions larger than 20%. However, there is still limited evidence with regard to the optimal procedure. Favorable results have been reported with segmental reconstruction of the reverse Hill-Sachs lesion with bone allograft. Although the procedure of segmental reconstruction has been used in several studies, its technique has not yet been described in detail. In this report we provide a step-by-step description of how to perform a segmental reconstruction of a reverse Hill-Sachs defect.
General Staining and Segmentation Procedures for High Content Imaging and Analysis.
Chambers, Kevin M; Mandavilli, Bhaskar S; Dolman, Nick J; Janes, Michael S
2018-01-01
Automated quantitative fluorescence microscopy, also known as high content imaging (HCI), is a rapidly growing analytical approach in cell biology. Because automated image analysis relies heavily on robust demarcation of cells and subcellular regions, reliable methods for labeling cells are a critical component of the HCI workflow. Labeling of cells for image segmentation is typically performed with fluorescent probes that bind DNA for nuclear-based cell demarcation or with those which react with proteins for image analysis based on whole cell staining. These reagents, along with instrument and software settings, play an important role in the successful segmentation of cells in a population for automated and quantitative image analysis. In this chapter, we describe standard procedures for labeling and image segmentation in both live and fixed cell samples. The chapter also provides troubleshooting guidelines for some of the common problems associated with these aspects of HCI.
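A minimal sketch, under assumed filter sizes and thresholds, of nuclear-based image segmentation of the kind described above: threshold a DNA-stain channel, clean the mask, and label individual nuclei. This illustrates the general workflow rather than the chapter's specific protocols.

import numpy as np
from scipy import ndimage
from skimage.filters import gaussian, threshold_otsu
from skimage.morphology import remove_small_objects

def label_nuclei(dna_channel, min_size=50):
    smoothed = gaussian(dna_channel.astype(float), sigma=2)
    mask = smoothed > threshold_otsu(smoothed)              # global Otsu threshold
    mask = remove_small_objects(mask, min_size=min_size)    # drop small debris
    mask = ndimage.binary_fill_holes(mask)
    labels, n_nuclei = ndimage.label(mask)                  # one integer label per nucleus
    return labels, n_nuclei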
Baker, J T; Solomon, S
1976-01-01
1. The ability of maturing rats to excrete a sodium load was studied by micropuncture and clearance procedures. 2. During control conditions, no change of glomerular filtration rate or sodium excretion was observed for the time period of the entire procedure (P > 0.20). During the infusion of hypertonic (4%) sodium chloride, fractional sodium excretion was 0.08 +/- 0.01 in rats 21-30 days old and 0.14 +/- 0.01 (P < 0.01) in adults. However, the depression of proximal tubular water re-absorption was equal in both groups (P > 0.20). 3. Proximal glomerulotubular balance for water re-absorption was similar in all groups (P < 0.20). Since end-proximal tubular water excretion and depression of fractional water excretion were the same in all animals, differences of urinary sodium excretion during development are probably due to differences of function of segments beyond the proximal tubule. 4. Fractional potassium excretion was reduced in young rats (0.17 +/- 0.04) during hypertonic sodium chloride infusion, compared to adults (0.24 +/- 0.01, P < 0.05). 5. Passage time of fast green through cortical segments, in seconds, is prolonged in young rats during control conditions. Similar decreases of passage time were seen in all groups during hypertonic sodium chloride infusion. No segmental differences of passage time were seen during development. 6. No difference in the relationship between fractional sodium and water excretion was seen during development of the renal response to hypertonic sodium chloride infusion. Thus, altered sensitivity to sodium chloride osmotic diuresis does not exist during maturation in rats. PMID:945839
Cryo-EM Structure Determination Using Segmented Helical Image Reconstruction.
Fromm, S A; Sachse, C
2016-01-01
Treating helices as single-particle-like segments followed by helical image reconstruction has become the method of choice for high-resolution structure determination of well-ordered helical viruses as well as flexible filaments. In this review, we will illustrate how the combination of latest hardware developments with optimized image processing routines have led to a series of near-atomic resolution structures of helical assemblies. Originally, the treatment of helices as a sequence of segments followed by Fourier-Bessel reconstruction revealed the potential to determine near-atomic resolution structures from helical specimens. In the meantime, real-space image processing of helices in a stack of single particles was developed and enabled the structure determination of specimens that resisted classical Fourier helical reconstruction and also facilitated high-resolution structure determination. Despite the progress in real-space analysis, the combination of Fourier and real-space processing is still commonly used to better estimate the symmetry parameters as the imposition of the correct helical symmetry is essential for high-resolution structure determination. Recent hardware advancement by the introduction of direct electron detectors has significantly enhanced the image quality and together with improved image processing procedures has made segmented helical reconstruction a very productive cryo-EM structure determination method. © 2016 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Sawdy, D. T.; Beckemeyer, R. J.; Patterson, J. D.
1976-01-01
Results are presented from detailed analytical studies made to define methods for obtaining improved multisegment lining performance by taking advantage of relative placement of each lining segment. Properly phased liner segments reflect and spatially redistribute the incident acoustic energy and thus provide additional attenuation. A mathematical model was developed for rectangular ducts with uniform mean flow. Segmented acoustic fields were represented by duct eigenfunction expansions, and mode-matching was used to ensure continuity of the total field. Parametric studies were performed to identify attenuation mechanisms and define preliminary liner configurations. An optimization procedure was used to determine optimum liner impedance values for a given total lining length, Mach number, and incident modal distribution. Optimal segmented liners are presented and it is shown that, provided the sound source is well-defined and flow environment is known, conventional infinite duct optimum attenuation rates can be improved. To confirm these results, an experimental program was conducted in a laboratory test facility. The measured data are presented in the form of analytical-experimental correlations. Excellent agreement between theory and experiment verifies and substantiates the analytical prediction techniques. The results indicate that phased liners may be of immediate benefit in the development of improved aircraft exhaust duct noise suppressors.
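As a hedged illustration in standard duct-acoustics notation (the report's own symbols and normalizations may differ), the segmented-field representation and matching conditions described above take the form

p_j(x, y) = \sum_n \left[ A_{jn}^{+} e^{-i k_{x,jn}^{+} x} + A_{jn}^{-} e^{+i k_{x,jn}^{-} x} \right] \psi_{jn}(y),

where \psi_{jn}(y) are the duct eigenfunctions of lining segment j for its wall impedance and uniform mean-flow Mach number, and k_{x,jn}^{\pm} are the corresponding axial wavenumbers of the forward- and backward-travelling modes. Mode matching enforces continuity of the total field at each segment interface x = x_j,

p_j(x_j, y) = p_{j+1}(x_j, y), \qquad u_{x,j}(x_j, y) = u_{x,j+1}(x_j, y) \quad \text{for all } y,

which couples the modal amplitudes A_{jn}^{\pm} of adjacent segments and produces the inter-segment reflection and redistribution of acoustic energy that properly phased liners exploit.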
Systems Maintenance Automated Repair Tasks (SMART)
NASA Technical Reports Server (NTRS)
Schuh, Joseph; Mitchell, Brent; Locklear, Louis; Belson, Martin A.; Al-Shihabi, Mary Jo Y.; King, Nadean; Norena, Elkin; Hardin, Derek
2010-01-01
SMART is a uniform automated discrepancy analysis and repair-authoring platform that improves technical accuracy and timely delivery of repair procedures for a given discrepancy (see figure a). SMART will minimize data errors, create uniform repair processes, and enhance the existing knowledge base of engineering repair processes. This innovation is the first tool developed that links the hardware specification requirements with the actual repair methods, sequences, and required equipment. SMART is flexibly designed to be useable by multiple engineering groups requiring decision analysis, and by any work authorization and disposition platform (see figure b). The organizational logic creates the link between specification requirements of the hardware, and specific procedures required to repair discrepancies. The first segment in the SMART process uses a decision analysis tree to define all the permutations between component/ subcomponent/discrepancy/repair on the hardware. The second segment uses a repair matrix to define what the steps and sequences are for any repair defined in the decision tree. This segment also allows for the selection of specific steps from multivariable steps. SMART will also be able to interface with outside databases and to store information from them to be inserted into the repair-procedure document. Some of the steps will be identified as optional, and would only be used based on the location and the current configuration of the hardware. The output from this analysis would be sent to a work authoring system in the form of a predefined sequence of steps containing required actions, tools, parts, materials, certifications, and specific requirements controlling quality, functional requirements, and limitations.
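A hypothetical sketch (component names and steps are invented for illustration; this is not SMART's actual data) of the two-segment organization described above: a decision tree keyed by component/subcomponent/discrepancy that resolves to a repair, and a repair matrix that maps each repair to its ordered steps, tools and optional steps.

decision_tree = {
    ("thermal_blanket", "outer_layer", "tear"): "patch_repair",
    ("thermal_blanket", "outer_layer", "contamination"): "clean_and_inspect",
}

repair_matrix = {
    "patch_repair": {
        "steps": ["clean area", "cut patch to size", "bond patch", "cure", "inspect"],
        "tools": ["approved adhesive", "patch stock"],
        "optional_steps": ["photo-document before repair"],
    },
}

def author_repair(component, subcomponent, discrepancy):
    # Resolve a discrepancy to an ordered repair procedure, or None if undefined
    repair = decision_tree.get((component, subcomponent, discrepancy))
    return repair_matrix.get(repair) if repair else None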
Statistical segmentation of multidimensional brain datasets
NASA Astrophysics Data System (ADS)
Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro
2001-07-01
This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes some of the problems involved in multidimensional clustering techniques, such as partial volume effects (PVE), processing speed and the difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) Exclusion of background and skull voxels using threshold-based region growing techniques with fully automated seed selection. 2) Expectation Maximization algorithms are used to estimate the probability density function (PDF) of the remaining pixels, which are assumed to be mixtures of Gaussians. These pixels can then be classified into cerebrospinal fluid (CSF), white matter and grey matter. Using this procedure, our method takes advantage of the full covariance matrix (instead of the diagonal) for the joint PDF estimation. On the other hand, logistic discrimination techniques are more robust against violation of multi-Gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested with a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold standard. Our results were more robust and closer to the gold standard.
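The second stage described above can be sketched with an EM-fitted Gaussian mixture over co-registered T1/T2 intensities using full covariance matrices; this is a minimal illustration (the Markov random field stage and the exact feature set are omitted), not the authors' code.

import numpy as np
from sklearn.mixture import GaussianMixture

def classify_tissues(t1, t2, brain_mask):
    # t1, t2: intensity volumes; brain_mask: boolean volume of voxels to classify
    features = np.column_stack([t1[brain_mask], t2[brain_mask]]).astype(float)
    gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
    labels_in_mask = gmm.fit_predict(features)    # EM estimation, then hard assignment
    labels = np.zeros(t1.shape, dtype=np.int8)
    labels[brain_mask] = labels_in_mask + 1       # 1..3 correspond to CSF/grey/white (order arbitrary)
    return labels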
Besson, Florent L; Henry, Théophraste; Meyer, Céline; Chevance, Virgile; Roblot, Victoire; Blanchet, Elise; Arnould, Victor; Grimon, Gilles; Chekroun, Malika; Mabille, Laurence; Parent, Florence; Seferian, Andrei; Bulifon, Sophie; Montani, David; Humbert, Marc; Chaumet-Riffaud, Philippe; Lebon, Vincent; Durand, Emmanuel
2018-04-03
Purpose: To assess the performance of the ITK-SNAP software for fluorodeoxyglucose (FDG) positron emission tomography (PET) segmentation of complex-shaped lung tumors compared with an optimized, expert-based manual reference standard. Materials and Methods: Seventy-six FDG PET images of thoracic lesions were retrospectively segmented by using ITK-SNAP software. Each tumor was manually segmented by six raters to generate an optimized reference standard by using the simultaneous truth and performance level estimate algorithm. Four raters segmented 76 FDG PET images of lung tumors twice by using the ITK-SNAP active contour algorithm. Accuracy of the ITK-SNAP procedure was assessed by using the Dice coefficient and the Hausdorff metric. Interrater and intrarater reliability were estimated by using intraclass correlation coefficients of output volumes. Finally, the ITK-SNAP procedure was compared with currently recommended PET tumor delineation methods based on volumes of interest (VOIs) obtained by thresholding at 41% (VOI41) and 50% (VOI50) of the tumor's maximal metabolism intensity. Results: Accuracy estimates for the ITK-SNAP procedure indicated a Dice coefficient of 0.83 (95% confidence interval: 0.77, 0.89) and a Hausdorff distance of 12.6 mm (95% confidence interval: 9.82, 15.32). Interrater reliability was an intraclass correlation coefficient of 0.94 (95% confidence interval: 0.91, 0.96). The intrarater reliabilities were intraclass correlation coefficients above 0.97. Finally, the VOI41 and VOI50 accuracy metrics were as follows: Dice coefficient, 0.48 (95% confidence interval: 0.44, 0.51) and 0.34 (95% confidence interval: 0.30, 0.38), respectively, and Hausdorff distance, 25.6 mm (95% confidence interval: 21.7, 31.4) and 31.3 mm (95% confidence interval: 26.8, 38.4), respectively. Conclusion: ITK-SNAP is accurate and reliable for active-contour-based segmentation of heterogeneous thoracic PET tumors. ITK-SNAP surpassed the recommended PET methods compared with ground truth manual segmentation. © RSNA, 2018.
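The two accuracy metrics reported above can be computed between a test segmentation and a reference mask as follows; this mirrors the standard definitions (distances in voxel units; multiply by voxel spacing for millimetres) rather than the study's exact evaluation scripts.

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(seg, ref):
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

def hausdorff_distance(seg, ref):
    # Symmetric Hausdorff distance between the coordinates of two binary masks
    a = np.argwhere(seg)
    b = np.argwhere(ref)
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])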
First ALPPS procedure using a total robotic approach.
Vicente, E; Quijano, Y; Ielpo, B; Fabra, I
2016-12-01
The ALPPS procedure is gaining interest. Indications and technical aspects of this technique are still under debate [1]. Only 4 totally laparoscopic ALPPS procedures have been described in the literature and none by a robotic approach [2-4]. This video demonstrates the technical aspects of a totally robotic ALPPS. A 58 year old man with sigmoid adenocarcinoma with multiple right liver metastases extending to segments IV and I underwent neoadjuvant XELOX and 5-fluorouracil therapy. Preoperative CT volumetric scan showed a FLR/TLV (Future Liver Remnant/Total Liver Volume) of 28%. A totally robotic ALPPS procedure was planned using the DaVinci Si. Tumor resection from the FLR (including segment I) is followed by parenchymal transection between the FLR and the diseased part of the liver with concomitant right portal vein ligation. Small branches to segment IV from the left portal vein were resected along the round ligament at this step. The right biliary tract was resected, as it had been partially compromised during its dissection, being partially encircled by a metastasis at segment IV. The second stage was performed totally robotically on the 13th postoperative day, with a FLR/TLV of 40%. No strong adhesions were found, making this stage much easier than with an open approach. During this step, the right hepatic artery and right suprahepatic vein are resected. Finally, the specimen was retrieved inside a plastic bag through a Pfannenstiel incision. Postoperative pathology showed margins free from disease. The ALPPS procedure performed by a robotic approach could be a safe and feasible technique in experienced centers with advanced robotic skills. Copyright © 2015 Elsevier Ltd. All rights reserved.
Cunningham, Charles E; Kostrzewa, Linda; Rimas, Heather; Chen, Yvonne; Deal, Ken; Blatz, Susan; Bowman, Alida; Buchanan, Don H; Calvert, Randy; Jennings, Barbara
2013-01-01
Patients value health service teams that function effectively. Organizational justice is linked to the performance, health, and emotional adjustment of the members of these teams. We used a discrete-choice conjoint experiment to study the organizational justice improvement preferences of pediatric health service providers. Using themes from a focus group with 22 staff, we composed 14 four-level organizational justice improvement attributes. A sample of 652 staff (76 % return) completed 30 choice tasks, each presenting three hospitals defined by experimentally varying the attribute levels. Latent class analysis yielded three segments. Procedural justice attributes were more important to the Decision Sensitive segment, 50.6 % of the sample. They preferred to contribute to and understand how all decisions were made and expected management to act promptly on more staff suggestions. Interactional justice attributes were more important to the Conduct Sensitive segment (38.5 %). A universal code of respectful conduct, consequences encouraging respectful interaction, and management's response when staff disagreed with them were more important to this segment. Distributive justice attributes were more important to the Benefit Sensitive segment, 10.9 % of the sample. Simulations predicted that, while Decision Sensitive (74.9 %) participants preferred procedural justice improvements, Conduct (74.6 %) and Benefit Sensitive (50.3 %) participants preferred interactional justice improvements. Overall, 97.4 % of participants would prefer an approach combining procedural and interactional justice improvements. Efforts to create the health service environments that patients value need to be comprehensive enough to address the preferences of segments of staff who are sensitive to different dimensions of organizational justice.
Frasson, L; Neubert, J; Reina, S; Oldfield, M; Davies, B L; Rodriguez Y Baena, F
2010-01-01
The popularity of minimally invasive surgical procedures is driving the development of novel, safer and more accurate surgical tools. In this context a multi-part probe for soft tissue surgery is being developed in the Mechatronics in Medicine Laboratory at Imperial College, London. This study reports an optimization procedure using finite element methods, for the identification of an interlock geometry able to limit the separation of the segments composing the multi-part probe. An optimal geometry was obtained and the corresponding three-dimensional finite element model validated experimentally. Simulation results are shown to be consistent with the physical experiments. The outcome of this study is an important step in the provision of a novel miniature steerable probe for surgery.
NASA Astrophysics Data System (ADS)
Huang, Xia; Li, Chunqiang; Xiao, Chuan; Sun, Wenqing; Qian, Wei
2017-03-01
The temporal focusing two-photon microscope (TFM) was developed to perform depth-resolved wide-field fluorescence imaging by capturing frames sequentially. However, due to strong, non-negligible noise and diffraction rings surrounding particles, further analysis is extremely difficult without a precise particle localization technique. In this paper, we developed a fully automated scheme to locate particle positions with high noise tolerance. Our scheme includes the following procedures: noise reduction using a hybrid Kalman filter method, particle segmentation based on a multiscale kernel graph-cuts global and local segmentation algorithm, and a kinematic-estimation-based particle tracking method. Both isolated and partially overlapped particles can be accurately identified with removal of unrelated pixels. Based on our quantitative analysis, 96.22% of isolated particles and 84.19% of partially overlapped particles were successfully detected.
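A minimal sketch of the first stage of such a pipeline (temporal noise reduction with a pixel-wise Kalman filter), assuming the stack is a NumPy array of sequential TFM frames; the function name and parameter values are illustrative, not the authors' implementation:

```python
import numpy as np

def kalman_denoise_stack(stack, process_var=1e-3, noise_var=1e-1):
    """Pixel-wise temporal Kalman smoothing of an image stack.

    stack: array of shape (n_frames, height, width), one noisy frame per time point.
    Returns an array of the same shape with reduced temporal noise.
    """
    estimate = stack[0].astype(float)        # initial state: first frame
    error = np.full(estimate.shape, 1.0)     # initial error covariance per pixel
    out = np.empty_like(stack, dtype=float)
    out[0] = estimate
    for t in range(1, stack.shape[0]):
        # Predict: state carried over, uncertainty grows by the process variance.
        error = error + process_var
        # Update: blend prediction with the new measurement.
        gain = error / (error + noise_var)
        estimate = estimate + gain * (stack[t] - estimate)
        error = (1.0 - gain) * error
        out[t] = estimate
    return out
```

The denoised stack would then feed the segmentation and tracking stages described above.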
Mastmeyer, André; Engelke, Klaus; Fuchs, Christina; Kalender, Willi A
2006-08-01
We have developed a new hierarchical 3D technique to segment the vertebral bodies in order to measure bone mineral density (BMD) with high trueness and precision in volumetric CT datasets. The hierarchical approach starts with a coarse separation of the individual vertebrae, applies a variety of techniques to segment the vertebral bodies with increasing detail, and ends with the definition of an anatomic coordinate system for each vertebral body, relative to which up to 41 trabecular and cortical volumes of interest are positioned. In a pre-segmentation step, constraints consisting of Boolean combinations of simple geometric shapes are determined that enclose each individual vertebral body. Bound by these constraints, viscous deformable models are used to segment the main shape of the vertebral bodies. Volume growing and morphological operations then capture the fine details of the bone-soft tissue interface. In the volumes of interest, bone mineral density and content are determined. In addition, geometric parameters of the segmented vertebral bodies, such as volume or the lengths of the main axes of inertia, can be measured. Intra- and inter-operator precision errors of the segmentation procedure were analyzed using existing clinical patient datasets. Results for segmented volume, BMD, and coordinate system position were below 2.0%, 0.6%, and 0.7%, respectively. Trueness was analyzed using phantom scans. The bias of the segmented volume was below 4%; for BMD it was below 1.5%. The long-term goal of this work is improved fracture prediction and patient monitoring in the field of osteoporosis. A true 3D segmentation also enables an accurate measurement of geometrical parameters that may augment the clinical value of a pure BMD analysis.
Closed loop problems in biomechanics. Part II--an optimization approach.
Vaughan, C L; Hay, J G; Andrews, J G
1982-01-01
A closed loop problem in biomechanics may be defined as a problem in which there are one or more closed loops formed by the human body in contact with itself or with an external system. Under certain conditions the problem is indeterminate--the unknown forces and torques outnumber the equations. Force transducing devices, which would help solve this problem, have serious drawbacks, and existing methods are inaccurate and non-general. The purposes of the present paper are (1) to develop a general procedure for solving closed loop problems; (2) to illustrate the application of the procedure; and (3) to examine the validity of the procedure. A mathematical optimization approach is applied to the solution of three different closed loop problems--walking up stairs, vertical jumping and cartwheeling. The following conclusions are drawn: (1) the method described is reasonably successful for predicting horizontal and vertical reaction forces at the distal segments although problems exist for predicting the points of application of these forces; (2) the results provide some support for the notion that the human neuromuscular mechanism attempts to minimize the joint torques and thus, to a certain degree, the amount of muscular effort; (3) in the validation procedure it is desirable to have a force device for each of the distal segments in contact with a fixed external system; and (4) the method is sufficiently general to be applied to all classes of closed loop problems.
NASA Astrophysics Data System (ADS)
Grochocka, M.
2013-12-01
Mobile laser scanning (MLS) is a dynamically developing measurement technology that is becoming increasingly widespread in acquiring three-dimensional spatial information. Continuous technical progress based on the use of new tools and technology development, and thus better use of existing resources, reveals new horizons for the extensive use of MLS technology. A mobile laser scanning system is usually used for mapping linear objects, in particular for the inventory of roads, railways, bridges, shorelines, shafts, tunnels, and even geometrically complex urban spaces. The measurement is taken from the perspective of the user of the object but does not interfere with movement or work on it. This paper presents initial results of the segmentation of data acquired by MLS. The data used in this work were obtained as part of an inventory measurement of railway line infrastructure. The point clouds were measured using profile scanners installed on a railway platform. To process the data, the open-source Point Cloud Library (PCL) was used; its tools are provided as programming library templates. PCL is an open, independent project, operating on a large scale, for processing 2D/3D images and point clouds. The PCL software is released under the BSD license (Berkeley Software Distribution License), which means it is free for commercial and research use. The article presents a number of issues related to the use of this software and its capabilities. The segmentation of the data is based on the pcl_segmentation template library, which contains segmentation algorithms for separating clusters. These algorithms are best suited to processing point clouds consisting of a number of spatially isolated regions. The library performs cluster extraction based on model fitting with sample consensus methods for various parametric models (planes, cylinders, spheres, lines, etc.). Most of the mathematical operations are carried out using the Eigen library, a set of templates for linear algebra.
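PCL itself is a C++ template library; purely as an illustration, the following Python sketch shows the same idea of Euclidean cluster extraction (grouping spatially isolated regions of a point cloud) using a density-based clustering routine rather than the pcl_segmentation API:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def euclidean_clusters(points, tolerance=0.25, min_samples=10):
    """Group a point cloud (N x 3 array) into spatially isolated clusters.

    tolerance: neighbourhood radius (same units as the cloud) within which
    points are considered connected; min_samples: minimum neighbours for a
    core point. Returns a list of index arrays, one per extracted cluster.
    """
    labels = DBSCAN(eps=tolerance, min_samples=min_samples).fit_predict(points)
    return [np.where(labels == k)[0] for k in np.unique(labels) if k != -1]
```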
Segmentation for the enhancement of microcalcifications in digital mammograms.
Milosevic, Marina; Jankovic, Dragan; Peulic, Aleksandar
2014-01-01
Microcalcification clusters appear as groups of small, bright particles with arbitrary shapes on mammographic images. They are the earliest sign of breast carcinomas, and their detection is key to improving breast cancer prognosis. However, because of the low contrast of microcalcifications and their noise-like properties, they are difficult to detect. This work is devoted to developing a system for the detection of microcalcifications in digital mammograms. After removing noise from the mammogram using the Discrete Wavelet Transform (DWT), we first selected the region of interest (ROI) in order to demarcate the breast region on the mammogram. Segmenting the region of interest is one of the most important stages of the mammogram processing procedure. The proposed segmentation method is based on filtering with the Sobel operator. This process identifies the significant pixels that belong to the edges of microcalcifications. Microcalcifications were detected by increasing the contrast of the images obtained by applying the Sobel operator. To confirm the effectiveness of this microcalcification segmentation method, Support Vector Machine (SVM) and k-Nearest Neighbor (k-NN) classifiers are employed for the classification task using a cross-validation technique.
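A minimal sketch of the Sobel-based segmentation step, assuming a denoised ROI as a NumPy array; the Otsu threshold is an assumption added for illustration, since the abstract does not state how the gradient image is binarized:

```python
from skimage import filters

def microcalcification_candidates(roi):
    """Highlight candidate microcalcification edges inside a breast ROI.

    roi: 2D float array, the wavelet-denoised region of interest.
    Returns a boolean mask of pixels with a strong Sobel gradient response.
    """
    edges = filters.sobel(roi)                 # gradient magnitude (Sobel operator)
    threshold = filters.threshold_otsu(edges)  # assumed automatic global threshold
    return edges > threshold
```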
Fuzzy Markov random fields versus chains for multispectral image segmentation.
Salzenstein, Fabien; Collet, Christophe
2006-11-01
This paper deals with a comparison of recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation. The fuzzy scheme takes into account discrete and continuous classes which model the imprecision of the hidden data. In this framework, we assume the dependence between bands and we express the general model for the covariance matrix. A fuzzy Markov chain model is developed in an unsupervised way. This method is compared with the fuzzy Markovian field model previously proposed by one of the authors. The segmentation task is processed with Bayesian tools, such as the well-known MPM (Mode of Posterior Marginals) criterion. Our goal is to compare the robustness and rapidity for both methods (fuzzy Markov fields versus fuzzy Markov chains). Indeed, such fuzzy-based procedures seem to be a good answer, e.g., for astronomical observations when the patterns present diffuse structures. Moreover, these approaches allow us to process missing data in one or several spectral bands which correspond to specific situations in astronomy. To validate both models, we perform and compare the segmentation on synthetic images and raw multispectral astronomical data.
Aquino, Arturo; Gegundez-Arias, Manuel Emilio; Marin, Diego
2010-11-01
Optic disc (OD) detection is an important step in developing systems for automated diagnosis of various serious ophthalmic pathologies. This paper presents a new template-based methodology for segmenting the OD from digital retinal images. This methodology uses morphological and edge detection techniques followed by the Circular Hough Transform to obtain a circular OD boundary approximation. It requires a pixel located within the OD as initial information. For this purpose, a location methodology based on a voting-type algorithm is also proposed. The algorithms were evaluated on the 1200 images of the publicly available MESSIDOR database. The location procedure succeeded in 99% of cases, with an average computational time of 1.67 s and a standard deviation of 0.14 s. The segmentation algorithm rendered an average overlap between automated segmentations and true OD regions of 86%. The average computational time was 5.69 s with a standard deviation of 0.54 s. Moreover, a discussion of the advantages and disadvantages of the models most commonly used for OD segmentation is also presented in this paper.
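A hedged sketch of the Circular Hough Transform step, assuming a pre-located sub-image around the OD and a plausible radius range in pixels (both are illustrative assumptions, not the paper's parameters):

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def approximate_od_boundary(sub_image, radii=np.arange(30, 90, 2)):
    """Fit a circular approximation of the optic disc boundary.

    sub_image: 2D array cropped around the candidate OD location;
    radii: assumed range of plausible OD radii in pixels.
    Returns (centre_row, centre_col, radius) of the best-scoring circle.
    """
    edges = canny(sub_image, sigma=2.0)
    accumulator = hough_circle(edges, radii)
    _, cols, rows, rads = hough_circle_peaks(accumulator, radii, total_num_peaks=1)
    return rows[0], cols[0], rads[0]
```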
Vesselness propagation: a fast interactive vessel segmentation method
NASA Astrophysics Data System (ADS)
Cai, Wenli; Dachille, Frank; Harris, Gordon J.; Yoshida, Hiroyuki
2006-03-01
With the rapid development of multi-detector computed tomography (MDCT), resulting in increasing temporal and spatial resolution of data sets, clinical use of computed tomographic angiography (CTA) is rapidly increasing. Analysis of vascular structures is much needed in CTA images; however, the basis of the analysis, vessel segmentation, can still be a challenging problem. In this paper, we present a fast interactive method for CTA vessel segmentation, called vesselness propagation. This method is a two-step procedure, with a pre-processing step and an interactive step. During the pre-processing step, a vesselness volume is computed by application of a CTA transfer function followed by a multi-scale Hessian filtering. At the interactive stage, the propagation is controlled interactively in terms of the priority of the vesselness. This method was used successfully in many CTA applications such as the carotid artery, coronary artery, and peripheral arteries. It takes less than one minute for a user to segment the entire vascular structure. Thus, the proposed method provides an effective way of obtaining an overview of vascular structures.
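As an illustration of the pre-processing step, the following sketch computes a multi-scale Hessian vesselness map with the Frangi filter; the scale range and the assumption that a transfer function has already been applied are mine, not the authors':

```python
from skimage.filters import frangi

def vesselness_map(cta_slice, scales=range(1, 6)):
    """Compute a multi-scale Hessian vesselness map for one CTA slice.

    cta_slice: 2D float array with a CTA transfer function already applied
    (intensities rescaled so vessel-like densities dominate).
    Returns a vesselness image in [0, 1]; bright ridges mark probable vessels.
    """
    return frangi(cta_slice, sigmas=scales, black_ridges=False)
```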
SU-C-207B-03: A Geometrical Constrained Chan-Vese Based Tumor Segmentation Scheme for PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, L; Zhou, Z; Wang, J
Purpose: Accurate segmentation of tumor in PET is challenging when part of the tumor is connected to normal organs/tissues with no difference in intensity. Conventional segmentation methods, such as thresholding or region growing, cannot generate satisfactory results in this case. We proposed a geometrically constrained Chan-Vese based scheme to segment tumor in PET for this special case by considering the similarity between two adjacent slices. Methods: The proposed scheme performs segmentation in a slice-by-slice fashion, where an accurate segmentation of one slice is used as guidance for the segmentation of the remaining slices. For a slice in which the tumor is not directly connected to organs/tissues with similar intensity values, a conventional clustering-based segmentation method under the user's guidance is used to obtain an exact tumor contour. This is set as the initial contour, and the Chan-Vese algorithm is applied to segment the tumor in the next adjacent slice by adding constraints on tumor size, position and shape. This procedure is repeated until the last slice of the PET volume containing tumor. The proposed geometrically constrained Chan-Vese algorithm was implemented in Matlab, and its performance was tested on several cervical cancer patients in whom the cervix and bladder are connected with similar activity values. Positive predictive values (PPV) were calculated to characterize the segmentation accuracy of the proposed scheme. Results: Tumors were accurately segmented by the proposed method even when they were connected to the bladder in the image with no difference in intensity. The average PPVs were 0.9571±0.0355 and 0.9894±0.0271 for 17 slices and 11 slices of PET from two patients, respectively. Conclusion: We have developed a new scheme to segment tumor in PET images for the special case in which the tumor is very similar to or connected with normal organs/tissues in the image. The proposed scheme can provide a reliable way of segmenting tumors.
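A simplified sketch of the slice-by-slice idea, using a morphological Chan-Vese implementation and the previous slice's mask as the initial level set; this stands in for, but does not reproduce, the paper's explicit size/position/shape constraints:

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def propagate_tumor_contour(pet_volume, first_mask, start_slice, n_iter=60):
    """Propagate a tumour contour slice by slice through a PET volume.

    pet_volume: 3D array (slices, rows, cols); first_mask: accurate binary mask
    of the tumour on slice `start_slice` (e.g. from clustering under user guidance).
    Each segmented slice initialises the level set of the next one.
    Returns a dict mapping slice index to binary mask.
    """
    masks = {start_slice: first_mask.astype(np.int8)}
    for z in range(start_slice + 1, pet_volume.shape[0]):
        prev = masks[z - 1]
        if prev.sum() == 0:          # tumour no longer present
            break
        masks[z] = morphological_chan_vese(pet_volume[z], n_iter,
                                           init_level_set=prev).astype(np.int8)
    return masks
```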
Prosthetic component segmentation with blur compensation: a fast method for 3D fluoroscopy.
Tarroni, Giacomo; Tersi, Luca; Corsi, Cristiana; Stagni, Rita
2012-06-01
A new method for prosthetic component segmentation from fluoroscopic images is presented. The hybrid approach we propose combines diffusion filtering, region growing and level-set techniques without exploiting any a priori knowledge of the analyzed geometry. The method was evaluated on a synthetic dataset including 270 images of knee and hip prostheses merged with real fluoroscopic data simulating different conditions of blurring and illumination gradient. The performance of the method was assessed by comparing estimated contours to references using different metrics. Results showed that the segmentation procedure is fast, accurate, independent of the operator as well as of the specific geometrical characteristics of the prosthetic component, and able to compensate for the amount of blurring and illumination gradient. Importantly, the method allows a strong reduction of the required user interaction time when compared to traditional segmentation techniques. Its effectiveness and robustness in different image conditions, together with its simplicity and fast implementation, make this prosthetic component segmentation procedure promising and suitable for multiple clinical applications, including the assessment of in vivo joint kinematics in a variety of cases.
A univocal definition of the neuronal soma morphology using Gaussian mixture models.
Luengo-Sanchez, Sergio; Bielza, Concha; Benavides-Piccione, Ruth; Fernaud-Espinosa, Isabel; DeFelipe, Javier; Larrañaga, Pedro
2015-01-01
The definition of the soma is fuzzy, as there is no clear line demarcating the soma of labeled neurons from the origin of the dendrites and axon. Thus, the morphometric analysis of the neuronal soma is highly subjective. In this paper, we provide a mathematical definition and an automatic segmentation method to delimit the neuronal soma. We applied this method to the characterization of pyramidal cells, which are the most abundant neurons in the cerebral cortex. Since there are no benchmarks with which to compare the proposed procedure, we validated the goodness of this automatic segmentation method against manual segmentation by neuroanatomists to set up a framework for comparison. We concluded that there were no significant differences between automatically and manually segmented somata, i.e., the proposed procedure segments the neurons similarly to how a neuroanatomist does. It also provides univocal, justifiable and objective cutoffs. Thus, this study provides a means of characterizing pyramidal neurons in order to objectively compare the morphometry of the somata of these neurons in different cortical areas and species.
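As a loose illustration of how a Gaussian mixture can yield an objective cutoff, the sketch below fits a two-component mixture to a hypothetical one-dimensional morphometric feature and returns the point where the component assignment changes; it is not the authors' 3D procedure:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def soma_cutoff(feature_values):
    """Find an objective cutoff separating soma-like from dendrite-like values.

    feature_values: 1D array of a morphometric measurement sampled along the
    reconstructed surface (hypothetical feature). A two-component Gaussian
    mixture is fitted and the crossing point between components is returned.
    """
    x = np.asarray(feature_values, float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
    grid = np.linspace(x.min(), x.max(), 1000).reshape(-1, 1)
    labels = gmm.predict(grid)
    change = np.where(np.diff(labels) != 0)[0]
    return float(grid[change[0]]) if change.size else float(np.median(x))
```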
Using data mining to segment healthcare markets from patients' preference perspectives.
Liu, Sandra S; Chen, Jie
2009-01-01
This paper aims to provide an example of how to use data mining techniques to identify patient segments with regard to preferences for healthcare attributes and their demographic characteristics. Data were derived from a number of individuals who received in-patient care at a health network in 2006. Data mining and conventional hierarchical clustering with average linkage and Pearson correlation procedures are employed and compared to show how each procedure best determines segmentation variables. The data mining tools identified three differentiable segments by means of cluster analysis. These three clusters have significantly different demographic profiles. The study reveals that, when compared with traditional statistical methods, data mining provides an efficient and effective tool for market segmentation. When numerous clustering variables are involved, researchers and practitioners need to incorporate factor analysis to reduce the number of variables so that clusters can be understood clearly and meaningfully. Interest in and applications of data mining are increasing in many businesses. However, this technology is seldom applied to healthcare customer experience management. The paper shows that efficient and effective application of data mining methods can aid the understanding of patient healthcare preferences.
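A brief sketch of the conventional clustering procedure mentioned above (average linkage on a Pearson-correlation distance), assuming a patients-by-attributes preference matrix; the choice of three segments mirrors the result reported here but is otherwise an analyst's input:

```python
from scipy.cluster.hierarchy import fcluster, linkage

def preference_segments(preference_matrix, n_segments=3):
    """Cluster patients by preference profile.

    preference_matrix: (patients x attribute-importance scores) array.
    Average linkage on a Pearson-correlation distance mirrors the conventional
    procedure described in the abstract.
    Returns an array of segment labels, one per patient.
    """
    tree = linkage(preference_matrix, method='average', metric='correlation')
    return fcluster(tree, t=n_segments, criterion='maxclust')
```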
3D marker-controlled watershed for kidney segmentation in clinical CT exams.
Wieclawek, Wojciech
2018-02-27
Image segmentation is an essential and non-trivial task in computer vision and medical image analysis. Computed tomography (CT) is one of the most accessible medical examination techniques for visualizing the interior of a patient's body. Among computer-aided diagnostic systems, applications dedicated to kidney segmentation represent a relatively small group. In addition, solutions in the literature are verified on relatively small databases. The goal of this research is to develop a novel algorithm for fully automated kidney segmentation. The approach is designed for large-database analysis including both physiological and pathological cases. This study presents a 3D marker-controlled watershed transform developed and employed for fully automated CT kidney segmentation. The original and most complex step in the current proposition is the automatic generation of 3D marker images. The final kidney segmentation step is the analysis of the labelled image obtained from the marker-controlled watershed transform; it consists of morphological operations and shape analysis. The implementation was carried out in the MATLAB environment, version 2017a, using, among others, the Image Processing Toolbox. 170 clinical CT abdominal studies were subjected to the analysis. The dataset includes normal as well as various pathological cases (agenesis, renal cysts, tumors, renal cell carcinoma, kidney cirrhosis, partial or radical nephrectomy, hematoma and nephrolithiasis). Manual and semi-automated delineations were used as a gold standard. Among the 67 delineated medical cases, 62 cases were 'Very good', whereas only 5 were 'Good' according to Cohen's Kappa interpretation. The segmentation results show that the mean values of Sensitivity, Specificity, Dice, Jaccard, Cohen's Kappa and Accuracy are 90.29, 99.96, 91.68, 85.04, 91.62 and 99.89%, respectively. All 170 medical cases (with and without outlines) were classified by three independent medical experts as 'Very good' in 143-148 cases, as 'Good' in 15-21 cases and as 'Moderate' in 6-8 cases. An automatic kidney segmentation approach for CT studies that can compete with commonly known solutions was developed. The algorithm gives promising results, which were confirmed during a validation procedure performed on a relatively large database, including 170 CTs with both physiological and pathological cases.
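A compact sketch of the core watershed step, assuming the marker volumes have already been generated (the paper's most involved step, not reproduced here); the gradient choice and clean-up are illustrative, not the authors' MATLAB implementation:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def marker_controlled_watershed(ct_volume, kidney_markers, background_markers):
    """Run a 3D marker-controlled watershed on an abdominal CT volume.

    ct_volume: 3D array of Hounsfield units; kidney_markers / background_markers:
    boolean volumes holding the automatically generated seed regions.
    Returns a boolean kidney mask.
    """
    # Edge strength drives the flooding; a Gaussian gradient magnitude is one choice.
    gradient = ndi.gaussian_gradient_magnitude(ct_volume.astype(float), sigma=1.0)
    markers = np.zeros(ct_volume.shape, dtype=np.int32)
    markers[kidney_markers] = 1
    markers[background_markers] = 2
    labels = watershed(gradient, markers)
    # Simple morphological clean-up of the kidney label.
    return ndi.binary_fill_holes(labels == 1)
```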
Segmentation and analysis of mouse pituitary cells with graphic user interface (GUI)
NASA Astrophysics Data System (ADS)
González, Erika; Medina, Lucía.; Hautefeuille, Mathieu; Fiordelisio, Tatiana
2018-02-01
In this work we present a method to perform pituitary cell segmentation in image stacks acquired by fluorescence microscopy from pituitary slice preparations. Although many procedures have been developed to achieve cell segmentation, they are generally based on edge detection and require high-resolution images. In the biological preparations we worked with, however, the cells are not well defined: experts identify them by their intracellular calcium activity, seen as fluorescence intensity changes in different regions over time. These intensity changes were represented as time series over regions and, because they exhibit a characteristic behavior, were used in a classification procedure to perform cell segmentation. Two logistic regression classifiers were implemented for the time-series classification task, using the area under the curve and skewness as features in the first classifier and skewness and kurtosis in the second. After finding both decision boundaries in two different feature spaces by training on 120 time series, the decision boundaries were tested on 12 image stacks through a Python graphical user interface (GUI), generating binary images where white pixels correspond to cells and black pixels to background. Results show that the area-skewness classifier reduces the time an expert dedicates to locating cells by up to 75% in some stacks, versus 92% for the kurtosis-skewness classifier, as evaluated on the number of regions the method found. Given the promising results, we expect that this method will be improved by adding more relevant features to the classifier.
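A minimal sketch of the second (kurtosis-skewness) classifier described above, assuming labelled fluorescence traces in a NumPy array; the feature choice follows the abstract, while the function name and data layout are assumptions:

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.linear_model import LogisticRegression

def train_trace_classifier(time_series, labels):
    """Train a kurtosis-skewness logistic regression classifier.

    time_series: (n_traces, n_timepoints) array of fluorescence traces;
    labels: 1 for traces from cell regions, 0 for background.
    Returns a fitted scikit-learn logistic regression model.
    """
    features = np.column_stack([skew(time_series, axis=1),
                                kurtosis(time_series, axis=1)])
    return LogisticRegression().fit(features, labels)
```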
An analog scrambler for speech based on sequential permutations in time and frequency
NASA Astrophysics Data System (ADS)
Cox, R. V.; Jayant, N. S.; McDermott, B. J.
Permutation of speech segments is an operation that is frequently used in the design of scramblers for analog speech privacy. In this paper, a sequential procedure for segment permutation is considered. This procedure can be extended to two dimensional permutation of time segments and frequency bands. By subjective testing it is shown that this combination gives a residual intelligibility for spoken digits of 20 percent with a delay of 256 ms. (A lower bound for this test would be 10 percent). The complexity of implementing such a system is considered and the issues of synchronization and channel equalization are addressed. The computer simulation results for the system using both real and simulated channels are examined.
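As a toy illustration of time-segment permutation (only one dimension of the two-dimensional time/frequency scheme, and a block rather than sequential permutation), the following sketch scrambles and descrambles a sampled speech signal:

```python
import numpy as np

def scramble(speech, segment_len, key_seed):
    """Permute fixed-length time segments of a speech signal.

    speech: 1D NumPy sample array; segment_len: samples per segment;
    key_seed: shared secret seeding the permutation.
    Returns the scrambled signal and the permutation used.
    """
    n = len(speech) // segment_len
    segments = speech[:n * segment_len].reshape(n, segment_len)
    order = np.random.default_rng(key_seed).permutation(n)
    return segments[order].reshape(-1), order

def descramble(scrambled, segment_len, order):
    """Invert the permutation using the shared segment order."""
    n = len(order)
    segments = scrambled[:n * segment_len].reshape(n, segment_len)
    inverse = np.argsort(order)
    return segments[inverse].reshape(-1)
```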
36 CFR 223.195 - Procedures for identifying and marking unprocessed timber.
Code of Federal Regulations, 2013 CFR
2013-07-01
... pattern may not be used to mark logs from any other source for a period of 24 months after all logs have..., they shall be replaced. If the log is cut into two or more segments, each segment shall be identified... preserve identification of log pieces shall not apply to logs cut into two or more segments as a part of...
Surgeon and type of anesthesia predict variability in surgical procedure times.
Strum, D P; Sampson, A R; May, J H; Vargas, L G
2000-05-01
Variability in surgical procedure times increases the cost of healthcare delivery by increasing both the underutilization and overutilization of expensive surgical resources. To reduce variability in surgical procedure times, we must identify and study its sources. Our data set consisted of all surgeries performed over a 7-yr period at a large teaching hospital, resulting in 46,322 surgical cases. To study factors associated with variability in surgical procedure times, data mining techniques were used to segment and focus the data so that the analyses would be both technically and intellectually feasible. The data were subdivided into 40 representative segments of manageable size and variability based on headers adopted from the common procedural terminology classification. Each data segment was then analyzed using a main-effects linear model to identify and quantify specific sources of variability in surgical procedure times. The single most important source of variability in surgical procedure times was surgeon effect. Type of anesthesia, age, gender, and American Society of Anesthesiologists risk class were additional sources of variability. Intrinsic case-specific variability, unexplained by any of the preceding factors, was found to be highest for shorter surgeries relative to longer procedures. Variability in procedure times among surgeons was a multiplicative function (proportionate to time) of surgical time and total procedure time, such that as procedure times increased, variability in surgeons' surgical time increased proportionately. Surgeon-specific variability should be considered when building scheduling heuristics for longer surgeries. Results concerning variability in surgical procedure times due to factors such as type of anesthesia, age, gender, and American Society of Anesthesiologists risk class may be extrapolated to scheduling in other institutions, although specifics on individual surgeons may not. This research identifies factors associated with variability in surgical procedure times, knowledge of which may ultimately be used to improve surgical scheduling and operating room utilization.
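A hedged sketch of a main-effects linear model for one data segment, using hypothetical column names and a log-transformed response; it illustrates the type of model described, not the authors' exact specification:

```python
import pandas as pd
import statsmodels.formula.api as smf

# df is a hypothetical data frame for one CPT-based segment, with columns:
# log_time (log of surgical procedure time), surgeon, anesthesia, asa_class,
# age and gender. Log-transforming the duration is a common choice for
# right-skewed procedure times, not necessarily the authors' model.
def fit_segment_model(df: pd.DataFrame):
    """Fit a main-effects linear model for one data segment and return it."""
    return smf.ols(
        "log_time ~ C(surgeon) + C(anesthesia) + C(asa_class) + age + C(gender)",
        data=df,
    ).fit()

# Example: inspect how much variability each factor explains
# print(fit_segment_model(df).summary())
```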
Harima, Hirofumi; Hamabe, Kouichi; Hisano, Fusako; Matsuzaki, Yuko; Itoh, Tadahiko; Sanuki, Kazutoshi; Sakaida, Isao
2018-05-23
An 89-year-old man was referred to our hospital for treatment of hepatolithiasis causing recurrent cholangitis. He had undergone a prior Whipple procedure. Computed tomography demonstrated left-sided hepatolithiasis. First, we conducted peroral direct cholangioscopy (PDCS) using an ultraslim endoscope. Although PDCS was successfully conducted, it was unsuccessful in removing all the stones. The stones located in the B2 segment were difficult to remove because the endoscope could not be inserted deeply into this segment due to the small size of the intrahepatic bile duct. Next, we substituted the endoscope with an upper gastrointestinal endoscope. After positioning the endoscope, the SpyGlass digital system (SPY-DS) was successfully inserted deep into the B2 segment. Upon visualizing the residual stones, we conducted SPY-DS-guided electrohydraulic lithotripsy. The stones were disintegrated and completely removed. In cases of PDCS failure, a treatment strategy using the SPY-DS can be considered for patients with hepatolithiasis after a Whipple procedure.
A multiscale Markov random field model in wavelet domain for image segmentation
NASA Astrophysics Data System (ADS)
Dai, Peng; Cheng, Yu; Wang, Shengchun; Du, Xinyu; Wu, Dan
2017-07-01
The human vision system has abilities for feature detection, learning and selective attention, with properties of hierarchy and bidirectional connection in the form of neural populations. In this paper, a multiscale Markov random field model in the wavelet domain is proposed by mimicking some image processing functions of the vision system. For an input scene, our model provides sparse representations using wavelet transforms and extracts its topological organization using an MRF. In addition, the hierarchy property of the vision system is simulated using a pyramid framework in our model. There are two information flows in our model, i.e., a bottom-up procedure to extract input features and a top-down procedure to provide feedback control. The two procedures are controlled simply by two pyramidal parameters, and some Gestalt laws are also integrated implicitly. Equipped with such biologically inspired properties, our model can be used to accomplish different image segmentation tasks, such as edge detection and region segmentation.
Precise Alignment and Permanent Mounting of Thin and Lightweight X-ray Segments
NASA Technical Reports Server (NTRS)
Biskach, Michael P.; Chan, Kai-Wing; Hong, Melinda N.; Mazzarella, James R.; McClelland, Ryan S.; Norman, Michael J.; Saha, Timo T.; Zhang, William W.
2012-01-01
To provide observations to support current research efforts in high energy astrophysics, future X-ray telescope designs must provide matching or better angular resolution while significantly increasing the total collecting area. In such a design the permanent mounting of thin and lightweight segments is critical to the overall performance of the complete X-ray optic assembly. The thin and lightweight segments used in the assembly of the modules are designed to maintain and/or exceed the resolution of existing X-ray telescopes while providing a substantial increase in collecting area. Such thin and delicate X-ray segments are easily distorted and yet must be aligned to the arcsecond level and retain accurate alignment for many years. The Next Generation X-ray Optic (NGXO) group at NASA Goddard Space Flight Center has designed, assembled, and implemented new hardware and procedures with the short-term goal of aligning three pairs of X-ray segments in a technology demonstration module while maintaining 10 arcsec alignment through environmental testing, as part of the eventual design and construction of a full-sized module capable of housing hundreds of X-ray segments. The recent attempts at multiple segment-pair alignment and permanent mounting are described along with an overview of the procedure used. A look at what the next year will bring for the alignment and permanent segment mounting effort illustrates some of the challenges left to overcome before an attempt to populate a full-sized module can begin.
Automated 3D renal segmentation based on image partitioning
NASA Astrophysics Data System (ADS)
Yeghiazaryan, Varduhi; Voiculescu, Irina D.
2016-03-01
Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context, and still comes at vast user time expense. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. While the procedure is fully automated, our tools also provide a light level of manual editing.
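For reference, the volume-based agreement measures quoted above can be computed directly from binary masks; a minimal sketch:

```python
import numpy as np

def overlap_scores(automatic_mask, manual_mask):
    """Volume-overlap agreement between an automated and a manual segmentation.

    Both inputs are boolean volumes of the same shape.
    Returns (dice, jaccard), where Dice = 2*|A and B| / (|A| + |B|) and
    Jaccard = |A and B| / |A or B|.
    """
    a = np.asarray(automatic_mask, bool)
    b = np.asarray(manual_mask, bool)
    inter = np.logical_and(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())
    jaccard = inter / np.logical_or(a, b).sum()
    return dice, jaccard
```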
Adaptive segmentation of cerebrovascular tree in time-of-flight magnetic resonance angiography.
Hao, J T; Li, M L; Tang, F L
2008-01-01
Accurate segmentation of the human vasculature is an important prerequisite for a number of clinical procedures, such as diagnosis, image-guided neurosurgery and pre-surgical planning. In this paper, an improved statistical approach to extracting the whole cerebrovascular tree in time-of-flight magnetic resonance angiography is proposed. Firstly, in order to obtain a more accurate segmentation result, a localized observation model is proposed instead of defining the observation model over the entire dataset. Secondly, for the binary segmentation, an improved Iterated Conditional Modes (ICM) algorithm is presented to accelerate the segmentation process. The experimental results showed that the proposed algorithm can obtain more satisfactory segmentation results while requiring less processing time than conventional approaches.
Naseri, H; Homaeinezhad, M R; Pourkhajeh, H
2013-09-01
The major aim of this study is to describe a unified procedure for detecting noisy segments and spikes in transduced signals with a cyclic but non-stationary periodic nature. According to this procedure, the cycles of the signal (onset and offset locations) are detected. Then, the cycles are clustered into a finite number of groups based on appropriate geometrical- and frequency-based time series. Next, the median template of each time series of each cluster is calculated. Afterwards, a correlation-based technique is devised for comparing a test cycle feature with the associated time series of each cluster. Finally, by applying a suitably chosen threshold to the calculated correlation values, a segment is declared either clean or noisy. As a key merit of this research, the procedure can provide decision support for accurately choosing orthogonal-expansion-based filtering or removing noisy segments. In this paper, the application of the proposed method is comprehensively described by applying it to phonocardiogram (PCG) signals to find noisy cycles. The database consists of 126 records from several patients of a domestic research station, acquired with a 3M Littmann(®) 3200 electronic stethoscope at a 4 kHz sampling frequency. By applying the noisy-segment detection algorithm to this database, a sensitivity of Se = 91.41% and a positive predictive value of PPV = 92.86% were obtained based on physicians' assessments. Copyright © 2013 Elsevier Ltd. All rights reserved.
Effects of CT resolution and radiodensity threshold on the CFD evaluation of nasal airflow.
Quadrio, Maurizio; Pipolo, Carlotta; Corti, Stefano; Messina, Francesco; Pesci, Chiara; Saibene, Alberto M; Zampini, Samuele; Felisati, Giovanni
2016-03-01
The article focuses on the robustness of a CFD-based procedure for the quantitative evaluation of nasal airflow. The ability of CFD to yield robust results despite unavoidable procedural and modeling inaccuracies must be demonstrated before this tool can become part of clinical practice in this field. The present article specifically addresses the sensitivity of the CFD procedure to the spatial resolution of the available CT scans, as well as to the choice of the segmentation level of the CT images. We found no critical problems concerning these issues; nevertheless, the choice of the segmentation level is potentially delicate if carried out by an untrained operator.
Bansal, Ankur; Sinha, Rahul Janak; Jhanwar, Ankur; Prakash, Gaurav; Purkait, Bimalesh; Singh, Vishwajeet
2017-01-01
Objective: The incidence of ureteral stricture is showing a rising trend due to the increased use of laparoscopic and upper urinary tract endoscopic procedures. The Boari flap is the preferred method of repairing long-segment ureteral defects of 8–12 cm. The procedure has evolved from the classical open (transperitoneal and retroperitoneal) method to laparoscopic surgery and, recently, robotic surgery. The laparoscopic approach is cosmetically appealing, less morbid and associated with a shorter hospital stay. In this case series, we report our experience of performing laparoscopic ureteral reimplantation with a Boari flap in 3 patients. Material and methods: This prospective study was conducted between January 2011 and December 2014. Patients with a long-segment ureteral defect who had undergone laparoscopic Boari flap reconstruction were included in the study. The outcome of laparoscopic ureteral reimplantation with a Boari flap for the management of long-segment ureteral defects was evaluated. Results: The procedure was performed on 3 patients, with a male to female ratio of 1:2. One patient had bilateral ureteral strictures and the other two patients had left ureteral strictures. The mean length of the ureteral stricture was 8.6 cm (range 8.2–9.2 cm). The mean operative time was 206 min (190 to 220 min). The average estimated blood loss was 100 mL (range 90–110 mL) and the mean hospital stay was 6 days (range 5 to 7 days). The mean follow-up was 19 months (range 17–22 months). None of the patients experienced any procedure-related complication in the perioperative period. Conclusion: Laparoscopic ureteral reimplantation with a Boari flap is safe, feasible and has excellent long-term results. However, the procedure is technically challenging and requires extensive experience in intracorporeal suturing. PMID:28861304
Efficient segmentation of 3D fluoroscopic datasets from mobile C-arm
NASA Astrophysics Data System (ADS)
Styner, Martin A.; Talib, Haydar; Singh, Digvijay; Nolte, Lutz-Peter
2004-05-01
The emerging mobile fluoroscopic 3D technology linked with a navigation system combines the advantages of CT-based and C-arm-based navigation. The intra-operative, automatic segmentation of 3D fluoroscopy datasets enables the combined visualization of surgical instruments and anatomical structures for enhanced planning, surgical eye-navigation and landmark digitization. We performed a thorough evaluation of several segmentation algorithms using a large set of data from different anatomical regions and man-made phantom objects. The analyzed segmentation methods include automatic thresholding, morphological operations, an adapted region growing method and an implicit 3D geodesic snake method. In regard to computational efficiency, all methods performed within acceptable limits on a standard desktop PC (30 s to 5 min). In general, the best results were obtained with datasets from long bones, followed by extremities. The segmentations of spine, pelvis and shoulder datasets were generally of poorer quality. As expected, the threshold-based methods produced the worst results. The combined thresholding and morphological operations method was considered appropriate for a smaller set of clean images. The region growing method generally performed much better in regard to computational efficiency and segmentation correctness, especially for datasets of joints and of lumbar and cervical spine regions. The less efficient implicit snake method was additionally able to remove wrongly segmented skin tissue regions. This study presents a step towards efficient intra-operative segmentation of 3D fluoroscopy datasets, but there is room for improvement. Next, we plan to study model-based approaches for datasets from the knee and hip joint region, which would then be applied to all anatomical regions in our continuing development of an ideal segmentation procedure for 3D fluoroscopic images.
Morphology-based three-dimensional segmentation of coronary artery tree from CTA scans
NASA Astrophysics Data System (ADS)
Banh, Diem Phuc T.; Kyprianou, Iacovos S.; Paquerault, Sophie; Myers, Kyle J.
2007-03-01
We developed an algorithm based on a rule-based threshold framework to segment the coronary arteries from angiographic computed tomography (CTA) data. Computerized segmentation of the coronary arteries is a challenging procedure due to the presence of diverse anatomical structures surrounding the heart on cardiac CTA data. The proposed algorithm incorporates various levels of image processing and organ information including region, connectivity and morphology operations. It consists of three successive stages. The first stage involves the extraction of the three-dimensional scaffold of the heart envelope. This stage is semiautomatic requiring a reader to review the CTA scans and manually select points along the heart envelope in slices. These points are further processed using a surface spline-fitting technique to automatically generate the heart envelope. The second stage consists of segmenting the left heart chambers and coronary arteries using grayscale threshold, size and connectivity criteria. This is followed by applying morphology operations to further detach the left and right coronary arteries from the aorta. In the final stage, the 3D vessel tree is reconstructed and labeled using an Isolated Connected Threshold technique. The algorithm was developed and tested on a patient coronary artery CTA that was graciously shared by the Department of Radiology of the Massachusetts General Hospital. The test showed that our method constantly segmented the vessels above 79% of the maximum gray-level and automatically extracted 55 of the 58 coronary segments that can be seen on the CTA scan by a reader. These results are an encouraging step toward our objective of generating high resolution models of the male and female heart that will be subsequently used as phantoms for medical imaging system optimization studies.
Golbaz, Isabelle; Ahlers, Christian; Goesseringer, Nina; Stock, Geraldine; Geitzenauer, Wolfgang; Prünte, Christian; Schmidt-Erfurth, Ursula Margarethe
2011-03-01
This study compared automatic and manual segmentation modalities in the retina of healthy eyes using high-definition optical coherence tomography (HD-OCT). Twenty retinas in 20 healthy individuals were examined using an HD-OCT system (Carl Zeiss Meditec, Inc.). Three-dimensional imaging was performed with an axial resolution of 6 μm at a maximum scanning speed of 25,000 A-scans/second. Volumes of 6 × 6 × 2 mm were scanned. Scans were analysed using a MATLAB-based algorithm and a manual segmentation software system (3D-Doctor). The volume values calculated by the two methods were compared. Statistical analysis revealed a high correlation between the automatic and manual modes of segmentation. The automatic mode of measuring retinal volume and the corresponding three-dimensional images provided results similar to the manual segmentation procedure. Both methods were able to visualize retinal and subretinal features accurately. This study compared two methods of assessing retinal volume using HD-OCT scans in healthy retinas. Both methods were able to provide realistic volumetric data when applied to raster scan sets. Manual segmentation methods represent an adequate tool with which to control automated processes and to identify clinically relevant structures, whereas automatic procedures will be needed to obtain data in larger patient populations. © 2009 The Authors. Journal compilation © 2009 Acta Ophthalmol.
Automated methods for hippocampus segmentation: the evolution and a review of the state of the art.
Dill, Vanderson; Franco, Alexandre Rosa; Pinho, Márcio Sarroglia
2015-04-01
The segmentation of the hippocampus in Magnetic Resonance Imaging (MRI) has been an important procedure to diagnose and monitor several clinical situations. The precise delineation of the borders of this brain structure makes it possible to obtain a measure of its volume and an estimate of its shape, which can be used to diagnose some diseases, such as Alzheimer's disease, schizophrenia and epilepsy. As the manual segmentation procedure in three-dimensional images is highly time consuming and its reproducibility is low, automated methods introduce substantial gains. On the other hand, the implementation of those methods is a challenge because of the low contrast of this structure relative to the neighboring areas of the brain. Within this context, this research presents a review of the evolution of automated methods for the segmentation of the hippocampus in MRI. Many proposed methods for segmentation of the hippocampus have been published in leading journals in the medical image processing area. This paper describes these methods, presenting the techniques used and quantitatively comparing the methods based on the Dice Similarity Coefficient. Finally, we present an evaluation of those methods considering the degree of user intervention, computational cost, segmentation accuracy and feasibility of application in a clinical routine.
Accurate determination of segmented X-ray detector geometry
Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; ...
2015-10-22
Recent advances in X-ray detector technology have resulted in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. Accurate determination of the location of detector elements relative to the beam-sample interaction point is critical for many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations predicted from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. Furthermore, we show that the refined detector geometry greatly improves the results of experiments.
Hamraz, Hamid; Contreras, Marco A; Zhang, Jun
2017-07-28
Airborne laser scanning (LiDAR) point clouds over large forested areas can be processed to segment individual trees and subsequently extract tree-level information. Existing segmentation procedures typically detect more than 90% of overstory trees, yet they barely detect 60% of understory trees because of the occlusion effect of higher canopy layers. Although understory trees provide limited financial value, they are an essential component of ecosystem functioning by offering habitat for numerous wildlife species and influencing stand development. Here we model the occlusion effect in terms of point density. We estimate the fractions of points representing different canopy layers (one overstory and multiple understory) and also pinpoint the required density for reasonable tree segmentation (where accuracy plateaus). We show that at a density of ~170 pt/m² understory trees can likely be segmented as accurately as overstory trees. Given the advancements of LiDAR sensor technology, point clouds will affordably reach this required density. Using modern computational approaches for big data, the denser point clouds can efficiently be processed to ultimately allow accurate remote quantification of forest resources. The methodology can also be adopted for other similar remote sensing or advanced imaging applications such as geological subsurface modelling or biomedical tissue analysis.
Estimation procedure of the efficiency of the heat network segment
NASA Astrophysics Data System (ADS)
Polivoda, F. A.; Sokolovskii, R. I.; Vladimirov, M. A.; Shcherbakov, V. P.; Shatrov, L. A.
2017-07-01
An extensive city heat network contains many segments, and each segment transfers heat energy with a different efficiency. This work proposes an original technical approach: the energy efficiency function of a heat network segment is evaluated and interpreted through two hyperbolic functions in the form of a transcendental equation. In essence, the problem studied is how the efficiency of the heat network changes with ambient temperature. Criterion dependences for evaluating the specified segment efficiency of the heat network and for finding the parameters for optimal control of heat supply to remote users were derived using methods of functional analysis. In general, the efficiency function of the heat network segment is represented by a multidimensional surface, which allows it to be illustrated graphically. It was shown that the inverse problem can be solved as well: the required heating-agent flow rate and temperature may be found from the specified segment efficiency and ambient temperature, and requirements for heat insulation and pipe diameters may be formulated. The calculation results were obtained in a strict analytical form, which allows the derived functional dependences to be investigated for extremums (maximums) under the given external parameters. It is concluded that this calculation procedure is expedient in two practically important cases: for an already built network, where only the heating-agent flow rate and pipe temperatures can be changed, and for a network under design, where the material parameters of the network can still be modified. The procedure allows the diameter and length of the pipes, the type of insulation, etc., to be refined. The pipe length may be treated as an independent parameter in the calculations; its optimization is performed according to other, economic, criteria specific to the project.
Development of tailorable advanced blanket insulation for advanced space transportation systems
NASA Technical Reports Server (NTRS)
Calamito, Dominic P.
1987-01-01
Two items of Tailorable Advanced Blanket Insulation (TABI) for Advanced Space Transportation Systems were produced. The first consisted of flat panels made from integrally woven, 3-D fluted core having parallel fabric faces and connecting ribs of Nicalon silicon carbide yarns. The triangular cross section of the flutes were filled with mandrels of processed Q-Fiber Felt. Forty panels were prepared with only minimal problems, mostly resulting from the unavailability of insulation with the proper density. Rigidizing the fluted fabric prior to inserting the insulation reduced the production time. The procedures for producing the fabric, insulation mandrels, and TABI panels are described. The second item was an effort to determine the feasibility of producing contoured TABI shapes from gores cut from flat, insulated fluted core panels. Two gores of integrally woven fluted core and single ply fabric (ICAS) were insulated and joined into a large spherical shape employing a tadpole insulator at the mating edges. The fluted core segment of each ICAS consisted of an Astroquartz face fabric and Nicalon face and rib fabrics, while the single ply fabric segment was Nicalon. Further development will be required. The success of fabricating this assembly indicates that this concept may be feasible for certain types of space insulation requirements. The procedures developed for weaving the ICAS, joining the gores, and coating certain areas of the fabrics are presented.
NASA Technical Reports Server (NTRS)
He, X. M.; Craven, B. M.
1993-01-01
For molecular crystals, a procedure is proposed for interpreting experimentally determined atomic mean square anisotropic displacement parameters (ADPs) in terms of the overall molecular vibration together with internal vibrations with the assumption that the molecule consists of a set of linked rigid segments. The internal librations (molecular torsional or bending modes) are described using the variable internal coordinates of the segmented body. With this procedure, the experimental ADPs obtained from crystal structure determinations involving six small molecules (sym-trinitrobenzene, adenosine, tetra-cyanoquinodimethane, benzamide, alpha-cyanoacetic acid hydrazide and N-acetyl-L-tryptophan methylamide) have been analyzed. As a consequence, vibrational corrections to the bond lengths and angles of the molecule are calculated as well as the frequencies and force constants for each internal torsional or bending vibration.
Versatile robotic probe calibration for position tracking in ultrasound imaging.
Bø, Lars Eirik; Hofstad, Erlend Fagertun; Lindseth, Frank; Hernes, Toril A N
2015-05-07
Within the field of ultrasound-guided procedures, there are a number of methods for ultrasound probe calibration. While these methods are usually developed for a specific probe, they are in principle easily adapted to other probes. In practice, however, the adaptation often proves tedious and this is impractical in a research setting, where new probes are tested regularly. Therefore, we developed a method which can be applied to a large variety of probes without adaptation. The method used a robot arm to move a plastic sphere submerged in water through the ultrasound image plane, providing a slow and precise movement. The sphere was then segmented from the recorded ultrasound images using a MATLAB programme and the calibration matrix was computed based on this segmentation in combination with tracking information. The method was tested on three very different probes demonstrating both great versatility and high accuracy.
4D-CT motion estimation using deformable image registration and 5D respiratory motion modeling.
Yang, Deshan; Lu, Wei; Low, Daniel A; Deasy, Joseph O; Hope, Andrew J; El Naqa, Issam
2008-10-01
Four-dimensional computed tomography (4D-CT) imaging technology has been developed for radiation therapy to provide tumor and organ images at the different breathing phases. In this work, a procedure is proposed for estimating and modeling the respiratory motion field from acquired 4D-CT imaging data and predicting tissue motion at the different breathing phases. The 4D-CT image data consist of series of multislice CT volume segments acquired in ciné mode. A modified optical flow deformable image registration algorithm is used to compute the image motion from the CT segments to a common full volume 3D-CT reference. This reference volume is reconstructed using the acquired 4D-CT data at the end-of-exhalation phase. The segments are optimally aligned to the reference volume according to a proposed a priori alignment procedure. The registration is applied using a multigrid approach and a feature-preserving image downsampling maxfilter to achieve better computational speed and higher registration accuracy. The registration accuracy is about 1.1 +/- 0.8 mm for the lung region according to our verification using manually selected landmarks and artificially deformed CT volumes. The estimated motion fields are fitted to two 5D (spatial 3D+tidal volume+airflow rate) motion models: forward model and inverse model. The forward model predicts tissue movements and the inverse model predicts CT density changes as a function of tidal volume and airflow rate. A leave-one-out procedure is used to validate these motion models. The estimated modeling prediction errors are about 0.3 mm for the forward model and 0.4 mm for the inverse model.
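A compact sketch of fitting the forward 5D model (tissue displacement as a linear function of tidal volume and airflow rate) by least squares; the array layout, variable names, and the absence of a constant term are assumptions made for illustration, not the authors' exact formulation.

```python
import numpy as np

def fit_forward_model(displacements, tidal_volume, airflow):
    """Least-squares fit of d(x) ~ alpha(x)*v + beta(x)*f per voxel.
    displacements: (n_phases, n_voxels, 3) motion vectors w.r.t. the reference;
    tidal_volume, airflow: (n_phases,) scalars for each breathing phase."""
    A = np.column_stack([tidal_volume, airflow])        # (n_phases, 2)
    n_phases, n_voxels, _ = displacements.shape
    d = displacements.reshape(n_phases, -1)             # (n_phases, 3*n_voxels)
    coeffs, *_ = np.linalg.lstsq(A, d, rcond=None)      # (2, 3*n_voxels)
    alpha = coeffs[0].reshape(n_voxels, 3)              # motion per unit tidal volume
    beta = coeffs[1].reshape(n_voxels, 3)                # motion per unit airflow
    return alpha, beta
```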
Self, Wesley H.; Speroff, Theodore; Grijalva, Carlos G.; McNaughton, Candace D.; Ashburn, Jacki; Liu, Dandan; Arbogast, Patrick G.; Russ, Stephan; Storrow, Alan B.; Talbot, Thomas R.
2012-01-01
Objectives Blood culture contamination is a common problem in the emergency department (ED) that leads to unnecessary patient morbidity and health care costs. The study objective was to develop and evaluate the effectiveness of a quality improvement (QI) intervention for reducing blood culture contamination in an ED. Methods The authors developed a QI intervention to reduce blood culture contamination in the ED and then evaluated its effectiveness in a prospective interrupted time series study. The QI intervention involved changing the technique of blood culture specimen collection from the traditional clean procedure to a new sterile procedure, with standardized use of sterile gloves and a new materials kit containing a 2% chlorhexidine skin antisepsis device, a sterile fenestrated drape, a sterile needle, and a procedural checklist. The intervention was implemented in a university-affiliated ED and its effect on blood culture contamination was evaluated by comparing the biweekly percentages of blood cultures contaminated during a 48-week baseline period (clean technique) and a 48-week intervention period (sterile technique), using segmented regression analysis with adjustment for secular trends and first-order autocorrelation. The goal was to achieve and maintain a contamination rate below 3%. Results During the baseline period, 321 out of 7,389 (4.3%) cultures were contaminated, compared to 111 of 6,590 (1.7%) during the intervention period (p < 0.001). In the segmented regression model, the intervention was associated with an immediate 2.9% (95% CI = 2.2% to 3.2%) absolute reduction in contamination. The contamination rate was maintained below 3% during each biweekly interval throughout the intervention period. Conclusions A QI assessment of ED blood culture contamination led to development of a targeted intervention to convert the process of blood culture collection from a clean to a fully sterile procedure. Implementation of this intervention led to an immediate and sustained reduction of contamination in an ED with a high baseline contamination rate. PMID:23570482
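A sketch of the segmented (interrupted time series) regression idea: baseline trend, immediate level change at the intervention, and a post-intervention trend change. The study additionally adjusted for first-order autocorrelation (an AR(1)/GLS-type model could be used for that); this plain OLS version and its variable names are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

def segmented_regression(pct_contaminated, n_baseline):
    """Interrupted time series regression of biweekly contamination percentages."""
    y = np.asarray(pct_contaminated, float)
    t = np.arange(len(y))
    post = (t >= n_baseline).astype(float)               # 1 during intervention period
    t_post = np.where(post == 1, t - n_baseline, 0.0)    # time since intervention
    X = sm.add_constant(np.column_stack([t, post, t_post]))
    fit = sm.OLS(y, X).fit()
    return fit    # coefficient on `post` estimates the immediate absolute change
```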
Segmentation of medical images using explicit anatomical knowledge
NASA Astrophysics Data System (ADS)
Wilson, Laurie S.; Brown, Stephen; Brown, Matthew S.; Young, Jeanne; Li, Rongxin; Luo, Suhuai; Brandt, Lee
1999-07-01
Knowledge-based image segmentation is defined in terms of the separation of image analysis procedures and representation of knowledge. Such architecture is particularly suitable for medical image segmentation, because of the large amount of structured domain knowledge. A general methodology for the application of knowledge-based methods to medical image segmentation is described. This includes frames for knowledge representation, fuzzy logic for anatomical variations, and a strategy for determining the order of segmentation from the modal specification. This method has been applied to three separate problems, 3D thoracic CT, chest X-rays and CT angiography. The application of the same methodology to such a range of applications suggests a major role in medical imaging for segmentation methods incorporating representation of anatomical knowledge.
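A toy sketch of the kind of separation the abstract describes: anatomical knowledge held in a frame-like structure with a fuzzy membership for anatomical variation, kept apart from the image-analysis code that consumes it. The class, fields, and example values are assumptions for illustration, not the authors' representation.

```python
from dataclasses import dataclass, field

@dataclass
class AnatomyFrame:
    """A minimal 'frame' of explicit anatomical knowledge: expected intensity range,
    a fuzzy size membership, and the structures that must be segmented first (from
    which a segmentation order can be derived)."""
    name: str
    intensity_range: tuple            # e.g. Hounsfield units for CT
    expected_volume_ml: float
    volume_tolerance_ml: float
    depends_on: list = field(default_factory=list)

    def volume_membership(self, measured_volume_ml):
        # triangular fuzzy membership: 1 at the expected volume, 0 beyond tolerance
        deviation = abs(measured_volume_ml - self.expected_volume_ml) / self.volume_tolerance_ml
        return max(0.0, 1.0 - deviation)

right_lung = AnatomyFrame("right_lung", (-1000, -400), 2500.0, 1500.0, depends_on=["thorax"])
```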
Characterization of Electrically Active Defects in Si Using CCD Image Sensors
1978-02-01
[Fragmentary list-of-figures and OCR residue from the report body: 422 reflection topographs of dislocation segments, dislocation loops, and a scratch on a CCD imager; a dark-current display of a CCD imager with 32 ms integration time; and a note that topographs were made of each slice using the developer described in Appendix D, after which the imagers were thinned using the procedure in a further appendix.]
Xu, Yue-Min; Qiao, Yong; Sa, Ying-Long; Wu, Den-Long; Zhang, Xin-Ru; Zhang, Jion; Gu, Bao-Jun; Jin, San-Bao
2007-04-01
We evaluated the applications and outcomes of substitution urethroplasty, using a variety of techniques, in 65 patients with complex, long-segment urethral strictures. From January 1995 to December 2005, 65 patients with complex urethral strictures >8cm in length underwent substitution urethroplasty. Of the 65 patients, 43 underwent one-stage urethral reconstruction using mucosal grafts (28 colonic mucosal graft, 12 buccal mucosal graft, and 3 bladder mucosal graft), 17 patients underwent one-stage urethroplasty using pedicle flaps, and 5 patients underwent staged Johanson's urethroplasty. The mean follow-up time was 4.8 yr (range; 0.8-10 yr), with an overall success rate of 76.92% (50 of 65 cases). Complications developed in 15 patients (23.08%) and included recurrent stricture in 7 (10.77%), urethrocutaneous fistula in 3 (4.62%), coloabdominal fistula in 1 (1.54%), penile chordee in 2 (3.08%), and urethral pseudodiverticulum in 2 (3.08%). Recurrent strictures and urethral pseudodiverticulum were treated successfully with a subsequent procedure, including repeat urethroplasty in six cases and urethrotomy or dilation in three. Coloabdominal fistula was corrected only by dressing change; five patients await further reconstruction. Penile skin, colonic mucosal, and buccal mucosal grafts are excellent materials for substitution urethroplasty. Colonic mucosal graft urethroplasty is a feasible procedure for complicated urethral strictures involving the entire or multiple portions of the urethra and the technique may also be considered for urethral reconstruction in patients in whom other conventional procedures failed.
An effective hand vein feature extraction method.
Li, Haigang; Zhang, Qian; Li, Chengdong
2015-01-01
Vein recognition is a relatively recent authentication method that offers the distinctive advantages of a biometric measure. This paper studies a specific procedure for extracting hand-back vein characteristics. Because the hand may be presented in different positions during image collection, a vein region orientation method is put forward so that the positioning area is the same for all hand positions. In addition, to eliminate pseudo-vein areas, the valley regional shape extraction operator is improved and combined with multiple segmentation algorithms. The images are segmented step by step, making the vein texture appear clear and accurate. Lastly, the segmented images are filtered, eroded, and refined, which removes most of the pseudo-vein information. Finally, a clear vein skeleton diagram is obtained, demonstrating the effectiveness of the algorithm. The paper also presents a hand-back vein region location method, which makes it possible to rotate and correct the image by working out the inclination of the contour at the side of the hand back.
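A minimal sketch of a step-by-step pipeline in the spirit of the description above, assuming near-infrared hand-back images in which veins appear darker than surrounding tissue; the function name, thresholding choice, and size/structuring-element parameters are illustrative, not the paper's.

```python
from skimage import filters, morphology

def vein_skeleton(gray):
    """Smooth, locally threshold, remove small pseudo-vein blobs, erode,
    and skeletonize to obtain a vein skeleton diagram."""
    smoothed = filters.gaussian(gray, sigma=2)
    mask = smoothed < filters.threshold_local(smoothed, block_size=35)
    mask = morphology.remove_small_objects(mask, min_size=100)   # drop pseudo-vein specks
    mask = morphology.binary_erosion(mask, morphology.disk(1))
    return morphology.skeletonize(mask)
```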
2003-09-17
The PF2 segment is an engineering model used to verify the flight design and the flight manufacturing procedures prior to the start of flight manufacturing. PF2 is also being used to verify the in-house operational procedures.
Patterns of Emphysema Heterogeneity
Valipour, Arschang; Shah, Pallav L.; Gesierich, Wolfgang; Eberhardt, Ralf; Snell, Greg; Strange, Charlie; Barry, Robert; Gupta, Avina; Henne, Erik; Bandyopadhyay, Sourish; Raffy, Philippe; Yin, Youbing; Tschirren, Juerg; Herth, Felix J.F.
2016-01-01
Background Although lobar patterns of emphysema heterogeneity are indicative of optimal target sites for lung volume reduction (LVR) strategies, the presence of segmental, or sublobar, heterogeneity is often underappreciated. Objective The aim of this study was to understand lobar and segmental patterns of emphysema heterogeneity, which may more precisely indicate optimal target sites for LVR procedures. Methods Patterns of emphysema heterogeneity were evaluated in a representative cohort of 150 severe (GOLD stage III/IV) chronic obstructive pulmonary disease (COPD) patients from the COPDGene study. High-resolution computerized tomography analysis software was used to measure tissue destruction throughout the lungs to compute heterogeneity (≥ 15% difference in tissue destruction) between (inter-) and within (intra-) lobes for each patient. Emphysema tissue destruction was characterized segmentally to define patterns of heterogeneity. Results Segmental tissue destruction revealed interlobar heterogeneity in the left lung (57%) and right lung (52%). Intralobar heterogeneity was observed in at least one lobe of all patients. No patient presented true homogeneity at a segmental level. There was true homogeneity across both lungs in 3% of the cohort when defining heterogeneity as ≥ 30% difference in tissue destruction. Conclusion Many LVR technologies for treatment of emphysema have focused on interlobar heterogeneity and target an entire lobe per procedure. Our observations suggest that a high proportion of patients with emphysema are affected by interlobar as well as intralobar heterogeneity. These findings prompt the need for a segmental approach to LVR in the majority of patients to treat only the most diseased segments and preserve healthier ones. PMID:26430783
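A short sketch of the heterogeneity definition used above (≥ 15% difference in tissue destruction) applied to per-segment destruction scores grouped by lobe; the data layout and variable names are assumptions for illustration.

```python
def heterogeneity_flags(lobe_segment_scores, cutoff=15.0):
    """Inter- and intralobar heterogeneity flags from per-segment emphysema
    tissue-destruction percentages.
    lobe_segment_scores: dict mapping lobe name -> list of segment scores."""
    lobe_means = {lobe: sum(s) / len(s) for lobe, s in lobe_segment_scores.items()}
    interlobar = max(lobe_means.values()) - min(lobe_means.values()) >= cutoff
    intralobar = {lobe: max(s) - min(s) >= cutoff for lobe, s in lobe_segment_scores.items()}
    return interlobar, intralobar
```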
The segmentation of bones in pelvic CT images based on extraction of key frames.
Yu, Hui; Wang, Haijun; Shi, Yao; Xu, Ke; Yu, Xuyao; Cao, Yuzhen
2018-05-22
Bone segmentation is important in computed tomography (CT) imaging of the pelvis, which assists physicians in the early diagnosis of pelvic injury, in planning operations, and in evaluating the effects of surgical treatment. This study developed a new algorithm for the accurate, fast, and efficient segmentation of the pelvis. The proposed method consists of two main parts: the extraction of key frames and the segmentation of pelvic CT images. Key frames were extracted based on pixel difference, mutual information and normalized correlation coefficient. In the pelvis segmentation phase, skeleton extraction from CT images and a marker-based watershed algorithm were combined to segment the pelvis. To meet the requirements of clinical application, physician's judgment is needed. Therefore the proposed methodology is semi-automated. In this paper, 5 sets of CT data were used to test the overlapping area, and 15 CT images were used to determine the average deviation distance. The average overlapping area of the 5 sets was greater than 94%, and the minimum average deviation distance was approximately 0.58 pixels. In addition, the key frame extraction efficiency and the running time of the proposed method were evaluated on 20 sets of CT data. For each set, approximately 13% of the images were selected as key frames, and the average processing time was approximately 2 min (the time for manual marking was not included). The proposed method is able to achieve accurate, fast, and efficient segmentation of pelvic CT image sequences. Segmentation results not only provide an important reference for early diagnosis and decisions regarding surgical procedures, they also offer more accurate data for medical image registration, recognition and 3D reconstruction.
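A simplified sketch of the key-frame idea: start a new key frame whenever the current slice is no longer similar to the previous key frame. The paper combines pixel difference, mutual information, and the normalized correlation coefficient; this version uses only the latter, and the threshold is an illustrative assumption.

```python
import numpy as np

def normalized_correlation(a, b):
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_key_frames(slices, ncc_threshold=0.95):
    """Return indices of key frames in a CT slice sequence."""
    keys = [0]
    for i in range(1, len(slices)):
        if normalized_correlation(slices[keys[-1]], slices[i]) < ncc_threshold:
            keys.append(i)
    return keys
```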
Zhong, Zhao-Ming; Deviren, Vedat; Tay, Bobby; Burch, Shane; Berven, Sigurd H
2017-05-01
A potential long-term complication of lumbar fusion is the development of adjacent segment disease (ASD), which may necessitate a second surgery and adversely affect outcomes. The objective of this study was to determine the incidence of ASD following instrumented fusion in adult patients with lumbar spondylolisthesis and to identify the risk factors for this complication. We retrospectively assessed adult patients who had undergone decompression and instrumented fusion for lumbar spondylolisthesis between January 2006 and December 2012. The incidence of ASD was analyzed. Potential risk factors included patient-related factors, surgery-related factors, and radiographic variables such as sagittal alignment, preexisting disc degeneration, and spinal stenosis at the adjacent segment. A total of 154 patients (mean age, 58.4 years) were included. Mean duration of follow-up was 28.6 months. Eighteen patients (11.7%) underwent a reoperation for ASD; 15 patients had reoperation at the cranial adjacent segment and 3 at the caudal adjacent segment. Simultaneous decompression at the adjacent segment (p=0.002) and preexisting spinal stenosis at the cranial adjacent segment (p=0.01) were identified as risk factors for ASD. The occurrence of ASD was not affected by patient-related factors, the types, grades and levels of spondylolisthesis, surgical approach, fusion procedures, levels of fusion, number of levels fused, types of bone graft, use of bone morphogenetic proteins, sagittal alignment, preexisting adjacent disc degeneration, or preexisting spinal stenosis at caudal adjacent segments. Our findings suggest that the overall incidence of ASD is 11.7% in adult patients with lumbar spondylolisthesis after decompression and instrumented fusion at a mean follow-up of 28.6 months, and that simultaneous decompression at the adjacent segment and preexisting spinal stenosis at the cranial adjacent segment are risk factors for ASD. Copyright © 2017. Published by Elsevier B.V.
Cyanotic Premature Babies: A Videodisc-Based Program
Tinsley, L.R.; Ashton, G.C.; Boychuk, R.B.; Easa, D.J.
1989-01-01
This program for the IBM InfoWindow system is designed to assist medical students and pediatric residents with diagnosis and management of premature infants exhibiting cyanosis. The program consists of six diverse case simulations, with additional information available on diagnosis, procedures, and relevant drugs. Respiratory difficulties accompanied by cyanosis are a common problem in premature infants at or just after birth, but the full diversity of causes is rarely seen in a short training period. The purpose of the program is to assist the student or resident with diagnosis and management of a variety of conditions which they may or may not see during their training. The opening menu permits selection from six cases, covering (1) respiratory distress syndrome proceeding through patent ductus arteriosus to pneumothorax, (2) a congenital heart disorder, (3) sepsis/pneumonia, (4) persistent fetal circulation, (5) diaphragmatic hernia, and (6) tracheo-esophageal fistula. In each case the student is provided with relevant introductory information and must then proceed with diagnosis and management. At each decision point the student may view information about relevant procedures, obtain assistance with diagnosis, or see information about useful drugs. Segments between decision points may be repeated if required. Provision is made for backtracking and review of instructional segments. The program is written in IBM's InfoWindow Presentation System authoring language and the video segments are contained on one side of a standard 12″ laserdisc. The program runs on IBM's InfoWindow System, with the touch screen used to initiate all student actions. The extensive graphics in the program were developed with Storyboard Plus, using the 640×350 resolution mode. This program is one of a number being developed for the Health Sciences Interactive Videodisc Consortium, and was funded in part by IBM Corporation.
NASA Astrophysics Data System (ADS)
Buerger, C.; Lorenz, C.; Babic, D.; Hoppenbrouwers, J.; Homan, R.; Nachabe, R.; Racadio, J. M.; Grass, M.
2017-03-01
Spinal fusion is a common procedure to stabilize the spinal column by fixating parts of the spine. In such procedures, metal screws are inserted through the patient's back into a vertebra, and the screws of adjacent vertebrae are connected by metal rods to generate a fixed bridge. In these procedures, 3D image guidance for intervention planning and outcome control is required. Here, for anatomical guidance, an automated approach for vertebra segmentation from C-arm CT images of the spine is introduced and evaluated. As a prerequisite, 3D C-arm CT images are acquired covering the vertebrae of interest. An automatic model-based segmentation approach is applied to delineate the outline of the vertebrae of interest. The segmentation approach is based on 24 partial models of the cervical, thoracic and lumbar vertebrae which aggregate information about (i) the basic shape itself, (ii) trained features for image-based adaptation, and (iii) potential shape variations. Since the volume data sets generated by the C-arm system are limited to a certain region of the spine, the target vertebra, and hence the initial model position, is assigned interactively. The approach was trained and tested on 21 human cadaver scans. A 3-fold cross validation against ground truth annotations yields overall mean segmentation errors of 0.5 mm for T1 to 1.1 mm for C6. The results are promising and show potential to support the clinician in pedicle screw path and rod planning to allow accurate and reproducible insertions.
UrQt: an efficient software for the Unsupervised Quality trimming of NGS data.
Modolo, Laurent; Lerat, Emmanuelle
2015-04-29
Quality control is a necessary step of any Next Generation Sequencing analysis. Although customary, this step still requires manual interventions to empirically choose tuning parameters according to various quality statistics. Moreover, current quality control procedures that provide a "good quality" data set, are not optimal and discard many informative nucleotides. To address these drawbacks, we present a new quality control method, implemented in UrQt software, for Unsupervised Quality trimming of Next Generation Sequencing reads. Our trimming procedure relies on a well-defined probabilistic framework to detect the best segmentation between two segments of unreliable nucleotides, framing a segment of informative nucleotides. Our software only requires one user-friendly parameter to define the minimal quality threshold (phred score) to consider a nucleotide to be informative, which is independent of both the experiment and the quality of the data. This procedure is implemented in C++ in an efficient and parallelized software with a low memory footprint. We tested the performances of UrQt compared to the best-known trimming programs, on seven RNA and DNA sequencing experiments and demonstrated its optimality in the resulting tradeoff between the number of trimmed nucleotides and the quality objective. By finding the best segmentation to delimit a segment of good quality nucleotides, UrQt greatly increases the number of reads and of nucleotides that can be retained for a given quality objective. UrQt source files, binary executables for different operating systems and documentation are freely available (under the GPLv3) at the following address: https://lbbe.univ-lyon1.fr/-UrQt-.html .
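A toy version of the trimming idea: find cut points (i, j) such that the kept segment looks informative (phred at or above the threshold) and the two flanking segments look unreliable. The real tool maximizes a likelihood under a probabilistic model; this brute-force score, and the function name, are illustrative only.

```python
def best_informative_segment(phred, threshold=20):
    """Return (i, j) so that phred[i:j] is the best-scoring informative segment."""
    n = len(phred)
    score = [1 if q >= threshold else -1 for q in phred]
    best, best_cut = float("-inf"), (0, n)
    for i in range(n + 1):
        for j in range(i, n + 1):
            inside = sum(score[i:j])
            outside = -(sum(score[:i]) + sum(score[j:]))
            if inside + outside > best:
                best, best_cut = inside + outside, (i, j)
    return best_cut          # keep the read between these positions
```

Because the flanking contribution is just the negative of everything outside the kept segment, the same optimum can be found in linear time with a maximum-subarray scan, which is what makes an exact search practical for millions of reads.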
Automatic partitioning of head CTA for enabling segmentation
NASA Astrophysics Data System (ADS)
Suryanarayanan, Srikanth; Mullick, Rakesh; Mallya, Yogish; Kamath, Vidya; Nagaraj, Nithin
2004-05-01
Radiologists perform a CT Angiography procedure to examine vascular structures and associated pathologies such as aneurysms. Volume rendering is used to exploit volumetric capabilities of CT that provides complete interactive 3-D visualization. However, bone forms an occluding structure and must be segmented out. The anatomical complexity of the head creates a major challenge in the segmentation of bone and vessel. An analysis of the head volume reveals varying spatial relationships between vessel and bone that can be separated into three sub-volumes: "proximal", "middle", and "distal". The "proximal" and "distal" sub-volumes contain good spatial separation between bone and vessel (carotid referenced here). Bone and vessel appear contiguous in the "middle" partition, which remains the most challenging region for segmentation. The partition algorithm is used to automatically identify these partition locations so that different segmentation methods can be developed for each sub-volume. The partition locations are computed using bone, image entropy, and sinus profiles along with a rule-based method. The algorithm is validated on 21 cases (varying volume sizes, resolution, clinical sites, pathologies) using ground truth identified visually. The algorithm is also computationally efficient, processing a 500+ slice volume in 6 seconds (an impressive 0.01 seconds per slice), which makes it an attractive algorithm for pre-processing large volumes. The partition algorithm is integrated into the segmentation workflow. Fast and simple algorithms are implemented for processing the "proximal" and "distal" partitions. Complex methods are restricted to only the "middle" partition. The partition-enabled segmentation has been successfully tested and results are shown from multiple cases.
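A small sketch of the kind of one-dimensional profiles (per-slice bone fraction and intensity entropy) that, combined with a sinus profile and a rule base, could locate the partition boundaries described above; the HU threshold, bin count, and function name are illustrative assumptions.

```python
import numpy as np

def slice_profiles(volume_hu, bone_threshold=300):
    """Per-slice bone fraction and intensity entropy along the scan axis.
    volume_hu: (n_slices, ny, nx) array of Hounsfield units."""
    bone_fraction, entropy = [], []
    for sl in volume_hu:
        bone_fraction.append(float((sl > bone_threshold).mean()))
        hist, _ = np.histogram(sl, bins=64)
        p = hist[hist > 0] / hist.sum()
        entropy.append(float(-(p * np.log2(p)).sum()))
    return np.array(bone_fraction), np.array(entropy)
```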
Medical image segmentation using genetic algorithms.
Maulik, Ujjwal
2009-03-01
Genetic algorithms (GAs) have been found to be effective in the domain of medical image segmentation, since the problem can often be mapped to one of search in a complex and multimodal landscape. The challenges in medical image segmentation arise due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. The resulting search space is therefore often noisy with a multitude of local optima. Not only does the genetic algorithmic framework prove to be effective in coming out of local optima, it also brings considerable flexibility into the segmentation procedure. In this paper, an attempt has been made to review the major applications of GAs to the domain of medical image segmentation.
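A toy illustration of the GA framework applied to segmentation: a population of candidate thresholds is evolved to maximize between-class variance (an Otsu-like fitness). Real GA-based segmenters evolve richer chromosomes (cluster centres, contour or deformable-model parameters); this sketch only shows how selection and mutation help escape local optima, and all parameters are illustrative.

```python
import numpy as np

def ga_threshold(image, pop_size=30, generations=50, seed=0):
    """Evolve a single segmentation threshold with a toy genetic algorithm."""
    rng = np.random.default_rng(seed)
    pix = image.ravel().astype(float)

    def fitness(t):
        fg, bg = pix[pix >= t], pix[pix < t]
        if fg.size == 0 or bg.size == 0:
            return 0.0
        w1, w2 = fg.size / pix.size, bg.size / pix.size
        return w1 * w2 * (fg.mean() - bg.mean()) ** 2

    pop = rng.uniform(pix.min(), pix.max(), pop_size)
    for _ in range(generations):
        scores = np.array([fitness(t) for t in pop])
        parents = pop[np.argsort(scores)[-(pop_size // 2):]]            # selection
        children = rng.choice(parents, size=pop_size - parents.size)    # reproduction
        children = children + rng.normal(0.0, 0.05 * pix.std() + 1e-12, children.size)  # mutation
        pop = np.concatenate([parents, children])
    return pop[int(np.argmax([fitness(t) for t in pop]))]
```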
Lesion identification using unified segmentation-normalisation models and fuzzy clustering
Seghier, Mohamed L.; Ramlackhansingh, Anil; Crinion, Jenny; Leff, Alexander P.; Price, Cathy J.
2008-01-01
In this paper, we propose a new automated procedure for lesion identification from single images based on the detection of outlier voxels. We demonstrate the utility of this procedure using artificial and real lesions. The scheme rests on two innovations: First, we augment the generative model used for combined segmentation and normalization of images, with an empirical prior for an atypical tissue class, which can be optimised iteratively. Second, we adopt a fuzzy clustering procedure to identify outlier voxels in normalised gray and white matter segments. These two advances suppress misclassification of voxels and restrict lesion identification to gray/white matter lesions respectively. Our analyses show a high sensitivity for detecting and delineating brain lesions with different sizes, locations, and textures. Our approach has important implications for the generation of lesion overlap maps of a given population and the assessment of lesion-deficit mappings. From a clinical perspective, our method should help to compute the total volume of lesion or to trace precisely lesion boundaries that might be pertinent for surgical or diagnostic purposes. PMID:18482850
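A minimal fuzzy c-means sketch for the outlier-detection step: voxels whose maximum membership to any cluster stays low fit none of the "typical" tissue clusters well and can be flagged as candidate lesion voxels. This is a generic clustering sketch on a 1-D feature, not the authors' exact scheme; names and parameters are assumptions.

```python
import numpy as np

def fuzzy_cmeans_1d(values, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Fuzzy c-means on a 1-D feature (e.g., normalised segment values per voxel)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, float).reshape(-1, 1)
    centers = rng.choice(x.ravel(), n_clusters, replace=False).reshape(-1, 1)
    for _ in range(n_iter):
        dist = np.abs(x - centers.T) + 1e-12                  # (n, c)
        u = dist ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)                     # memberships
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0).reshape(-1, 1)   # weighted cluster means
    outlier_score = 1.0 - u.max(axis=1)                        # high = fits no cluster well
    return u, centers.ravel(), outlier_score
```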
FLIS Procedures Manual. Document Identifier Code Input/Output Formats (Fixed Length). Volume 8.
1997-04-01
[Fragmentary extract from the fixed-length document identifier code (DIC) input/output format tables: notes that segment R may be repeated a maximum of three (3) times in order to acquire the required mix of segments or individual data elements, plus partial rows covering marketing input DICs, the next DRN of the appropriate segment, handling of the assigned NSN or PSCN, and notification transaction codes such as KFC, KRP, LFU, and SSR.]
Finding Acoustic Regularities in Speech: Applications to Phonetic Recognition
1988-12-01
[Fragmentary abstract text interleaved with bibliography residue (including a citation of Chomsky and Halle's The Sound Pattern of English, Harper and Row, 1968): the recoverable content states that speech is first described in terms of segments defined completely in acoustic terms, and that these acoustic segments are then related to the phonemes by a grammar determined using automated procedures operating on a set of training data.]
A Method for the Evaluation of Thousands of Automated 3D Stem Cell Segmentations
Bajcsy, Peter; Simon, Mylene; Florczyk, Stephen; Simon, Carl G.; Juba, Derek; Brady, Mary
2016-01-01
There is no segmentation method that performs perfectly with any data set in comparison to human segmentation. Evaluation procedures for segmentation algorithms become critical for their selection. The problems associated with segmentation performance evaluations and visual verification of segmentation results are exaggerated when dealing with thousands of 3D image volumes because of the amount of computation and manual inputs needed. We address the problem of evaluating 3D segmentation performance when segmentation is applied to thousands of confocal microscopy images (z-stacks). Our approach is to incorporate experimental imaging and geometrical criteria, and map them into computationally efficient segmentation algorithms that can be applied to a very large number of z-stacks. This is an alternative approach to considering existing segmentation methods and evaluating most state-of-the-art algorithms. We designed a methodology for 3D segmentation performance characterization that consists of design, evaluation and verification steps. The characterization integrates manual inputs from projected surrogate “ground truth” of statistically representative samples and from visual inspection into the evaluation. The novelty of the methodology lies in (1) designing candidate segmentation algorithms by mapping imaging and geometrical criteria into algorithmic steps, and constructing plausible segmentation algorithms with respect to the order of algorithmic steps and their parameters, (2) evaluating segmentation accuracy using samples drawn from probability distribution estimates of candidate segmentations, and (3) minimizing human labor needed to create surrogate “truth” by approximating z-stack segmentations with 2D contours from three orthogonal z-stack projections and by developing visual verification tools. We demonstrate the methodology by applying it to a dataset of 1253 mesenchymal stem cells. The cells reside on 10 different types of biomaterial scaffolds, and are stained for actin and nucleus yielding 128 460 image frames (on average 125 cells/scaffold × 10 scaffold types × 2 stains × 51 frames/cell). After constructing and evaluating six candidates of 3D segmentation algorithms, the most accurate 3D segmentation algorithm achieved an average precision of 0.82 and an accuracy of 0.84 as measured by the Dice similarity index where values greater than 0.7 indicate a good spatial overlap. A probability of segmentation success was 0.85 based on visual verification, and a computation time was 42.3 h to process all z-stacks. While the most accurate segmentation technique was 4.2 times slower than the second most accurate algorithm, it consumed on average 9.65 times less memory per z-stack segmentation. PMID:26268699
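A short sketch of the Dice similarity index used above as the accuracy measure; the function name and the convention of returning 1 for two empty masks are assumptions.

```python
import numpy as np

def dice_index(segmentation, reference):
    """Dice similarity index between a binary 3-D segmentation and a reference mask;
    values above ~0.7 are commonly read as good spatial overlap."""
    a, b = segmentation.astype(bool), reference.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```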
Oguro, Takeo; Fujii, Masatsune; Fuse, Koichi; Takahashi, Minoru; Fujita, Satoru; Kitazawa, Hitoshi; Sato, Masahito; Ikeda, Yoshio; Okabe, Masaaki; Aizawa, Yoshifusa
2015-11-01
Electrical alternans (EA) has not been fully studied in the current percutaneous coronary intervention (PCI) procedure. The purpose of this study was to evaluate visible EA and the morphology of the ST segment during PCI. The incidence of visible EA and ST-segment morphology were studied while the coronary artery was occluded for 20 seconds. When data were available, the relationship between EA and blood pressure was analyzed. The clinical and electrocardiographic data were compared with those of the age- and sex-matched controls. During balloon inflation, visible EA was observed in 5 of 306 patients (1.6%) in the last 2 years. EA was limited to PCI in the proximal left anterior descending artery. The ST segment was elevated to 10.1 ± 3.2 mm, followed by an alternating QRS complex with a lower ST segment (5.6 ± 1.9 mm; P = .0047) with characteristic ST-segment morphology, known as the lambda pattern. The mean age of the 5 patients was 68 ± 20 years, and 4 (80%) were men. After the release of inflation, the ST-segment level returned rapidly to baseline, followed by normalization of the J point. Compared with controls, the maximal elevated ST segment was significantly higher in patients with EA (5.7 ± 2.7 mm; P = .0028). Occlusion of the proximal left anterior descending artery with more severe ischemia seemed to be a prerequisite for developing EA. A higher ST segment was associated with a lower blood pressure and vice versa. A short period of ischemia during PCI may induce visible EA and alternating QRS complexes with a characteristic ST-segment morphology. Copyright © 2015 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
Boudissa, M; Orfeuvre, B; Chabanas, M; Tonetti, J
2017-09-01
The Letournel classification of acetabular fracture shows poor reproducibility in inexperienced observers, despite the introduction of 3D imaging. We therefore developed a method of semi-automatic segmentation based on CT data. The present prospective study aimed to assess: (1) whether semi-automatic bone-fragment segmentation increased the rate of correct classification; (2) if so, in which fracture types; and (3) feasibility using the open-source itksnap 3.0 software package without incurring extra cost for users. The hypothesis was that semi-automatic segmentation of acetabular fractures significantly increases the rate of correct classification by orthopedic surgery residents. Twelve orthopedic surgery residents classified 23 acetabular fractures. Six used conventional 3D reconstructions provided by the center's radiology department (conventional group) and 6 others used reconstructions obtained by semi-automatic segmentation using the open-source itksnap 3.0 software package (segmentation group). Bone fragments were identified by specific colors. Correct classification rates were compared between groups using the chi-squared test. Assessment was repeated 2 weeks later to determine intra-observer reproducibility. Correct classification rates were significantly higher in the "segmentation" group: 114/138 (83%) versus 71/138 (52%); P<0.0001. The difference was greater for simple (36/36 (100%) versus 17/36 (47%); P<0.0001) than complex fractures (79/102 (77%) versus 54/102 (53%); P=0.0004). Mean segmentation time per fracture was 27 ± 3 min [range, 21-35 min]. The segmentation group showed excellent intra-observer correlation coefficients, overall (ICC=0.88), and for simple (ICC=0.92) and complex fractures (ICC=0.84). Semi-automatic segmentation, identifying the various bone fragments, was effective in increasing the rate of correct acetabular fracture classification on the Letournel system by orthopedic surgery residents. It may be considered for routine use in education and training. Level of evidence III: prospective case-control study of a diagnostic procedure. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Procedural key steps in laparoscopic colorectal surgery, consensus through Delphi methodology.
Dijkstra, Frederieke A; Bosker, Robbert J I; Veeger, Nicolaas J G M; van Det, Marc J; Pierie, Jean Pierre E N
2015-09-01
While several procedural training curricula in laparoscopic colorectal surgery have been validated and published, none have focused on dividing surgical procedures into well-identified segments, which can be trained and assessed separately. This enables the surgeon and resident to focus on a specific segment, or combination of segments, of a procedure. Furthermore, it will provide a consistent and uniform method of training for residents rotating through different teaching hospitals. The goal of this study was to determine consensus on the key steps of laparoscopic right hemicolectomy and laparoscopic sigmoid colectomy among experts in our University Medical Center and affiliated hospitals. This will form the basis for the INVEST video-assisted side-by-side training curriculum. The Delphi method was used for determining consensus on key steps of both procedures. A list of 31 steps for laparoscopic right hemicolectomy and 37 steps for laparoscopic sigmoid colectomy was compiled from textbooks and national and international guidelines. In an online questionnaire, 22 experts in 12 hospitals within our teaching region were invited to rate all steps on a Likert scale on importance for the procedure. Consensus was reached in two rounds. Sixteen experts agreed to participate. Of these 16 experts, 14 (88%) completed the questionnaire for both procedures. Of the 14 who completed the first round, 13 (93%) completed the second round. Cronbach's alpha was 0.79 for the right hemicolectomy and 0.91 for the sigmoid colectomy, showing high internal consistency between the experts. For the right hemicolectomy, 25 key steps were established; for the sigmoid colectomy, 24 key steps were established. Expert consensus on the key steps for laparoscopic right hemicolectomy and laparoscopic sigmoid colectomy was reached. These key steps will form the basis for a video-assisted teaching curriculum.
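A small sketch of the internal-consistency statistic reported above. It treats each expert as an "item" and each candidate procedural step as an observation, which is one common arrangement; the abstract does not state the exact layout used, so this is an assumption.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for a (steps x experts) matrix of Likert ratings."""
    r = np.asarray(ratings, float)
    k = r.shape[1]                                   # number of experts (items)
    item_variances = r.var(axis=0, ddof=1).sum()
    total_variance = r.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)
```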
Multifractal-based nuclei segmentation in fish images.
Reljin, Nikola; Slavkovic-Ilic, Marijeta; Tapia, Coya; Cihoric, Nikola; Stankovic, Srdjan
2017-09-01
The method for nuclei segmentation in fluorescence in-situ hybridization (FISH) images, based on the inverse multifractal analysis (IMFA) is proposed. From the blue channel of the FISH image in RGB format, the matrix of Holder exponents, with one-by-one correspondence with the image pixels, is determined first. The following semi-automatic procedure is proposed: initial nuclei segmentation is performed automatically from the matrix of Holder exponents by applying predefined hard thresholding; then the user evaluates the result and is able to refine the segmentation by changing the threshold, if necessary. After successful nuclei segmentation, the HER2 (human epidermal growth factor receptor 2) scoring can be determined in usual way: by counting red and green dots within segmented nuclei, and finding their ratio. The IMFA segmentation method is tested over 100 clinical cases, evaluated by skilled pathologist. Testing results show that the new method has advantages compared to already reported methods.
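A very small sketch of the initial thresholding step of the semi-automatic procedure; whether nuclei correspond to low or high Holder exponent values, the starting threshold, and the function name are assumptions, and the user refinement loop is only described, not implemented.

```python
import numpy as np

def initial_nuclei_mask(holder_exponents, threshold):
    """Initial nuclei segmentation by hard-thresholding the matrix of Holder
    exponents (one value per pixel of the blue channel); in the semi-automatic
    procedure the user inspects the mask and adjusts `threshold` if necessary."""
    return np.asarray(holder_exponents) <= threshold
```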
Missing observations in multiyear rotation sampling designs
NASA Technical Reports Server (NTRS)
Gbur, E. E.; Sielken, R. L., Jr. (Principal Investigator)
1982-01-01
Because multiyear estimation of at-harvest stratum crop proportions is more efficient than single year estimation, the behavior of multiyear estimators in the presence of missing acquisitions was studied. Only the (worst) case when a segment proportion cannot be estimated for the entire year is considered. The effect of these missing segments on the variance of the at-harvest stratum crop proportion estimator is considered when missing segments are not replaced, and when missing segments are replaced by segments not sampled in previous years. The principal recommendations are to replace missing segments according to some specified strategy, and to use a sequential procedure for selecting a sampling design; i.e., choose an optimal two year design and then, based on the observed two year design after segment losses have been taken into account, choose the best possible three year design having the observed two year design as its parent.
Elhawary, Haytham; Oguro, Sota; Tuncali, Kemal; Morrison, Paul R.; Tatli, Servet; Shyn, Paul B.; Silverman, Stuart G.; Hata, Nobuhiko
2010-01-01
Rationale and Objectives To develop non-rigid image registration between pre-procedure contrast enhanced MR images and intra-procedure unenhanced CT images, to enhance tumor visualization and localization during CT-guided liver tumor cryoablation procedures. Materials and Methods After IRB approval, a non-rigid registration (NRR) technique was evaluated with different pre-processing steps and algorithm parameters and compared to a standard rigid registration (RR) approach. The Dice Similarity Coefficient (DSC), Target Registration Error (TRE), 95% Hausdorff distance (HD) and total registration time (minutes) were compared using a two-sided Student’s t-test. The entire registration method was then applied during five CT-guided liver cryoablation cases with the intra-procedural CT data transmitted directly from the CT scanner, with both accuracy and registration time evaluated. Results Selected optimal parameters for registration were section thickness of 5mm, cropping the field of view to 66% of its original size, manual segmentation of the liver, B-spline control grid of 5×5×5 and spatial sampling of 50,000 pixels. Mean 95% HD of 3.3mm (2.5x improvement compared to RR, p<0.05); mean DSC metric of 0.97 (13% increase); and mean TRE of 4.1mm (2.7x reduction) were measured. During the cryoablation procedure registration between the pre-procedure MR and the planning intra-procedure CT took a mean time of 10.6 minutes, the MR to targeting CT image took 4 minutes and MR to monitoring CT took 4.3 minutes. Mean registration accuracy was under 3.4mm. Conclusion Non-rigid registration allowed improved visualization of the tumor during interventional planning, targeting and evaluation of tumor coverage by the ice ball. Future work is focused on reducing segmentation time to make the method more clinically acceptable. PMID:20817574
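A generic B-spline non-rigid registration sketch in SimpleITK with a mutual-information metric. The 5x5x5 control grid and sparse sampling echo the parameters reported above, but this is not the authors' implementation (which also used cropping and manual liver segmentation as pre-processing), and the sampling percentage stands in for their fixed 50,000-pixel sampling; all settings here are assumptions.

```python
import SimpleITK as sitk

def nonrigid_register(ct_image, mr_image, mesh_size=(5, 5, 5)):
    """Register a pre-procedure MR (moving) onto an intra-procedure CT (fixed)."""
    fixed = sitk.Cast(ct_image, sitk.sitkFloat32)
    moving = sitk.Cast(mr_image, sitk.sitkFloat32)
    tx = sitk.BSplineTransformInitializer(fixed, list(mesh_size))
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.01)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsLBFGSB(numberOfIterations=100)
    reg.SetInitialTransform(tx, inPlace=True)
    return reg.Execute(fixed, moving)
```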
A segmentation approach for a delineation of terrestrial ecoregions
NASA Astrophysics Data System (ADS)
Nowosad, J.; Stepinski, T.
2017-12-01
Terrestrial ecoregions are the result of regionalization of land into homogeneous units of similar ecological and physiographic features. Terrestrial Ecoregions of the World (TEW) is a commonly used global ecoregionalization based on expert knowledge and in situ observations. Ecological Land Units (ELUs) is a global classification of 250 meters-sized cells into 4000 types on the basis of the categorical values of four environmental variables. ELUs are automatically calculated and reproducible but they are not a regionalization which makes them impractical for GIS-based spatial analysis and for comparison with TEW. We have regionalized terrestrial ecosystems on the basis of patterns of the same variables (land cover, soils, landform, and bioclimate) previously used in ELUs. Considering patterns of categorical variables makes segmentation and thus regionalization possible. Original raster datasets of the four variables are first transformed into regular grids of square-sized blocks of their cells called eco-sites. Eco-sites are elementary land units containing local patterns of physiographic characteristics and thus assumed to contain a single ecosystem. Next, eco-sites are locally aggregated using a procedure analogous to image segmentation. The procedure optimizes pattern homogeneity of all four environmental variables within each segment. The result is a regionalization of the landmass into land units characterized by uniform pattern of land cover, soils, landforms, climate, and, by inference, by uniform ecosystem. Because several disjoined segments may have very similar characteristics, we cluster the segments to obtain a smaller set of segment types which we identify with ecoregions. Our approach is automatic, reproducible, updatable, and customizable. It yields the first automatic delineation of ecoregions on the global scale. In the resulting vector database each ecoregion/segment is described by numerous attributes which make it a valuable GIS resource for global ecological and conservation studies.
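A small sketch of the pattern-based idea: describe each eco-site by a signature of its categorical cell values and merge adjacent eco-sites whose signatures are similar. A simple normalised histogram and histogram-intersection distance stand in for the pattern signatures actually used; names and the distance choice are assumptions.

```python
import numpy as np

def block_signature(block, n_categories):
    """Normalised category histogram of one eco-site (a square block of cells
    from a categorical raster such as land cover)."""
    hist = np.bincount(block.ravel(), minlength=n_categories).astype(float)
    return hist / hist.sum()

def signature_distance(p, q):
    """Histogram-intersection distance; adjacent eco-sites with a small distance
    would be aggregated into the same segment."""
    return 1.0 - np.minimum(p, q).sum()
```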
Maxillary segmental distraction in children with unilateral clefts of lip, palate, and alveolus.
Zemann, Wolfgang; Pichelmayer, Margit
2011-06-01
Alveolar clefts are commonly closed by a bone grafting procedure. In cases of wide clefts the deficiency of soft tissue in the cleft area may lead to wound dehiscence and loss of the bony graft. Segmental maxillary bony transfer has been mentioned to be useful in such cases. Standard distraction devices allow unidirectional movement of the transported segment. Ideally the distraction should strictly follow the dental arch. The aim of this study was to analyze distraction devices that were adapted to the individual clinical situation of the patients. The goal was to achieve a distraction strictly parallel to the dental arch. Six children with unilateral clefts of lip, palate, and alveolus between 12 and 13 years of age were included in the study. The width of the cleft was between 7 and 19 mm. Dental cast models were used to manufacture individual distraction devices that should allow a segmental bony transport strictly parallel to the dental arch. Segmental osteotomy was performed under general anesthesia. Distraction was started 5 days after surgery. All distracters were tooth fixed but supported by palatal inserted orthodontic miniscrews. In all patients, a closure of the alveolar cleft was achieved. Two patients required additional bone grafting after the distraction procedure. The distraction was strictly parallel to the dental arch in all cases. In 1 case a slight cranial displacement of the transported maxillary segment could be noticed, leading to minor modifications of the following distractors. Distraction osteogenesis is a proper method to close wide alveolar clefts. Linear segmental transport is required in the posterior part of the dental arch, whereas in the frontal part the bony transport should run strictly parallel to the dental arch. An exact guided segmental transport may reduce the postoperative orthodontic complexity. Copyright © 2011 Mosby, Inc. All rights reserved.
40 CFR 86.345-79 - Emission calculations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... gasoline-fueled engine test from the pre-test data. Apply the Y value to the K W equation for the entire test. (5) Calculate a separate Y value for each Diesel test segment from the pretest-segment data... New Gasoline-Fueled and Diesel-Fueled Heavy-Duty Engines; Gaseous Exhaust Test Procedures § 86.345-79...
NASA Technical Reports Server (NTRS)
Carrier, Alain C.; Aubrun, Jean-Noel
1993-01-01
New frequency response measurement procedures, on-line modal tuning techniques, and off-line modal identification algorithms are developed and applied to the modal identification of the Advanced Structures/Controls Integrated Experiment (ASCIE), a generic segmented optics telescope test-bed representative of future complex space structures. The frequency response measurement procedure uses all the actuators simultaneously to excite the structure and all the sensors to measure the structural response, so that all the transfer functions are measured simultaneously. Structural responses to sinusoidal excitations are measured and analyzed to calculate spectral responses. The spectral responses in turn are analyzed as the spectral data become available and, which is new, the results are used to maintain high quality measurements. Data acquisition, processing, and checking procedures are fully automated. As the acquisition of the frequency response progresses, an on-line algorithm keeps track of the actuator force distribution that maximizes the structural response to automatically tune to a structural mode when approaching a resonant frequency. This tuning is insensitive to delays, ill-conditioning, and nonproportional damping. Experimental results show that it is useful for modal surveys even in high modal density regions. For thorough modeling, a constructive procedure is proposed to identify the dynamics of a complex system from its frequency response with the minimization of a least-squares cost function as a desirable objective. This procedure relies on off-line modal separation algorithms to extract modal information and on least-squares parameter subset optimization to combine the modal results and globally fit the modal parameters to the measured data. The modal separation algorithms resolved a modal density of 5 modes/Hz in the ASCIE experiment. They promise to be useful in many challenging applications.
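A much simpler stand-in for the off-line modal extraction described above: estimating one mode's natural frequency and damping ratio from a measured frequency-response magnitude with the half-power (-3 dB) bandwidth. It is only valid for well-separated modes, so it would not resolve the 5 modes/Hz reported here; the function name and arguments are illustrative.

```python
import numpy as np

def half_power_mode_estimate(freqs, frf_magnitude):
    """Return (natural frequency, damping ratio) from one FRF magnitude peak."""
    k = int(np.argmax(frf_magnitude))
    fn, half = freqs[k], frf_magnitude[k] / np.sqrt(2.0)
    below_lo = np.where(frf_magnitude[:k] <= half)[0]
    below_hi = np.where(frf_magnitude[k:] <= half)[0]
    if below_lo.size == 0 or below_hi.size == 0:
        return fn, None                       # bandwidth not resolved by the data
    f1, f2 = freqs[below_lo[-1]], freqs[k + below_hi[0]]
    return fn, (f2 - f1) / (2.0 * fn)         # damping ratio ~ bandwidth / (2 fn)
```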
Bonmati, Ester; Hu, Yipeng; Gibson, Eli; Uribarri, Laura; Keane, Geri; Gurusami, Kurinchi; Davidson, Brian; Pereira, Stephen P; Clarkson, Matthew J; Barratt, Dean C
2018-06-01
Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provide a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes using manually segmented CT images and simulated ([Formula: see text]) or retrospective clinical ([Formula: see text]) EUS landmarks. The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value [Formula: see text]). The proposed simulation-based method to find optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.
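A sketch of a Monte Carlo TRE estimate in the spirit of the planning method: perturb the landmark localisations with Gaussian noise, register the noisy set back to the noise-free (CT-derived) landmarks, and report a high percentile of the error at the target. The isotropic noise model, the rigid point-based registration, and all names and values are illustrative assumptions, not the paper's exact simulation.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src landmarks onto dst."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

def tre_percentile(landmarks_mm, target_mm, sigma_mm=2.0, n_trials=1000, q=90, seed=0):
    """q-th percentile target registration error under simulated localisation noise."""
    rng = np.random.default_rng(seed)
    tres = []
    for _ in range(n_trials):
        noisy = landmarks_mm + rng.normal(0.0, sigma_mm, landmarks_mm.shape)
        R, t = rigid_fit(noisy, landmarks_mm)
        tres.append(np.linalg.norm(R @ target_mm + t - target_mm))
    return float(np.percentile(tres, q))
```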
Segmentation of multiple heart cavities in 3-D transesophageal ultrasound images.
Haak, Alexander; Vegas-Sánchez-Ferrero, Gonzalo; Mulder, Harriët W; Ren, Ben; Kirişli, Hortense A; Metz, Coert; van Burken, Gerard; van Stralen, Marijn; Pluim, Josien P W; van der Steen, Antonius F W; van Walsum, Theo; Bosch, Johannes G
2015-06-01
Three-dimensional transesophageal echocardiography (TEE) is an excellent modality for real-time visualization of the heart and monitoring of interventions. To improve the usability of 3-D TEE for intervention monitoring and catheter guidance, automated segmentation is desired. However, 3-D TEE segmentation is still a challenging task due to the complex anatomy with multiple cavities, the limited TEE field of view, and typical ultrasound artifacts. We propose to segment all cavities within the TEE view with a multi-cavity active shape model (ASM) in conjunction with a tissue/blood classification based on a gamma mixture model (GMM). 3-D TEE image data of twenty patients were acquired with a Philips X7-2t matrix TEE probe. Tissue probability maps were estimated by a two-class (blood/tissue) GMM. A statistical shape model containing the left ventricle, right ventricle, left atrium, right atrium, and aorta was derived from computed tomography angiography (CTA) segmentations by principal component analysis. ASMs of the whole heart and individual cavities were generated and consecutively fitted to tissue probability maps. First, an average whole-heart model was aligned with the 3-D TEE based on three manually indicated anatomical landmarks. Second, pose and shape of the whole-heart ASM were fitted by a weighted update scheme excluding parts outside of the image sector. Third, pose and shape of ASM for individual heart cavities were initialized by the previous whole heart ASM and updated in a regularized manner to fit the tissue probability maps. The ASM segmentations were validated against manual outlines by two observers and CTA derived segmentations. Dice coefficients and point-to-surface distances were used to determine segmentation accuracy. ASM segmentations were successful in 19 of 20 cases. The median Dice coefficient for all successful segmentations versus the average observer ranged from 90% to 71% compared with an inter-observer range of 95% to 84%. The agreement against the CTA segmentations was slightly lower with a median Dice coefficient between 85% and 57%. In this work, we successfully showed the accuracy and robustness of the proposed multi-cavity segmentation scheme. This is a promising development for intraoperative procedure guidance, e.g., in cardiac electrophysiology.
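A compact sketch of building the point-distribution (statistical shape) model by principal component analysis, the step performed here on the CTA-derived segmentations; the array layout and the variance-retention rule are assumptions for illustration.

```python
import numpy as np

def build_shape_model(aligned_shapes, variance_kept=0.98):
    """Mean shape plus principal modes of variation from pre-aligned training shapes.
    aligned_shapes: (n_shapes, 3 * n_points) row-stacked landmark coordinates."""
    mean_shape = aligned_shapes.mean(axis=0)
    X = aligned_shapes - mean_shape
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    variances = s ** 2 / (len(aligned_shapes) - 1)
    cum = np.cumsum(variances) / variances.sum()
    k = int(np.searchsorted(cum, variance_kept)) + 1
    return mean_shape, Vt[:k], variances[:k]    # mean, modes, per-mode variance

# a new shape instance is mean_shape + b @ modes, with each coefficient b_i
# typically constrained to about +/- 3 standard deviations of its mode
```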
Planning of electroporation-based treatments using Web-based treatment-planning software.
Pavliha, Denis; Kos, Bor; Marčan, Marija; Zupanič, Anže; Serša, Gregor; Miklavčič, Damijan
2013-11-01
Electroporation-based treatment combining high-voltage electric pulses and poorly permeant cytotoxic drugs, i.e., electrochemotherapy (ECT), is currently used for treating superficial tumor nodules by following standard operating procedures. Besides ECT, another electroporation-based treatment, nonthermal irreversible electroporation (N-TIRE), is also efficient at ablating deep-seated tumors. To perform ECT or N-TIRE of deep-seated tumors, following standard operating procedures is not sufficient, and patient-specific treatment planning is required for successful treatment. Treatment planning is required because of the use of individual long-needle electrodes and the diverse shape, size, and location of deep-seated tumors. Many institutions that already perform ECT of superficial metastases could benefit from treatment-planning software that would enable the preparation of patient-specific treatment plans. To this end, we have developed Web-based treatment-planning software for planning electroporation-based treatments that does not require prior engineering knowledge from the user (e.g., the clinician). The software includes algorithms for automatic tissue segmentation and, after segmentation, generation of a 3D model of the tissue. The procedure allows the user to define how the electrodes will be inserted. Finally, electric field distribution is computed, the position of electrodes and the voltage to be applied are optimized using the 3D model, and a downloadable treatment plan is made available to the user.
Vrooman, Henri A; Cocosco, Chris A; van der Lijn, Fedde; Stokking, Rik; Ikram, M Arfan; Vernooij, Meike W; Breteler, Monique M B; Niessen, Wiro J
2007-08-01
Conventional k-Nearest-Neighbor (kNN) classification, which has been successfully applied to classify brain tissue in MR data, requires training on manually labeled subjects. This manual labeling is a laborious and time-consuming procedure. In this work, a new fully automated brain tissue classification procedure is presented, in which kNN training is automated. This is achieved by non-rigidly registering the MR data with a tissue probability atlas to automatically select training samples, followed by a post-processing step to keep the most reliable samples. The accuracy of the new method was compared to rigid registration-based training and to conventional kNN-based segmentation using training on manually labeled subjects for segmenting gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) in 12 data sets. Furthermore, for all classification methods, the performance was assessed when varying the free parameters. Finally, the robustness of the fully automated procedure was evaluated on 59 subjects. The automated training method using non-rigid registration with a tissue probability atlas was significantly more accurate than rigid registration. For both automated training using non-rigid registration and for the manually trained kNN classifier, the difference with the manual labeling by observers was not significantly larger than inter-observer variability for all tissue types. From the robustness study, it was clear that, given an appropriate brain atlas and optimal parameters, our new fully automated, non-rigid registration-based method gives accurate and robust segmentation results. A similarity index was used for comparison with manually trained kNN. The similarity indices were 0.93, 0.92 and 0.92, for CSF, GM and WM, respectively. It can be concluded that our fully automated method using non-rigid registration may replace manual segmentation, and thus that automated brain tissue segmentation without laborious manual training is feasible.
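A short sketch of the kNN classification step. In the automated scheme described above, the training samples would come from voxels selected by non-rigid registration with a tissue probability atlas (plus the reliability post-processing), rather than manual labelling; here they are simply passed in, and the value of k and the feature choice are assumptions, not the paper's tuned settings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def classify_tissue(train_features, train_labels, test_features, k=15):
    """kNN classification of voxels into CSF / GM / WM given per-voxel feature vectors
    (e.g., intensity plus spatial coordinates)."""
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(np.asarray(train_features), np.asarray(train_labels))
    return knn.predict(np.asarray(test_features))
```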
Duque, Juan C; Tabbara, Marwan; Martinez, Laisel; Paez, Angela; Selman, Guillermo; Salman, Loay H; Velazquez, Omaida C; Vazquez-Padron, Roberto I
2018-04-01
Intimal hyperplasia has been historically associated with improper venous remodeling and stenosis after creation of an arteriovenous fistula. Recently, however, we showed that intimal hyperplasia by itself does not explain the failure of maturation of 2-stage arteriovenous fistulas. We seek to evaluate whether intimal hyperplasia plays a role in the development of focal stenosis of an arteriovenous fistula. This study compares intimal hyperplasia lesions in stenotic and nearby nonstenotic segments collected from the same arteriovenous fistula. Focal areas of stenosis were detected in the operating room in patients (n = 14) undergoing the second-stage vein transposition procedure. The entire vein was inspected, and areas of stenosis were visually located with the aid of manual palpation and hemodynamic changes in the vein peripheral and central to the narrowing. Stenotic and nonstenotic segments were documented by photography before tissue collection (14 tissue pairs). Intimal area and thickness, intima-media thickness, and intima to media area ratio were measured in hematoxylin and eosin stained cross-sections followed by pairwise statistical comparisons. The intimal area in stenotic and nonstenotic segments ranged from 1.25 to 11.61 mm² and 1.29 to 5.81 mm², respectively. There was no significant difference between these 2 groups (P=.26). Maximal intimal thickness (P=.22), maximal intima-media thickness (P=.13), and intima to media area ratio (P=.73) were also similar between both types of segments. This preliminary study indicates that postoperative intimal hyperplasia by itself is not associated with the development of focal venous stenosis in 2-stage fistulas. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Hatze, Herbert; Baca, Arnold
1993-01-01
The development of noninvasive techniques for the determination of biomechanical body segment parameters (volumes, masses, the three principal moments of inertia, the three local coordinates of the segmental mass centers, etc.) receives increasing attention from the medical sciences (e.g., orthopaedic gait analysis), bioengineering, sport biomechanics, and the various space programs. In the present paper, a novel method is presented for determining body segment parameters rapidly and accurately. It is based on the video-image processing of four different body configurations and a finite mass-element human body model. The four video images of the subject in question are recorded against a black background, thus permitting the application of shape recognition procedures incorporating edge detection and calibration algorithms. In this way, a total of 181 object space dimensions of the subject's body segments can be reconstructed and used as anthropometric input data for the mathematical finite mass-element body model. The latter comprises 17 segments (abdomino-thoracic, head-neck, shoulders, upper arms, forearms, hands, abdomino-pelvic, thighs, lower legs, feet) and enables the user to compute all the required segment parameters for each of the 17 segments by means of the associated computer program. The hardware requirements are an IBM-compatible PC (1 MB memory) operating under MS-DOS or PC-DOS (Version 3.1 onwards) and incorporating a VGA board with a feature connector for connecting it to a super video windows framegrabber board, for which a 16-bit slot must be available. In addition, a VGA monitor (50-70 Hz, horizontal scan rate at least 31.5 kHz), a common video camera and recorder, and a simple rectangular calibration frame are required. The advantage of the new method lies in its ease of application, its comparatively high accuracy, and in the rapid availability of the body segment parameters, which is particularly useful in clinical practice. An example of its practical application illustrates the technique.
Quantification of regional fat volume in rat MRI
NASA Astrophysics Data System (ADS)
Sacha, Jaroslaw P.; Cockman, Michael D.; Dufresne, Thomas E.; Trokhan, Darren
2003-05-01
Multiple initiatives in the pharmaceutical and beauty care industries are directed at identifying therapies for weight management. Body composition measurements are critical for such initiatives. Imaging technologies that can be used to measure body composition noninvasively include DXA (dual energy x-ray absorptiometry) and MRI (magnetic resonance imaging). Unlike other approaches, MRI provides the ability to perform localized measurements of fat distribution. Several factors complicate the automatic delineation of fat regions and quantification of fat volumes. These include motion artifacts, field non-uniformity, brightness and contrast variations, chemical shift misregistration, and ambiguity in delineating anatomical structures. We have developed an approach to deal practically with those challenges. The approach is implemented in a package, the Fat Volume Tool, for automatic detection of fat tissue in MR images of the rat abdomen, including automatic discrimination between abdominal and subcutaneous regions. We suppress motion artifacts using masking based on detection of implicit landmarks in the images. Adaptive object extraction is used to compensate for intensity variations. This approach enables us to perform fat tissue detection and quantification in a fully automated manner. The package can also operate in manual mode, which can be used for verification of the automatic analysis or for performing supervised segmentation. In supervised segmentation, the operator has the ability to interact with the automatic segmentation procedures to touch-up or completely overwrite intermediate segmentation steps. The operator's interventions steer the automatic segmentation steps that follow. This improves the efficiency and quality of the final segmentation. Semi-automatic segmentation tools (interactive region growing, live-wire, etc.) improve both the accuracy and throughput of the operator when working in manual mode. The quality of automatic segmentation has been evaluated by comparing the results of fully automated analysis to manual analysis of the same images. The comparison shows a high degree of correlation that validates the quality of the automatic segmentation approach.
[Anopexy according to Longo for hemorrhoids].
Ruppert, R
2016-11-01
The treatment for hemorrhoids ranges from conservative management to surgical procedures. The procedures are tailored to the individual grading of hemorrhoids and the individual complaints. The standard Goligher classification of the hemorrhoids is the basis for further treatment and no differentiation is made between segmental hemorrhoids and circular hemorrhoids. In the case of advanced circular hemorrhoid disease the surgical procedure with a stapler, so-called stapler anopexy, is the procedure of choice.
Estimates of Median Flows for Streams on the 1999 Kansas Surface Water Register
Perry, Charles A.; Wolock, David M.; Artman, Joshua C.
2004-01-01
The Kansas State Legislature, by enacting Kansas Statute KSA 82a-2001 et seq., mandated the criteria for determining which Kansas stream segments would be subject to classification by the State. One criterion for the selection as a classified stream segment is based on the statistic of median flow being equal to or greater than 1 cubic foot per second. As specified by KSA 82a-2001 et seq., median flows were determined from U.S. Geological Survey streamflow-gaging-station data by using the most-recent 10 years of gaged data (KSA) for each streamflow-gaging station. Median flows also were determined by using gaged data from the entire period of record (all-available hydrology, AAH). Least-squares multiple regression techniques were used, along with Tobit analyses, to develop equations for estimating median flows for uncontrolled stream segments. The drainage area of the gaging stations on uncontrolled stream segments used in the regression analyses ranged from 2.06 to 12,004 square miles. A logarithmic transformation of the data was needed to develop the best linear relation for computing median flows. In the regression analyses, the significant climatic and basin characteristics, in order of importance, were drainage area, mean annual precipitation, mean basin permeability, and mean basin slope. Tobit analyses of KSA data yielded a model standard error of prediction of 0.285 logarithmic units, and the best equations using Tobit analyses of AAH data had a model standard error of prediction of 0.250 logarithmic units. These regression equations and an interpolation procedure were used to compute median flows for the uncontrolled stream segments on the 1999 Kansas Surface Water Register. Measured median flows from gaging stations were incorporated into the regression-estimated median flows along the stream segments where available. The segments that were uncontrolled were interpolated using gaged data weighted according to the drainage area and the bias between the regression-estimated and gaged flow information. On controlled segments of Kansas streams, the median flow information was interpolated between gaging stations using only gaged data weighted by drainage area. Of the 2,232 total stream segments on the Kansas Surface Water Register, 34.5 percent of the segments had an estimated median streamflow of less than 1 cubic foot per second when the KSA analysis was used. When the AAH analysis was used, 36.2 percent of the segments had an estimated median streamflow of less than 1 cubic foot per second. This report supersedes U.S. Geological Survey Water-Resources Investigations Report 02-4292.
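The regression idea above can be illustrated with a small Python sketch: median flows and basin characteristics are log-transformed, fit with a linear model, and predictions are back-transformed. Ordinary least squares is used here for brevity, whereas the report used Tobit analysis to handle flows censored at zero; the numbers below are invented placeholders, not data from the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical gaged, uncontrolled segments (values are illustrative only)
drainage_area = np.array([2.06, 150.0, 2300.0, 12004.0])   # square miles
precipitation = np.array([18.0, 25.0, 33.0, 40.0])          # mean annual, inches
median_flow = np.array([0.4, 3.0, 60.0, 900.0])             # cubic feet per second

# Logarithmic transformation gives the best linear relation, as in the report
X = np.log10(np.column_stack([drainage_area, precipitation]))
y = np.log10(median_flow)
model = LinearRegression().fit(X, y)

# Estimate the median flow for an ungaged segment and back-transform
estimate = 10 ** model.predict(np.log10([[500.0, 28.0]]))[0]
print(f"estimated median flow: {estimate:.1f} ft^3/s")
```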
Extreme liver resections with preservation of segment 4 only
Balzan, Silvio Marcio Pegoraro; Gava, Vinícius Grando; Magalhães, Marcelo Arbo; Dotto, Marcelo Luiz
2017-01-01
AIM To evaluate safety and outcomes of a new technique for extreme hepatic resections with preservation of segment 4 only. METHODS The new method of extreme liver resection consists of a two-stage hepatectomy. The first stage involves a right hepatectomy with middle hepatic vein preservation and induction of left lobe congestion; the second stage involves a left lobectomy. Thus, the remnant liver is represented by the segment 4 only (with or without segment 1, ± S1). Five patients underwent the new two-stage hepatectomy (congestion group). Data from volumetric assessment made before the second stage was compared with that of 10 matched patients (comparison group) that underwent a single-stage right hepatectomy with middle hepatic vein preservation. RESULTS The two stages of the procedure were successfully carried out on all 5 patients. For the congestion group, the overall volume of the left hemiliver had increased 103% (mean increase from 438 mL to 890 mL) at 4 wk after the first stage of the procedure. Hypertrophy of the future liver remnant (i.e., segment 4 ± S1) was higher than that of segments 2 and 3 (144% vs 54%, respectively, P < 0.05). The median remnant liver volume-to-body weight ratio was 0.3 (range, 0.28-0.40) before the first stage and 0.8 (range, 0.45-0.97) before the second stage. For the comparison group, the rate of hypertrophy of the left liver after right hepatectomy with middle hepatic vein preservation was 116% ± 34%. Hypertrophy rates of segments 2 and 3 (123% ± 47%) and of segment 4 (108% ± 60%, P > 0.05) were proportional. The mean preoperative volume of segments 2 and 3 was 256 ± 64 cc and increased to 572 ± 257 cc after right hepatectomy. Mean preoperative volume of segment 4 increased from 211 ± 75 cc to 439 ± 180 cc after surgery. CONCLUSION The proposed method for extreme hepatectomy with preservation of segment 4 only represents a technique that could allow complete resection of multiple bilateral liver metastases. PMID:28765703
Automatic aortic root segmentation in CTA whole-body dataset
NASA Astrophysics Data System (ADS)
Gao, Xinpei; Kitslaar, Pieter H.; Scholte, Arthur J. H. A.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke; Reiber, Johan H. C.
2016-03-01
Trans-catheter aortic valve replacement (TAVR) is an evolving technique for patients with serious aortic stenosis disease. Typically, in this application a CTA data set is obtained of the patient's arterial system from the subclavian artery to the femoral arteries, to evaluate the quality of the vascular access route and analyze the aortic root to determine if and which prosthesis should be used. In this paper, we concentrate on the automated segmentation of the aortic root. The purpose of this study was to automatically segment the aortic root in computed tomography angiography (CTA) datasets to support TAVR procedures. The method in this study includes 4 major steps. First, the patient's cardiac CTA image was resampled to reduce the computation time. Next, the cardiac CTA image was segmented using an atlas-based approach. The most similar atlas was selected from a total of 8 atlases based on its image similarity to the input CTA image. Third, the aortic root segmentation from the previous step was transferred to the patient's whole-body CTA image by affine registration and refined in the fourth step using a deformable subdivision surface model fitting procedure based on image intensity. The pipeline was applied to 20 patients. The ground truth was created by an analyst who semi-automatically corrected the contours of the automatic method, where necessary. The average Dice similarity index between the segmentations of the automatic method and the ground truth was found to be 0.965±0.024. In conclusion, the current results are very promising.
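The Dice similarity index used above to score the automatic segmentations against the ground truth can be computed directly from two binary masks; a minimal sketch (assuming NumPy arrays of identical shape) is shown below.

```python
import numpy as np

def dice_index(seg_a, seg_b):
    """Dice similarity index between two binary masks (1.0 = perfect overlap)."""
    a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# dice = dice_index(automatic_mask, ground_truth_mask)
```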
Ughi, Giovanni J.; Gora, Michalina J.; Swager, Anne-Fré; Soomro, Amna; Grant, Catriona; Tiernan, Aubrey; Rosenberg, Mireille; Sauk, Jenny S.; Nishioka, Norman S.; Tearney, Guillermo J.
2016-01-01
Optical coherence tomography (OCT) is an optical diagnostic modality that can acquire cross-sectional images of the microscopic structure of the esophagus, including Barrett’s esophagus (BE) and associated dysplasia. We developed a swallowable tethered capsule OCT endomicroscopy (TCE) device that acquires high-resolution images of entire gastrointestinal (GI) tract luminal organs. This device has a potential to become a screening method that identifies patients with an abnormal esophagus that should be further referred for upper endoscopy. Currently, the characterization of the OCT-TCE esophageal wall data set is performed manually, which is time-consuming and inefficient. Additionally, since the capsule optics optimally focus light approximately 500 µm outside the capsule wall and the best quality images are obtained when the tissue is in full contact with the capsule, it is crucial to provide feedback for the operator about tissue contact during the imaging procedure. In this study, we developed a fully automated algorithm for the segmentation of in vivo OCT-TCE data sets and characterization of the esophageal wall. The algorithm provides a two-dimensional representation of both the contact map from the data collected in human clinical studies as well as a tissue map depicting areas of BE with or without dysplasia. Results suggest that these techniques can potentially improve the current TCE data acquisition procedure and provide an efficient characterization of the diseased esophageal wall. PMID:26977350
A segmentation/clustering model for the analysis of array CGH data.
Picard, F; Robin, S; Lebarbier, E; Daudin, J-J
2007-09-01
Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.
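The segmentation half of such a model can be illustrated with a classical dynamic-programming routine that splits a 1D profile into K contiguous segments minimizing within-segment squared error; this sketch omits the mixture-model clustering and EM steps that give segments a biological status, and it is not the authors' DP-EM code.

```python
import numpy as np

def dp_segmentation(y, K):
    """Optimal split of profile y into K contiguous segments (least squares)."""
    y = np.asarray(y, float)
    n = len(y)
    cs, cs2 = np.r_[0.0, np.cumsum(y)], np.r_[0.0, np.cumsum(y ** 2)]

    def sse(i, j):  # squared error of one mean fitted to y[i:j] (j exclusive)
        s, s2, m = cs[j] - cs[i], cs2[j] - cs2[i], j - i
        return s2 - s * s / m

    D = np.full((K + 1, n + 1), np.inf)   # D[k, j]: best cost of the first j points in k segments
    back = np.zeros((K + 1, n + 1), dtype=int)
    D[0, 0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):
                c = D[k - 1, i] + sse(i, j)
                if c < D[k, j]:
                    D[k, j], back[k, j] = c, i
    # Trace back the start indices of segments 2..K
    bounds, j = [], n
    for k in range(K, 1, -1):
        j = back[k, j]
        bounds.append(j)
    return sorted(bounds)

# dp_segmentation(log_ratio_profile, K=4)  -> e.g. [120, 310, 512]
```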
2014-01-01
Background Digital image analysis has the potential to address issues surrounding traditional histological techniques, including a lack of objectivity and high variability, through the application of quantitative analysis. A key initial step in image analysis is the identification of regions of interest. A widely applied methodology is that of segmentation. This paper proposes the application of image analysis techniques to segment skin tissue with varying degrees of histopathological damage. The segmentation of human tissue is challenging as a consequence of the complexity of the tissue structures and inconsistencies in tissue preparation, hence there is a need for a new robust method with the capability to handle the additional challenges arising from histopathological damage. Methods A new algorithm has been developed which combines enhanced colour information, created following a transformation to the L*a*b* colourspace, with general image intensity information. A colour normalisation step is included to enhance the algorithm’s robustness to variations in the lighting and staining of the input images. The resulting optimised image is subjected to thresholding and the segmentation is fine-tuned using a combination of morphological processing and object classification rules. The segmentation algorithm was tested on 40 digital images of haematoxylin & eosin (H&E) stained skin biopsies. Accuracy, sensitivity and specificity of the algorithmic procedure were assessed through the comparison of the proposed methodology against manual methods. Results Experimental results show the proposed fully automated methodology segments the epidermis with a mean specificity of 97.7%, a mean sensitivity of 89.4% and a mean accuracy of 96.5%. When a simple user interaction step is included, the specificity increases to 98.0%, the sensitivity to 91.0% and the accuracy to 96.8%. The algorithm segments effectively for different severities of tissue damage. Conclusions Epidermal segmentation is a crucial first step in a range of applications including melanoma detection and the assessment of histopathological damage in skin. The proposed methodology is able to segment the epidermis with different levels of histological damage. The basic method framework could be applied to segmentation of other epithelial tissues. PMID:24521154
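A compact Python/scikit-image sketch of the general pipeline (colour transform, thresholding, morphological clean-up) is given below for orientation; the channel weighting, Otsu thresholding and object-size limit are illustrative assumptions, not the published algorithm's parameters or its colour-normalisation and classification rules.

```python
import numpy as np
from skimage import color, filters, morphology

def segment_epidermis(rgb_image):
    """Rough epidermis mask from an H&E image (illustrative parameters)."""
    lab = color.rgb2lab(rgb_image)
    # Combine colour information (a* channel) with inverted lightness (L*)
    enhanced = 0.5 * lab[..., 1] + 0.5 * (100.0 - lab[..., 0])
    mask = enhanced > filters.threshold_otsu(enhanced)
    mask = morphology.binary_opening(mask, morphology.disk(3))   # smooth boundaries
    mask = morphology.remove_small_objects(mask, min_size=500)   # drop small debris
    return mask
```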
Segmentation of bone and soft tissue regions in digital radiographic images of extremities
NASA Astrophysics Data System (ADS)
Pakin, S. Kubilay; Gaborski, Roger S.; Barski, Lori L.; Foos, David H.; Parker, Kevin J.
2001-07-01
This paper presents an algorithm for segmentation of computed radiography (CR) images of extremities into bone and soft tissue regions. The algorithm is a region-based one in which the regions are constructed using a growing procedure with two different statistical tests. Following the growing process, a tissue classification procedure is employed. The purpose of the classification is to label each region as either bone or soft tissue. This binary classification goal is achieved by using a voting procedure that consists of clustering the regions in each neighborhood system into two classes. The voting procedure provides a crucial compromise between local and global analysis of the image, which is necessary due to strong exposure variations seen on the imaging plate. Also, because some regions are large enough that exposure variations can be observed within them, overlapping blocks must be used during the classification. After the classification step, the resulting bone and soft tissue regions are refined by fitting a 2nd order surface to each tissue and reevaluating the label of each region according to the distance between the region and the surfaces. The performance of the algorithm is tested on a variety of extremity images using manually segmented images as the gold standard. The experiments showed that our algorithm provided a bone boundary with an average area overlap of 90% compared to the gold standard.
Novel method to avoid the open-sky condition in penetrating keratoplasty: covered cornea technique.
Arslan, Osman S; Unal, Mustafa; Arici, Ceyhun; Cicik, Erdoğan; Mangan, Serhat; Atalay, Eray
2014-09-01
The aim of this study was to present a novel technique to avoid the open-sky condition in pediatric and adult penetrating keratoplasty (PK). Seventy-two eyes of 65 infants and children and 44 eyes of 44 adult patients were operated on using this technique. After trephining the recipient cornea up to a depth of 50% to 70%, the anterior chamber was entered at 1 point. Then, only a 2 clock hour segment of the recipient button was incised, and this segment was sutured to the recipient rim with a single tight suture. The procedure was repeated until the entire recipient button was excised and resutured. The donor corneal button was sutured to the recipient corneal rim. The sutures between the recipient button and the rim were then cut off, and the recipient button was drawn out. None of the patients operated on with this technique developed complications related to the open-sky condition. Visual acuities, graft failure rates, and endothelial cell loss were comparable with the findings of studies performed for conventional PK. The technique described avoids the open-sky condition during the entire PK procedure. Endothelial cell loss rates are acceptable.
Morphometric Atlas Selection for Automatic Brachial Plexus Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van de Velde, Joris, E-mail: joris.vandevelde@ugent.be; Department of Radiotherapy, Ghent University, Ghent; Wouters, Johan
Purpose: The purpose of this study was to determine the effects of atlas selection based on different morphometric parameters on the accuracy of automatic brachial plexus (BP) segmentation for radiation therapy planning. The segmentation accuracy was measured by comparing all of the generated automatic segmentations with anatomically validated gold standard atlases developed using cadavers. Methods and Materials: Twelve cadaver computed tomography (CT) atlases (3 males, 9 females; mean age: 73 years) were included in the study. One atlas was selected to serve as a patient, and the other 11 atlases were registered separately onto this “patient” using deformable image registration. This procedure was repeated for every atlas as a patient. Next, the Dice and Jaccard similarity indices and inclusion index were calculated for every registered BP with the original gold standard BP. In parallel, differences in several morphometric parameters that may influence the BP segmentation accuracy were measured for the different atlases. Specific brachial plexus-related CT-visible bony points were used to define the morphometric parameters. Subsequently, correlations between the similarity indices and morphometric parameters were calculated. Results: A clear negative correlation between difference in protraction-retraction distance and the similarity indices was observed (mean Pearson correlation coefficient = −0.546). All of the other investigated Pearson correlation coefficients were weak. Conclusions: Differences in the shoulder protraction-retraction position between the atlas and the patient during planning CT influence the BP autosegmentation accuracy. A greater difference in the protraction-retraction distance between the atlas and the patient reduces the accuracy of the BP automatic segmentation result.
Small amounts of tissue preserve pancreatic function
Lu, Zipeng; Yin, Jie; Wei, Jishu; Dai, Cuncai; Wu, Junli; Gao, Wentao; Xu, Qing; Dai, Hao; Li, Qiang; Guo, Feng; Chen, Jianmin; Xi, Chunhua; Wu, Pengfei; Zhang, Kai; Jiang, Kuirong; Miao, Yi
2016-01-01
Abstract Middle-segment preserving pancreatectomy (MPP) is a novel procedure for treating multifocal lesions of the pancreas while preserving pancreatic function. However, long-term pancreatic function after this procedure remains unclear. The aims of this current study are to investigate short- and long-term outcomes, especially long-term pancreatic endocrine function, after MPP. From September 2011 to December 2015, 7 patients underwent MPP in our institution, and 5 cases with long-term outcomes were further analyzed in a retrospective manner. Percentage of tissue preservation was calculated using computed tomography volumetry. Serum insulin and C-peptide levels after oral glucose challenge were evaluated in 5 patients. Beta-cell secretory function, including modified homeostasis model assessment of beta-cell function (HOMA2-beta), area under the curve (AUC) for C-peptide, and C-peptide index, was evaluated and compared with that after pancreaticoduodenectomy (PD) and total pancreatectomy. Exocrine function was assessed based on questionnaires. Our case series included 3 women and 2 men, with median age of 50 (37–81) years. Four patients underwent pylorus-preserving PD together with distal pancreatectomy (DP), including 1 with spleen preserved. The remaining patient underwent Beger procedure and spleen-preserving DP. Median operation time and estimated intraoperative blood loss were 330 (250–615) min and 800 (400–5500) mL, respectively. Histological examination revealed 3 cases of metastatic lesion to the pancreas, 1 case of chronic pancreatitis, and 1 neuroendocrine tumor. Major postoperative complications included 3 cases of delayed gastric emptying and 2 cases of postoperative pancreatic fistula. Imaging studies showed that segments representing 18.2% to 39.5% of the pancreas with good blood supply had been preserved. With a median follow-up of 35.0 months for pancreatic function, only 1 of the 4 preoperatively euglycemic patients developed new-onset diabetes mellitus. Beta-cell function parameters in this group of patients were quite comparable to those after the Whipple procedure, and seemed better than those after total pancreatectomy. No symptoms of hypoglycemia were identified in any patient, although half of the patients reported symptoms of exocrine insufficiency. In conclusion, MPP is a feasible and effective procedure for middle-segment sparing multicentric lesions in the pancreas, and patients exhibit satisfactory endocrine function after surgery. PMID:27861351
Rubin, Jacob
1992-01-01
The feed forward (FF) method derives efficient operational equations for simulating transport of reacting solutes. It has been shown to be applicable in the presence of networks with any number of homogeneous and/or heterogeneous, classical reaction segments that consist of three, at most binary participants. Using a sequential (network type after network type) exploration approach and, independently, theoretical explanations, it is demonstrated for networks with classical reaction segments containing more than three, at most binary participants that if any one of such networks leads to a solvable transport problem then the FF method is applicable. Ways of helping to avoid networks that produce problem insolvability are developed and demonstrated. A previously suggested algebraic, matrix rank procedure has been adapted and augmented to serve as the main, easy-to-apply solvability test for already postulated networks. Four network conditions that often generate insolvability have been identified and studied. Their early detection during network formulation may help to avoid postulation of insolvable networks.
Shanmuganathan, Rajasekaran; Chandra Mohan, Arun Kamal; Agraharam, Devendra; Perumal, Ramesh; Jayaramaraju, Dheenadhayalan; Kulkarni, Sunil
2015-07-01
Extruded bone segments are a rare complication of high-energy open fractures. Routinely, these fractures are treated by debridement followed by bone loss management in the form of either bone transport or free fibula transfer. There are very few reports in the literature about reimplantation of extruded segments of bone, and there are no clear guidelines regarding timing of reimplantation, bone stabilisation and sterilisation techniques. Reimplantation of extruded bone is a risky procedure due to the high chance of infection, which determines the final outcome and can result in secondary amputation. We present two cases of successful reimplantation of an extruded diaphyseal segment of the femur and one case of reimplantation of an extruded segment of the tibia. Copyright © 2015 Elsevier Ltd. All rights reserved.
Human body segmentation via data-driven graph cut.
Li, Shifeng; Lu, Huchuan; Shao, Xingqing
2014-11-01
Human body segmentation is a challenging and important problem in computer vision. Existing methods usually entail a time-consuming training phase for prior knowledge learning with complex shape matching for body segmentation. In this paper, we propose a data-driven method that integrates top-down body pose information and bottom-up low-level visual cues for segmenting humans in static images within the graph cut framework. The key idea of our approach is first to exploit human kinematics to search for body part candidates via dynamic programming for high-level evidence. Then, body part classifiers are used to obtain bottom-up cues of the human body distribution as low-level evidence. All the evidence collected from the top-down and bottom-up procedures is integrated in a graph cut framework for human body segmentation. Qualitative and quantitative experiment results demonstrate the merits of the proposed method in segmenting human bodies with arbitrary poses from cluttered backgrounds.
H-RANSAC: A Hybrid Point Cloud Segmentation Combining 2D and 3D Data
NASA Astrophysics Data System (ADS)
Adam, A.; Chatzilari, E.; Nikolopoulos, S.; Kompatsiaris, I.
2018-05-01
In this paper, we present a novel 3D segmentation approach operating on point clouds generated from overlapping images. The aim of the proposed hybrid approach is to effectively segment co-planar objects, by leveraging the structural information originating from the 3D point cloud and the visual information from the 2D images, without resorting to learning based procedures. More specifically, the proposed hybrid approach, H-RANSAC, is an extension of the well-known RANSAC plane-fitting algorithm, incorporating an additional consistency criterion based on the results of 2D segmentation. Our expectation that the integration of 2D data into 3D segmentation will achieve more accurate results is validated experimentally in the domain of 3D city models. Results show that H-RANSAC can successfully delineate building components like main facades and windows, and provide more accurate segmentation results compared to the typical RANSAC plane-fitting algorithm.
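For orientation, the plane-fitting core that H-RANSAC extends can be sketched in a few lines of Python; the additional 2D-segmentation consistency check described in the paper is omitted here, and the iteration count and distance threshold are illustrative.

```python
import numpy as np

def ransac_plane(points, n_iters=500, dist_thresh=0.05, seed=None):
    """points: (N, 3) array. Returns (normal, d, inlier_mask) for the plane n.x + d = 0."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                      # skip degenerate (collinear) samples
            continue
        normal /= norm
        d = -normal.dot(p0)
        inliers = np.abs(points @ normal + d) < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers
```

H-RANSAC would additionally reject candidate planes whose inliers are inconsistent with the corresponding 2D segments in the source images.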
Youth Attitude Tracking Study II Wave 17 -- Fall 1986.
1987-06-01
[Report front matter: the extracted table of contents lists a preface, segmentation analyses, the YATS II methodology (sampling design overview; sampling design, estimation procedures, and estimated sampling errors), and an appendix on data collection procedures.]
Ramachandran, Rithambara; Cai, Cindy X.; Lee, Dongwon; Epstein, Benjamin C.; Locke, Kirsten G.; Birch, David G.; Hood, Donald C.
2016-01-01
Purpose We developed and evaluated a training procedure for marking the endpoints of the ellipsoid zone (EZ), also known as the inner segment/outer segment (IS/OS) border, on frequency domain optical coherence tomography (fdOCT) scans from patients with retinitis pigmentosa (RP). Methods A manual for marking EZ endpoints was developed and used to train 2 inexperienced graders. After training, an experienced grader and the 2 trained graders marked the endpoints on fdOCT horizontal line scans through the macula from 45 patients with RP. They marked the endpoints on these same scans again 1 month later. Results Intragrader agreement was excellent. The intraclass correlation coefficient (ICC) was 0.99, the average difference of endpoint locations (19.6 μm) was close to 0 μm, and the 95% limits were between −284 and 323 μm, approximately ±1.1°. Intergrader agreement also was excellent. The ICC values were 0.98 (time 1) and 0.97 (time 2), the average difference among graders was close to zero, and the 95% limits of these differences were less than 350 μm, approximately 1.2°, for both test times. Conclusions While automated algorithms are becoming increasingly accurate, EZ endpoints still have to be verified manually and corrected when necessary. With training, the inter- and intragrader agreement of manually marked endpoints is excellent. Translational Relevance For clinical studies, the EZ endpoints can be marked by hand if a training procedure, including a manual, is used. The endpoint confidence intervals, well under ±2.0°, are considerably smaller than the 6° spacing for the typically used static visual field. PMID:27226930
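The agreement statistics quoted above (mean difference and 95% limits) follow the usual Bland-Altman-style computation; a minimal sketch is shown below, with hypothetical input arrays rather than the study's measurements.

```python
import numpy as np

def limits_of_agreement(marks_a, marks_b):
    """Mean difference and 95% limits between two sets of endpoint positions
    (e.g., in micrometres), aligned element-wise across scans."""
    diff = np.asarray(marks_a, float) - np.asarray(marks_b, float)
    mean_diff = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return mean_diff, (mean_diff - half_width, mean_diff + half_width)

# mean_d, (lower, upper) = limits_of_agreement(grader1_um, grader2_um)
```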
Rohrbacher, Florian; Zwicky, André
2017-01-01
An antibacterial cyclic AS-48 protein was chemically synthesized by α-ketoacid–hydroxylamine (KAHA) ligation. Initial challenges associated with the exceptionally hydrophobic segments arising from the amphiphilic nature of the protein were resolved by the development of bespoke reaction conditions for hydrophobic segments, using hexafluoroisopropanol (HFIP) as a co-solvent. The synthetic protein displays similar biological activity and properties to those of the native protein. To support the current understanding of its antibacterial mode of action, we demonstrate the ability of AS-48 to be incorporated into synthetic multilamellar vesicles (MLVs). PMID:28580120
NASA Technical Reports Server (NTRS)
Payne, R. W. (Principal Investigator)
1981-01-01
The crop identification procedures performed were for spring small grains and are conducive to automation. The performance of the machine processing techniques shows a significant improvement over previously evaluated technology; however, the crop calendars require additional development and refinements prior to integration into automated area estimation technology. The integrated technology is capable of producing accurate and consistent spring small grains proportion estimates. Barley proportion estimation technology was not satisfactorily evaluated because LANDSAT sample segment data was not available for high density barley of primary importance in foreign regions, and the low density segments examined were not judged to give indicative or unequivocal results. Generally, the spring small grains technology is ready for evaluation in a pilot experiment focusing on sensitivity analysis to a variety of agricultural and meteorological conditions representative of the global environment.
NASA Astrophysics Data System (ADS)
Blaffert, Thomas; Wiemker, Rafael; Barschdorf, Hans; Kabus, Sven; Klinder, Tobias; Lorenz, Cristian; Schadewaldt, Nicole; Dharaiya, Ekta
2010-03-01
Automated segmentation of lung lobes in thoracic CT images has relevance for various diagnostic purposes like localization of tumors within the lung or quantification of emphysema. Since emphysema is a known risk factor for lung cancer, both purposes are even related to each other. The main steps of the segmentation pipeline described in this paper are the lung detector and the lung segmentation based on a watershed algorithm, and the lung lobe segmentation based on mesh model adaptation. The segmentation procedure was applied to data sets of the data base of the Image Database Resource Initiative (IDRI) that currently contains over 500 thoracic CT scans with delineated lung nodule annotations. We visually assessed the reliability of the single segmentation steps, with a success rate of 98% for the lung detection and 90% for lung delineation. For about 20% of the cases we found the lobe segmentation not to be anatomically plausible. A modeling confidence measure is introduced that gives a quantitative indication of the segmentation quality. For a demonstration of the segmentation method we studied the correlation between emphysema score and malignancy on a per-lobe basis.
A segmentation editing framework based on shape change statistics
NASA Astrophysics Data System (ADS)
Mostapha, Mahmoud; Vicory, Jared; Styner, Martin; Pizer, Stephen
2017-02-01
Segmentation is a key task in medical image analysis because its accuracy significantly affects successive steps. Automatic segmentation methods often produce inadequate segmentations, which require the user to manually edit the produced segmentation slice by slice. Because editing is time-consuming, an editing tool that enables the user to produce accurate segmentations by only drawing a sparse set of contours would be needed. This paper describes such a framework as applied to a single object. Constrained by the additional information enabled by the manually segmented contours, the proposed framework utilizes object shape statistics to transform the failed automatic segmentation to a more accurate version. Instead of modeling the object shape, the proposed framework utilizes shape change statistics that were generated to capture the object deformation from the failed automatic segmentation to its corresponding correct segmentation. An optimization procedure was used to minimize an energy function that consists of two terms, an external contour match term and an internal shape change regularity term. The high accuracy of the proposed segmentation editing approach was confirmed by testing it on a simulated data set based on 10 in-vivo infant magnetic resonance brain data sets using four similarity metrics. Segmentation results indicated that our method can provide efficient and adequately accurate segmentations (Dice segmentation accuracy increase of 10%), with very sparse contours (only 10%), which is promising in greatly decreasing the work expected from the user.
Performing label-fusion-based segmentation using multiple automatically generated templates.
Chakravarty, M Mallar; Steadman, Patrick; van Eede, Matthijs C; Calcott, Rebecca D; Gu, Victoria; Shaw, Philip; Raznahan, Armin; Collins, D Louis; Lerch, Jason P
2013-10-01
Classically, model-based segmentation procedures match magnetic resonance imaging (MRI) volumes to an expertly labeled atlas using nonlinear registration. The accuracy of these techniques is limited due to atlas biases, misregistration, and resampling error. Multi-atlas-based approaches are used as a remedy and involve matching each subject to a number of manually labeled templates. This approach yields numerous independent segmentations that are fused using a voxel-by-voxel label-voting procedure. In this article, we demonstrate how the multi-atlas approach can be extended to work with input atlases that are unique and extremely time consuming to construct by generating a library of multiple automatically generated templates of different brains (MAGeT Brain). We demonstrate the efficacy of our method for the mouse and human using two different nonlinear registration algorithms (ANIMAL and ANTs). The input atlases consist of a high-resolution mouse brain atlas and an atlas of the human basal ganglia and thalamus derived from serial histological data. MAGeT Brain segmentation improves the identification of the mouse anterior commissure (mean Dice kappa value κ = 0.801), but may be encountering a ceiling effect for hippocampal segmentations. Applying MAGeT Brain to human subcortical structures improves segmentation accuracy for all structures compared to regular model-based techniques (κ = 0.845, 0.752, and 0.861 for the striatum, globus pallidus, and thalamus, respectively). Experiments performed with three manually derived input templates suggest that MAGeT Brain can approach or exceed the accuracy of multi-atlas label-fusion segmentation (κ = 0.894, 0.815, and 0.895 for the striatum, globus pallidus, and thalamus, respectively). Copyright © 2012 Wiley Periodicals, Inc.
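The voxel-by-voxel label-voting fusion step mentioned above can be written compactly; the sketch below assumes integer label volumes of identical shape and breaks ties toward the lowest label value, an implementation choice for the example rather than anything specified by MAGeT Brain.

```python
import numpy as np

def majority_vote_fusion(label_volumes):
    """Fuse candidate segmentations by per-voxel majority vote."""
    stack = np.stack(label_volumes)                    # (n_templates, ...volume shape)
    n_labels = int(stack.max()) + 1
    counts = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)])
    return counts.argmax(axis=0).astype(stack.dtype)   # fused label volume

# fused = majority_vote_fusion(candidate_segmentations)
```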
Sembrano, Jonathan N; Horazdovsky, Ryan D; Sharma, Amit K; Yson, Sharon C; Santos, Edward R G; Polly, David W
2017-05-01
A retrospective comparative radiographic review. To evaluate the radiographic changes brought about by lordotic and nonlordotic cages on segmental and regional lumbar sagittal alignment and disk height in lateral lumbar interbody fusion (LLIF). The effects of cage design on operative level segmental lordosis in posterior interbody fusion procedures have been reported. However, there are no studies comparing the effect of sagittal implant geometry in LLIF. This is a comparative radiographic analysis of consecutive LLIF procedures performed with use of lordotic and nonlordotic interbody cages. Forty patients (61 levels) underwent LLIF. Average age was 57 years (range, 30-83 y). Ten-degree lordotic PEEK cages were used at 31 lumbar interbody levels, and nonlordotic cages were used at 30 levels. The following parameters were measured on preoperative and postoperative radiographs: segmental lordosis; anterior and posterior disk heights at operative level; segmental lordosis at supra-level and subjacent level; and overall lumbar (L1-S1) lordosis. Measurement changes for each cage group were compared using paired t test analysis. The use of lordotic cages in LLIF resulted in a significant increase in lordosis at operative levels (2.8 degrees; P=0.01), whereas nonlordotic cages did not (0.6 degrees; P=0.71) when compared with preoperative segmental lordosis. Anterior and posterior disk heights were significantly increased in both groups (P<0.01). Neither cage group showed significant change in overall lumbar lordosis (lordotic P=0.86 vs. nonlordotic P=0.25). Lordotic cages provided significant increase in operative level segmental lordosis compared with nonlordotic cages although overall lumbar lordosis remained unchanged. Anterior and posterior disk heights were significantly increased by both cages, providing basis for indirect spinal decompression.
What is the health care product?
France, K R; Grover, R
1992-06-01
Because of the current competitive environment, health care providers (hospitals, HMOs, physicians, and others) are constantly searching for better products and better means for delivering them. The health care product is often loosely defined as a service. The authors develop a more precise definition of the health care product, product line, and product mix. A bundle-of-elements concept is presented for the health care product. These conceptualizations help to address how health care providers can segment their market and position, promote, and price their products. Though the authors focus on hospitals, the concepts and procedures developed are applicable to other health care organizations.
NASA Technical Reports Server (NTRS)
Ricks, Glen A.
1988-01-01
The assembly test article (ATA) consisted of two live loaded redesigned solid rocket motor (RSRM) segments which were assembled and disassembled to simulate the actual flight segment stacking process. The test assembly joint was flight RSRM design, which included the J-joint insulation design and metal capture feature. The ATA test was performed mid-November through 24 December 1987, at Kennedy Space Center (KSC), Florida. The purpose of the test was: certification that vertical RSRM segment mating and separation could be accomplished without any damage; verification and modification of the procedures in the segment stacking/destacking documents; and certification of various GSE to be used for flight assembly and inspection. The RSRM vertical segment assembly/disassembly is possible without any damage to the insulation, metal parts, or seals. The insulation J-joint contact area was very close to the predicted values. Numerous deviations and changes to the planning documents were made to ensure the flight segments are effectively and correctly stacked. Various GSE were also certified for use on flight segments, and are discussed in detail.
Multiresolution multiscale active mask segmentation of fluorescence microscope images
NASA Astrophysics Data System (ADS)
Srinivasa, Gowri; Fickus, Matthew; Kovačević, Jelena
2009-08-01
We propose an active mask segmentation framework that combines the advantages of statistical modeling, smoothing, speed and flexibility offered by the traditional methods of region-growing, multiscale, multiresolution and active contours respectively. At the crux of this framework is a paradigm shift from evolving contours in the continuous domain to evolving multiple masks in the discrete domain. Thus, the active mask framework is particularly suited to segment digital images. We demonstrate the use of the framework in practice through the segmentation of punctate patterns in fluorescence microscope images. Experiments reveal that statistical modeling helps the multiple masks converge from a random initial configuration to a meaningful one. This obviates the need for an involved initialization procedure germane to most of the traditional methods used to segment fluorescence microscope images. While we provide the mathematical details of the functions used to segment fluorescence microscope images, this is only an instantiation of the active mask framework. We suggest some other instantiations of the framework to segment different types of images.
Pascucci, Simone; Bassani, Cristiana; Palombo, Angelo; Poscolieri, Maurizio; Cavalli, Rosa
2008-02-22
This paper describes a fast procedure for evaluating asphalt pavement surface defects using airborne emissivity data. To develop this procedure, we used airborne multispectral emissivity data covering an urban test area close to Venice (Italy). For this study, we first identify and select the roads' asphalt pavements on Multispectral Infrared Visible Imaging Spectrometer (MIVIS) imagery using a segmentation procedure. Next, since in asphalt pavements the surface defects are strictly related to the decrease of oily components, which causes an increase in the abundance of surfacing limestone, the diagnostic absorption emissivity peak of limestone at 11.2 μm was used to retrieve from the MIVIS emissivity data the areas exhibiting defects on the asphalt pavement surface. The results showed that MIVIS emissivity allows establishing a threshold that points out those asphalt road sites on which a check for a maintenance intervention is required. Therefore, this technique can supply local government authorities with an efficient, rapid and repeatable road mapping procedure providing the location of the asphalt pavements to be checked.
Augmented Reality Image Guidance in Minimally Invasive Prostatectomy
NASA Astrophysics Data System (ADS)
Cohen, Daniel; Mayer, Erik; Chen, Dongbin; Anstee, Ann; Vale, Justin; Yang, Guang-Zhong; Darzi, Ara; Edwards, Philip 'Eddie'
This paper presents our work aimed at providing augmented reality (AR) guidance of robot-assisted laparoscopic surgery (RALP) using the da Vinci system. There is a good clinical case for guidance due to the significant rate of complications and steep learning curve for this procedure. Patients who were due to undergo robotic prostatectomy for organ-confined prostate cancer underwent preoperative 3T MRI scans of the pelvis. These were segmented and reconstructed to form 3D images of pelvic anatomy. The reconstructed image was successfully overlaid onto screenshots of the recorded surgery post-procedure. Surgeons who perform minimally-invasive prostatectomy took part in a user-needs analysis to determine the potential benefits of an image guidance system after viewing the overlaid images. All surgeons stated that the development would be useful at key stages of the surgery and could help to improve the learning curve of the procedure and improve functional and oncological outcomes. Establishing the clinical need in this way is a vital early step in development of an AR guidance system. We have also identified relevant anatomy from preoperative MRI. Further work will be aimed at automated registration to account for tissue deformation during the procedure, using a combination of transrectal ultrasound and stereoendoscopic video.
Synthetic aperture imaging in ultrasound calibration
NASA Astrophysics Data System (ADS)
Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Jayaranthe, Uditha L.; Chen, Elvis C. S.; Peters, Terry M.
2014-03-01
Ultrasound calibration allows for ultrasound images to be incorporated into a variety of interventional applications. Traditional Z-bar calibration procedures rely on wired phantoms with an a priori known geometry. The line fiducials produce small, localized echoes which are then segmented from an array of ultrasound images from different tracked probe positions. In conventional B-mode ultrasound, the wires at greater depths appear blurred and are difficult to segment accurately, limiting the accuracy of ultrasound calibration. This paper presents a novel ultrasound calibration procedure that takes advantage of synthetic aperture imaging to reconstruct high resolution ultrasound images at arbitrary depths. In these images, line fiducials are much more readily and accurately segmented, leading to decreased calibration error. The proposed calibration technique is compared to one based on B-mode ultrasound. The fiducial localization error was improved from 0.21 mm in conventional B-mode images to 0.15 mm in synthetic aperture images, corresponding to an improvement of 29%. This resulted in an overall reduction of calibration error from a target registration error of 2.00 mm to 1.78 mm, an improvement of 11%. Synthetic aperture images display greatly improved segmentation capabilities due to their improved resolution and interpretability, resulting in improved calibration.
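The error figures quoted above are root-mean-square distances between corresponding point sets; a small sketch of that computation (usable for either the fiducial localization error or the target registration error, given matched points in millimetres) follows.

```python
import numpy as np

def rms_point_error(estimated, reference):
    """RMS Euclidean distance between corresponding (N, 3) point sets."""
    d = np.linalg.norm(np.asarray(estimated, float) - np.asarray(reference, float), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# fle = rms_point_error(segmented_fiducials_mm, true_fiducials_mm)      # e.g., ~0.15 mm
# tre = rms_point_error(transformed_targets_mm, reference_targets_mm)   # e.g., ~1.78 mm
```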
Badran, Mohamad Aboelnour; Moemen, Dalia Mohamed
2016-09-01
Autograft preparation for anterior cruciate ligament (ACL) reconstruction has a potential for graft contamination. The purpose of this study was to evaluate the possibility of bacterial contamination of hamstring autograft during preparation and when dropped onto the operating room floor, and methods of graft decontamination. Sixty hamstring tendon autograft specimens were used as the test group. Excess tendon not used in the ACL procedure was divided into five segments. One segment, at the completion of preparation, was sent for culture as a control; the remaining four segments were dropped onto the floor adjacent to the surgical field for 15 seconds. One segment was cultured without undergoing any further treatment. Cultures were taken from each segment after immersion in 10 % povidone-iodine solution, 4 % chlorhexidine and bacitracin, respectively, for three minutes. Cultures of a skin swab and floor swab were taken at the same time and place that the ACL was dropped. Cultures of control graft tissue from ten patients (16.7 %) were positive for bacteria. No patient developed post-operative infection. Ninety organisms were identified, with Staphylococcus epidermidis being the most common isolate. Grafts rinsed in either bacitracin or 4 % chlorhexidine solutions were less likely to be culture positive. A high rate of contamination can be expected during autograft preparation for ACL reconstruction. Soaking the hamstring autograft in either bacitracin or 4 % chlorhexidine solution is effective for decontamination, particularly if the graft is dropped on the floor.
New developments in ophthalmic applications of ultrafast lasers
NASA Astrophysics Data System (ADS)
Spooner, Greg J. R.; Juhasz, Tibor; Ratkay-Traub, Imola; Djotyan, Gagik P.; Horvath, Christopher; Sacks, Zachary S.; Marre, Gabrielle; Miller, Doug L.; Williams, A. R.; Kurtz, Ron M.
2000-05-01
The eye is potentially an ideal target for high precision surgical procedures utilizing ultrafast lasers. We present progress on corneal applications now being tested in humans and proof of concept ex vivo demonstrations of new applications in the sclera and lens. Two corneal refractive procedures were tested in partially sighted human eyes: creation of corneal flaps prior to excimer ablation (Femto- LASIK) and creation of corneal channels and entry cuts for placement of intracorneal ring segments (Femto-ICRS). For both procedures, results were comparable to standard treatments, with the potential for improved safety, accuracy and reproducibility. For scleral applications, we evaluated the potential of femtosecond laser glaucoma surgery by demonstrating resections in ex vivo human sclera using dehydrating agents to induce tissue transparency. For lens applications, we demonstrate in an ex vivo model the use of photodisruptively-nucleated ultrasonic cavitation for local and non-invasive tissue interaction.
Assistive technology for ultrasound-guided central venous catheter placement.
Ikhsan, Mohammad; Tan, Kok Kiong; Putra, Andi Sudjana
2018-01-01
This study evaluated the existing technology used to improve the safety and ease of ultrasound-guided central venous catheterization. Electronic database searches were conducted in Scopus, IEEE, Google Patents, and relevant conference databases (SPIE, MICCAI, and IEEE conferences) for related articles on assistive technology for ultrasound-guided central venous catheterization. A total of 89 articles were examined and pointed to several fields that are currently the focus of improvements to ultrasound-guided procedures. These include improving needle visualization, needle guides and localization technology, image processing algorithms to enhance and segment important features within the ultrasound image, robotic assistance using probe-mounted manipulators, and improving procedure ergonomics through in situ projections of important information. Probe-mounted robotic manipulators provide a promising avenue for assistive technology developed for freehand ultrasound-guided percutaneous procedures. However, there is currently a lack of clinical trials to validate the effectiveness of these devices.
A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system
NASA Astrophysics Data System (ADS)
Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan
2018-01-01
This paper proposes an image segmentation algorithm with fully convolutional networks (FCN) in a binocular imaging system under various circumstances. Image segmentation is addressed as a semantic segmentation problem: the FCN classifies individual pixels, achieving pixel-level semantic segmentation. Different from classical convolutional neural networks (CNN), the FCN uses convolution layers instead of fully connected layers, so it can accept images of arbitrary size. In this paper, we combine the convolutional neural network with scale-invariant feature matching to solve the problem of visual positioning under different scenarios. All high-resolution images are captured with our calibrated binocular imaging system and several groups of test data are collected to verify this method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4-1.6 m; the distance error is less than 10 mm.
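A minimal PyTorch sketch of the fully convolutional idea (convolution and pooling layers only, a 1x1 convolution in place of fully connected layers, and upsampling back to the input resolution so images of arbitrary size are accepted) is shown below; the layer sizes are illustrative and do not reproduce the network used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        self.classifier = nn.Conv2d(64, n_classes, 1)   # 1x1 conv replaces the FC layer

    def forward(self, x):
        h, w = x.shape[-2:]
        scores = self.classifier(self.encoder(x))
        # Upsample the coarse score map back to the input resolution
        return F.interpolate(scores, size=(h, w), mode="bilinear", align_corners=False)

# logits = TinyFCN()(torch.rand(1, 3, 480, 640))   # any input size works
# labels = logits.argmax(dim=1)                    # per-pixel class map
```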
3D deeply supervised network for automated segmentation of volumetric medical images.
Dou, Qi; Yu, Lequan; Chen, Hao; Jin, Yueming; Yang, Xin; Qin, Jing; Heng, Pheng-Ann
2017-10-01
While deep convolutional neural networks (CNNs) have achieved remarkable success in 2D medical image segmentation, it is still a difficult task for CNNs to segment important organs or structures from 3D medical images owing to several mutually affected challenges, including the complicated anatomical environments in volumetric images, optimization difficulties of 3D networks and inadequacy of training samples. In this paper, we present a novel and efficient 3D fully convolutional network equipped with a 3D deep supervision mechanism to comprehensively address these challenges; we call it 3D DSN. Our proposed 3D DSN is capable of conducting volume-to-volume learning and inference, which can eliminate redundant computations and alleviate the risk of over-fitting on limited training data. More importantly, the 3D deep supervision mechanism can effectively cope with the optimization problem of gradients vanishing or exploding when training a 3D deep model, accelerating the convergence speed and simultaneously improving the discrimination capability. Such a mechanism is developed by deriving an objective function that directly guides the training of both lower and upper layers in the network, so that the adverse effects of unstable gradient changes can be counteracted during the training procedure. We also employ a fully connected conditional random field model as a post-processing step to refine the segmentation results. We have extensively validated the proposed 3D DSN on two typical yet challenging volumetric medical image segmentation tasks: (i) liver segmentation from 3D CT scans and (ii) whole heart and great vessels segmentation from 3D MR images, by participating in two grand challenges held in conjunction with MICCAI. We have achieved segmentation results competitive with state-of-the-art approaches in both challenges at a much faster speed, corroborating the effectiveness of our proposed 3D DSN. Copyright © 2017 Elsevier B.V. All rights reserved.
Conjoint Analysis for New Service Development on Electricity Distribution in Indonesia
NASA Astrophysics Data System (ADS)
Widaningrum, D. L.; Chynthia; Astuti, L. D.; Seran, M. A. B.
2017-07-01
Illegal use of electricity is still rampant in Indonesia, especially for activities where a power source is not available, such as at street vendors' locations. It is not only detrimental to the state, but also harms the perpetrators of electricity theft and the surrounding communities. The purpose of this study is to create a New Service Development (NSD) to provide a new electricity source for street vendors' activity based on their preferences. The methods applied in the NSD are Conjoint Analysis, Cluster Analysis, Quality Function Deployment (QFD), Service Blueprint, Process Flow Diagrams and Quality Control Plan. The results of this study are the attributes and their importance in the new electricity service based on street vendors' preferences as customers, the customer segmentation, the design of the new service, the technical response, the operational procedures, and the quality control plan for those operational procedures.
Ares I-X: On the Threshold of Exploration
NASA Technical Reports Server (NTRS)
Davis, Stephan R.; Askins, Bruce
2009-01-01
Ares I-X, the first flight of the Ares I crew launch vehicle, is less than a year from launch. Ares I-X will test the flight characteristics of Ares I from liftoff to first stage separation and recovery. The flight also will demonstrate the computer hardware and software (avionics) needed to control the vehicle; deploy the parachutes that allow the first stage booster to land in the ocean safely; measure and control how much the rocket rolls during flight; test and measure the effects of first stage separation; and develop and try out new ground handling and rocket stacking procedures in the Vehicle Assembly Building (VAB) and first stage recovery procedures at Kennedy Space Center (KSC) in Florida. All Ares I-X major elements have completed their critical design reviews, and are nearing final fabrication. The first stage--four-segment solid rocket booster from the Space Shuttle inventory--incorporates new simulated forward structures to match the Ares I five-segment booster. The upper stage, Orion crew module, and launch abort system will comprise simulator hardware that incorporates developmental flight instrumentation for essential data collection during the mission. The upper stage simulator consists of smaller cylindrical segments, which were transported to KSC in fall 2008. The crew module and launch abort system simulator were shipped in December 2008. The first stage hardware, active roll control system (RoCS), and avionics components will be delivered to KSC in 2009. This paper will provide detailed statuses of the Ares I-X hardware elements as NASA's Constellation Program prepares for this first flight of a new exploration era in the summer of 2009.
Huang, Huajun; Xiang, Chunling; Zeng, Canjun; Ouyang, Hanbin; Wong, Kelvin Kian Loong; Huang, Wenhua
2015-12-01
We improved the geometrical modeling procedure for fast and accurate reconstruction of orthopedic structures. This procedure consists of medical image segmentation, three-dimensional geometrical reconstruction, and assignment of material properties. The patient-specific orthopedic structures reconstructed by this improved procedure can be used in virtual surgical planning, 3D printing of real orthopedic structures, and finite element analysis. A conventional modeling procedure consists of image segmentation, geometrical reconstruction, mesh generation, and assignment of material properties. The present study modified the conventional method to streamline the software operating procedures. Patients' CT images of different bones were acquired and subsequently reconstructed to give models. The reconstruction procedures were three-dimensional image segmentation, modification of the edge length and quantity of meshes, and assignment of material properties according to the gray-value intensity. We compared the performance of our procedure to that of the conventional modeling procedure in terms of software operating time, success rate and mesh quality. Our proposed framework has the following improvements in the geometrical modeling: (1) processing time (femur: 87.16 ± 5.90 %; pelvis: 80.16 ± 7.67 %; thoracic vertebra: 17.81 ± 4.36 %; P < 0.05); (2) least volume reduction (femur: 0.26 ± 0.06 %; pelvis: 0.70 ± 0.47 %; thoracic vertebra: 3.70 ± 1.75 %; P < 0.01); and (3) mesh quality in terms of aspect ratio (femur: 8.00 ± 7.38 %; pelvis: 17.70 ± 9.82 %; thoracic vertebra: 13.93 ± 9.79 %; P < 0.05) and maximum angle (femur: 4.90 ± 5.28 %; pelvis: 17.20 ± 19.29 %; thoracic vertebra: 3.86 ± 3.82 %; P < 0.05). Our proposed patient-specific geometrical modeling requires less operating time and workload, while the orthopedic structures are generated with a higher success rate compared with the conventional method. It is expected to benefit the surgical planning of orthopedic structures with less operating time and high modeling accuracy.
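The material-property assignment step mentioned above typically maps CT gray values to element-wise stiffness. The sketch below illustrates one common form of that mapping (HU to apparent density to Young's modulus via a power law, then binning into material groups); the calibration coefficients are placeholders, not the study's values.

```python
import numpy as np

def assign_material_properties(gray_values, bins=10):
    """Illustrative mapping from CT gray values (HU) to element-wise
    Young's modulus: HU -> apparent density -> modulus via a power law.
    The coefficients below are placeholders, not the paper's calibration."""
    hu = np.asarray(gray_values, dtype=float)
    rho = 0.0008 * hu + 1.0                          # g/cm^3, assumed linear calibration
    E = 6850.0 * np.clip(rho, 0.01, None) ** 1.49    # MPa, assumed power law
    # group elements into a small number of material bins, as is common in FE models
    edges = np.linspace(E.min(), E.max(), bins + 1)
    material_id = np.clip(np.digitize(E, edges) - 1, 0, bins - 1)
    return E, material_id
```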
BlobContours: adapting Blobworld for supervised color- and texture-based image segmentation
NASA Astrophysics Data System (ADS)
Vogel, Thomas; Nguyen, Dinh Quyen; Dittmann, Jana
2006-01-01
Extracting features is the first and one of the most crucial steps in the image retrieval process. While the color and texture features of digital images can be extracted rather easily, the shape and layout features depend on reliable image segmentation. Unsupervised image segmentation, often used in image analysis, works on a merely syntactical basis. That is, an unsupervised segmentation algorithm can segment only regions, not objects. To obtain high-level objects, which is desirable in image retrieval, human assistance is needed. Supervised image segmentation schemes can improve the reliability of segmentation and segmentation refinement. In this paper we propose a novel interactive image segmentation technique that combines the reliability of a human expert with the precision of automated image segmentation. The iterative procedure can be considered a variation on the Blobworld algorithm introduced by Carson et al. from the EECS Department, University of California, Berkeley. Starting with an initial segmentation as provided by the Blobworld framework, our algorithm, namely BlobContours, gradually updates it by recalculating every blob, based on the original features and the updated number of Gaussians. Since the original algorithm was hardly designed for interactive processing, we had to consider additional requirements for realizing a supervised segmentation scheme on the basis of Blobworld. Increasing the transparency of the algorithm by applying user-controlled iterative segmentation, providing different types of visualization for displaying the segmented image, and decreasing the computational time of segmentation are three major requirements which are discussed in detail.
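One BlobContours-style iteration, as described above, amounts to refitting a Gaussian mixture with a user-updated number of components to per-pixel color/texture features. The sketch below illustrates that single step; the feature choice and parameters are assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def recompute_blobs(features, n_blobs):
    """Illustrative single iteration: refit a Gaussian mixture with the
    user-updated number of components to per-pixel feature vectors and
    return a label image of 'blobs'."""
    h, w, d = features.shape
    gmm = GaussianMixture(n_components=n_blobs, covariance_type="full",
                          random_state=0)
    labels = gmm.fit_predict(features.reshape(-1, d))
    return labels.reshape(h, w)

# usage: features could be (L, a, b, contrast, anisotropy) per pixel
labels = recompute_blobs(np.random.rand(64, 64, 5), n_blobs=4)
```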
de Vries, W H K; Veeger, H E J; Cutti, A G; Baten, C; van der Helm, F C T
2010-07-20
Inertial Magnetic Measurement Systems (IMMS) are becoming increasingly popular because they allow measurements outside the motion laboratory. The latest models enable long-term, accurate measurement of segment motion in terms of joint angles, provided the initial segment orientations can be determined accurately. The standard procedure for defining segmental orientation is based on the measurement of the positions of bony landmarks (BLM). However, IMMS do not deliver position information, so an alternative method to establish IMMS-based, anatomically understandable segment orientations is proposed. For five subjects, IMMS recordings were collected in a standard anatomical position for the definition of static axes, and during a series of standardized motions for the estimation of kinematic axes of rotation. For all axes, the intra- and inter-individual dispersion was estimated. Subsequently, local coordinate systems (LCS) were constructed on the basis of the combination of IMMS axes with the lowest dispersion and compared with BLM-based LCS. The repeatability of the method appeared to be high; for every segment at least two axes could be determined with a dispersion of at most 3.8 degrees. Comparison of IMMS-based with BLM-based LCS yielded compatible results for the thorax, but less compatible results for the humerus, forearm and hand, where differences in orientation rose to 17.2 degrees. Although different from the 'gold standard' BLM-based LCS, IMMS-based LCS can be constructed repeatably, enabling the estimation of segment orientations outside the laboratory. A procedure for the definition of local reference frames using IMMS is proposed. 2010 Elsevier Ltd. All rights reserved.
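To make the construction of an IMMS-based local coordinate system concrete, the sketch below builds an orthonormal frame from the two most repeatable axes of a segment using cross products; the axis naming convention is an assumption, not the paper's exact anatomical definition.

```python
import numpy as np

def local_coordinate_system(axis_primary, axis_secondary):
    """Build an orthonormal segment coordinate system from two measured axes
    (illustrative Gram-Schmidt-style construction). Returns a 3x3 rotation
    matrix whose columns are the LCS axes expressed in the global frame."""
    x = np.asarray(axis_primary, float)
    x = x / np.linalg.norm(x)
    z = np.cross(x, np.asarray(axis_secondary, float))
    z = z / np.linalg.norm(z)      # orthogonal to both measured axes
    y = np.cross(z, x)             # completes the right-handed frame
    return np.column_stack([x, y, z])
```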
NASA Astrophysics Data System (ADS)
Elfarnawany, Mai; Alam, S. Riyahi; Agrawal, Sumit K.; Ladak, Hanif M.
2017-02-01
Cochlear implant surgery is a hearing restoration procedure for patients with profound hearing loss. In this surgery, an electrode is inserted into the cochlea to stimulate the auditory nerve and restore the patient's hearing. Clinical computed tomography (CT) images are used for planning and evaluation of electrode placement, but their low resolution limits the visualization of internal cochlear structures. Therefore, high-resolution micro-CT images are used to develop atlas-based segmentation methods to extract these nonvisible anatomical features in clinical CT images. Accurate registration of the high- and low-resolution CT images is a prerequisite for reliable atlas-based segmentation. In this study, we evaluate and compare different non-rigid B-spline registration parameters using micro-CT and clinical CT images of five cadaveric human cochleae. The varied registration parameters are the cost function (normalized correlation (NC), mutual information (MI) and mean square error (MSE)), the interpolation method (linear, windowed-sinc and B-spline) and the sampling percentage (1%, 10% and 100%). We compare the registration results visually and quantitatively using the Dice similarity coefficient (DSC), Hausdorff distance (HD) and absolute percentage error in cochlear volume. Using the MI or MSE cost functions and linear or windowed-sinc interpolation resulted in visually undesirable deformation of internal cochlear structures. Quantitatively, the transforms using 100% sampling percentage yielded the highest DSC and smallest HD (0.828 ± 0.021 and 0.25 ± 0.09 mm, respectively). Therefore, B-spline registration with the NC cost function, B-spline interpolation and 100% sampling percentage can be the foundation for developing an optimized atlas-based segmentation algorithm of intracochlear structures in clinical CT images.
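The quantitative comparison above relies on standard overlap and volume metrics. For reference, a minimal implementation of the Dice similarity coefficient and the absolute percentage volume error might look like the following (binary masks assumed).

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def volume_error_percent(a, b, voxel_volume=1.0):
    """Absolute percentage error in segmented volume of mask a versus reference b."""
    va, vb = a.sum() * voxel_volume, b.sum() * voxel_volume
    return abs(va - vb) / vb * 100.0
```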
Esophageal circumferential en bloc endoscopic submucosal dissection: assessment of a new technique.
Barret, Maximilien; Pratico, Carlos Alberto; Beuvon, Frédéric; Mangialavori, Luigi; Chryssostalis, Ariane; Camus, Marine; Chaussade, Stanislas; Prat, Frédéric
2013-10-01
Endoscopic esophageal piecemeal mucosectomy for high-grade dysplasia on Barrett's esophagus leads to suboptimal histologic evaluation, as well as recurrence on remaining mucosa. Circumferential en bloc mucosal resection would significantly improve the management of dysplastic Barrett's esophagus. Our aim was to describe a new method of esophageal circumferential endoscopic en bloc submucosal dissection (CESD) in a swine model. After submucosal injection, circumferential incision was performed at each end of the esophageal segment to be removed. Mechanical submucosal dissection was performed from the proximal to the distal incision, using a mucosectomy cap over the endoscope. The removed mucosal ring was retrieved. Clinical, endoscopic, and histologic data were prospectively collected. Esophageal CESD was conducted on 5 pigs. A median mucosal length of 6.5 cm (range, 4 to 8 cm) was removed in the lower third of the esophagus. The mean duration of the procedure was 36 minutes (range, 17 to 80 min). No procedure-related complication, including perforation, was observed. All animals exhibited a mild esophageal stricture at day 7, and a severe symptomatic stricture at day 14. Necropsy confirmed endoscopic findings with cicatricial fibrotic strictures. On histologic examination, an inflammatory cell infiltrate, diffuse fibrosis reaching the muscular layer, and incomplete reepithelialization were observed. CESD enables expeditious resection and thorough examination of large segments of esophageal mucosa in safe procedural conditions, but esophageal strictures occur in the majority of the cases. Efficient methods for stricture prevention are needed for this technique to be developed in humans.
Users manual for the US baseline corn and soybean segment classification procedure
NASA Technical Reports Server (NTRS)
Horvath, R.; Colwell, R. (Principal Investigator); Hay, C.; Metzler, M.; Mykolenko, O.; Odenweller, J.; Rice, D.
1981-01-01
A user's manual for the classification component of the FY-81 U.S. Corn and Soybean Pilot Experiment in the Foreign Commodity Production Forecasting Project of AgRISTARS is presented. This experiment is one of several major experiments in AgRISTARS designed to measure and advance the remote sensing technologies for cropland inventory. The classification procedure discussed is designed to produce segment proportion estimates for corn and soybeans in the U.S. Corn Belt (Iowa, Indiana, and Illinois) using LANDSAT data. The estimates are produced by an integrated Analyst/Machine procedure. The Analyst selects acquisitions, participates in stratification, and assigns crop labels to selected samples. In concert with the Analyst, the machine digitally preprocesses LANDSAT data to remove external effects, stratifies the data into field-like units and into spectrally similar groups, statistically samples the data for Analyst labeling, and combines the labeled samples into a final estimate.
78 FR 18262 - Proposed Amendment of Class E Airspace; Ogallala, NE
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
... accommodate new Standard Instrument Approach Procedures (SIAP) at Searle Field Airport. The FAA is taking this action to enhance the safety and management of Instrument Flight Rules (IFR) operations for SIAPs at the... standard instrument approach procedures at Searle Field Airport, Ogallala, NE. A small segment would extend...
Off-Campus Registration Procedures.
ERIC Educational Resources Information Center
Maas, Michael L.
Registration is one of the more critical functions that a college staff encounters each semester. To have a smooth, efficient, college-wide registration, it is essential that all segments of the college be aware of registration procedures as well as data control operations. This packet was designed to acquaint interested parties with the…
A Stochastic-Variational Model for Soft Mumford-Shah Segmentation
2006-01-01
In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike the classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. Soft segmentation could lead to hard segmentation, and hence is more general. The modeling procedure, mathematical analysis on the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059
Ahmadian, Alireza; Ay, Mohammad R; Bidgoli, Javad H; Sarkar, Saeed; Zaidi, Habib
2008-10-01
Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium as high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μ-map), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, segmentation of high-CT-number objects using combined region- and boundary-based segmentation and, second, classification of objects into bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of the generated μ-maps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique in a clinical setting. More importantly, correction of oral contrast artifacts improved the readability and interpretation of the PET scan and showed a substantial decrease of the SUV (104.3%) after correction. An automated segmentation algorithm for classification of irregularly shaped regions containing contrast medium was developed for wider applicability of the SCC algorithm for correction of oral contrast artifacts during the CTAC procedure. The algorithm is being refined and further validated in a clinical setting.
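The final conversion step described above (CT numbers to 511 keV linear attenuation coefficients via a piecewise calibration curve) can be sketched as below; the calibration points are illustrative placeholders, since real curves are scanner- and kVp-specific.

```python
import numpy as np

# Placeholder calibration points (HU -> linear attenuation coefficient at 511 keV,
# in 1/cm); real curves are scanner- and acquisition-specific.
_HU_POINTS = np.array([-1000.0,   0.0, 1000.0, 3000.0])
_MU_POINTS = np.array([   0.000, 0.096,  0.130,  0.190])

def hu_to_mu511(ct_hu):
    """Convert CT numbers to 511 keV linear attenuation coefficients with a
    piecewise linear calibration curve (illustrative values only)."""
    return np.interp(np.asarray(ct_hu, float), _HU_POINTS, _MU_POINTS)
```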
Orr, R Douglas; Sodhi, Nipun; Dalton, Sarah E; Khlopas, Anton; Sultan, Assem A; Chughtai, Morad; Newman, Jared M; Savage, Jason; Mroz, Thomas E; Mont, Michael A
2018-02-02
Relative value units (RVUs) are a compensation model based on the effort required to provide a procedure or service to a patient. Thus, procedures that are more complex and require greater technical skill and aftercare, such as multilevel spine surgery, should provide greater physician compensation. However, there are limited data comparing RVUs with operative time. Therefore, this study aims to compare mean (1) operative times; (2) RVUs; and (3) RVU/min between posterior segmental instrumentation of 3-6, 7-12, and ≥13 vertebral segments, and to perform an annual cost difference analysis. A total of 437 patients who underwent instrumentation of 3-6 segments (Cohort 1, current procedural terminology [CPT] code: 22842), 67 patients who had instrumentation of 7-12 segments (Cohort 2, CPT code: 22843), and 16 patients who had instrumentation of ≥13 segments (Cohort 3, CPT code: 22844) were identified from the National Surgical Quality Improvement Program (NSQIP) database. Mean operative times, RVUs, and RVU/min, as well as an annualized cost difference analysis, were calculated and compared using the Student t test. This study received no funding from any party or entity. Cohort 1 had shorter mean operative times than Cohorts 2 and 3 (217 minutes vs. 325 minutes vs. 426 minutes, p<.05). Cohort 1 had a lower mean RVU than Cohorts 2 and 3 (12.6 vs. 13.4 vs. 16.4). Cohort 1 had a greater RVU/min than Cohorts 2 and 3 (0.08 vs. 0.05, p<.05; and 0.08 vs. 0.05, p>.05). A $112,432.12 annualized cost difference between Cohorts 1 and 2, a $176,744.76 difference between Cohorts 1 and 3, and a $64,312.55 difference between Cohorts 2 and 3 were calculated. The RVU/min takes into account not just the value provided but also the operative time required for highly complex cases. The fact that the RVU/min for fewer-level instrumentation is greater (0.08 vs. 0.05), together with the roughly $177,000 annualized cost difference, indicates that compensation is not proportional to the added time, effort, and skill for more complex cases. Copyright © 2018 Elsevier Inc. All rights reserved.
A knowledge-based machine vision system for space station automation
NASA Technical Reports Server (NTRS)
Chipman, Laure J.; Ranganath, H. S.
1989-01-01
A simple knowledge-based approach to the recognition of objects in man-made scenes is being developed. Specifically, the system under development is a proposed enhancement to a robot arm for use in the space station laboratory module. The system will take a request from a user to find a specific object, and locate that object by using its camera input and information from a knowledge base describing the scene layout and attributes of the object types included in the scene. In order to use realistic test images in developing the system, researchers are using photographs of actual NASA simulator panels, which provide similar types of scenes to those expected in the space station environment. Figure 1 shows one of these photographs. In traditional approaches to image analysis, the image is transformed step by step into a symbolic representation of the scene. Often the first steps of the transformation are done without any reference to knowledge of the scene or objects. Segmentation of an image into regions generally produces a counterintuitive result in which regions do not correspond to objects in the image. After segmentation, a merging procedure attempts to group regions into meaningful units that will more nearly correspond to objects. Here, researchers avoid segmenting the image as a whole, and instead use a knowledge-directed approach to locate objects in the scene. The knowledge-based approach to scene analysis is described and the categories of knowledge used in the system are discussed.
Naumovich, S S; Naumovich, S A; Goncharenko, V G
2015-01-01
The objective of the present study was the development and clinical testing of a three-dimensional (3D) reconstruction method for teeth and jaw bone tissue on the basis of CT images of the maxillofacial region. 3D reconstruction was performed using specially designed original software based on the watershed transformation. Computed tomograms in Digital Imaging and Communications in Medicine (DICOM) format, obtained on multispiral CT and CBCT scanners, were used for the creation of 3D models of teeth and the jaws. The processing algorithm is realized as stepwise threshold image segmentation, with markers placed in multiplanar projection mode in areas corresponding to the teeth and the bone tissue. The developed software initially creates coarse 3D models of the entire dentition and the jaw. Subsequent procedures then refine the jaw model and cut the dentition into separate teeth. The proper selection of the segmentation threshold is very important for CBCT images, which have low contrast and a high noise level. The developed semi-automatic algorithm for processing multispiral and cone beam computed tomograms allows 3D models of teeth to be created, separating them from the bone tissue of the jaws. The software is easy to install at a dentist's workplace, has an intuitive interface and takes little processing time. The obtained 3D models can be used for solving a wide range of scientific and clinical tasks.
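The marker-based watershed step described above can be illustrated with a short sketch using scikit-image; the thresholding choice and the use of a distance map are assumptions standing in for the study's original software.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

def split_teeth_by_watershed(ct_slice, tooth_markers):
    """Illustrative marker-controlled watershed: separates labeled tooth seeds
    within a thresholded bone/tooth mask. tooth_markers is an integer image
    with one positive label per seeded tooth."""
    mask = ct_slice > threshold_otsu(ct_slice)
    # flood from the markers over the inverted distance map so that basins
    # meet midway between neighbouring teeth
    distance = ndi.distance_transform_edt(mask)
    return watershed(-distance, markers=tooth_markers, mask=mask)
```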
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Y; Li, X; Fishman, K
Purpose: In skin-cancer radiotherapy, the assessment of skin lesions is challenging; in particular, important features such as depth and width are hard to determine. The aim of this study is to develop an interactive segmentation method to delineate the tumor boundary using high-frequency ultrasound images and to correlate the segmentation results with the histopathological tumor dimensions. Methods: We analyzed 6 patients who comprised a total of 10 skin lesions involving the face, scalp, and hand. The patients' various skin lesions were scanned using a high-frequency ultrasound system (Episcan, LONGPORT, INC., PA, U.S.A), with a 30-MHz single-element transducer. The lateral resolution was 14.6 micron and the axial resolution was 3.85 micron for the ultrasound image. Semiautomatic image segmentation was performed to extract the cancer region, using a robust statistics driven active contour algorithm. The corresponding histology images were also obtained after tumor resection and served as the reference standards in this study. Results: Eight out of the 10 lesions were successfully segmented. The ultrasound tumor delineation correlates well with the histology assessment in all the measurements, such as depth, size, and shape. The depths measured by the ultrasound have an average of 9.3% difference compared with those in the histology images. In the remaining 2 cases, the pathology and ultrasound images could not be matched. Conclusion: High-frequency ultrasound is a noninvasive, accurate and easily accessible modality for imaging skin cancer. Our segmentation method, combined with high-frequency ultrasound technology, provides a promising tool to estimate the extent of the tumor to guide the radiotherapy procedure and monitor treatment response.
NASA Astrophysics Data System (ADS)
Morfa, Carlos Recarey; Farias, Márcio Muniz de; Morales, Irvin Pablo Pérez; Navarra, Eugenio Oñate Ibañez de; Valera, Roberto Roselló
2018-04-01
The influence of the microstructural heterogeneities is an important topic in the study of materials. In the context of computational mechanics, it is therefore necessary to generate virtual materials that are statistically equivalent to the microstructure under study, and to connect that geometrical description to the different numerical methods. Herein, the authors present a procedure to model continuous solid polycrystalline materials, such as rocks and metals, preserving their representative statistical grain size distribution. The first phase of the procedure consists of segmenting an image of the material into adjacent polyhedral grains representing the individual crystals. This segmentation allows estimating the grain size distribution, which is used as the input for an advancing front sphere packing algorithm. Finally, Laguerre diagrams are calculated from the obtained sphere packings. The centers of the spheres give the centers of the Laguerre cells, and their radii determine the cells' weights. The cell sizes in the obtained Laguerre diagrams have a distribution similar to that of the grains obtained from the image segmentation. That is why those diagrams are a convenient model of the original crystalline structure. The above-outlined procedure has been used to model real polycrystalline metallic materials. The main difference with previously existing methods lies in the use of a better particle packing algorithm.
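Once the sphere packing is available, each Laguerre cell is simply the region of space closest to its sphere in the power-distance sense. A minimal sketch of that assignment (a brute-force point-to-cell labeling, not the authors' diagram construction) is given below.

```python
import numpy as np

def laguerre_labels(points, centers, radii):
    """Assign each query point to the Laguerre (power) cell of the nearest
    weighted sphere: cell i minimizes the power distance ||x - c_i||^2 - r_i^2."""
    points = np.asarray(points, float)      # (N, 3) query points
    centers = np.asarray(centers, float)    # (M, 3) sphere centers
    radii = np.asarray(radii, float)        # (M,)   sphere radii (cell weights)
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    power = d2 - radii[None, :] ** 2
    return np.argmin(power, axis=1)
```

Labeling a voxel grid this way reproduces the property described in the abstract: cells grow with the radii of the packed spheres, so their size distribution follows the grain size distribution used as input.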
Haebig, Eileen; Saffran, Jenny R; Ellis Weismer, Susan
2017-11-01
Word learning is an important component of language development that influences child outcomes across multiple domains. Despite the importance of word knowledge, word-learning mechanisms are poorly understood in children with specific language impairment (SLI) and children with autism spectrum disorder (ASD). This study examined underlying mechanisms of word learning, specifically, statistical learning and fast-mapping, in school-aged children with typical and atypical development. Statistical learning was assessed through a word segmentation task and fast-mapping was examined in an object-label association task. We also examined children's ability to map meaning onto newly segmented words in a third task that combined exposure to an artificial language and a fast-mapping task. Children with SLI had poorer performance on the word segmentation and fast-mapping tasks relative to the typically developing and ASD groups, who did not differ from one another. However, when children with SLI were exposed to an artificial language with phonemes used in the subsequent fast-mapping task, they successfully learned more words than in the isolated fast-mapping task. There was some evidence that word segmentation abilities are associated with word learning in school-aged children with typical development and ASD, but not SLI. Follow-up analyses also examined performance in children with ASD who did and did not have a language impairment. Children with ASD with language impairment evidenced intact statistical learning abilities, but subtle weaknesses in fast-mapping abilities. As the Procedural Deficit Hypothesis (PDH) predicts, children with SLI have impairments in statistical learning. However, children with SLI also have impairments in fast-mapping. Nonetheless, they are able to take advantage of additional phonological exposure to boost subsequent word-learning performance. In contrast to the PDH, children with ASD appear to have intact statistical learning, regardless of language status; however, fast-mapping abilities differ according to broader language skills. © 2017 Association for Child and Adolescent Mental Health.
Computational wing optimization and comparisons with experiment for a semi-span wing model
NASA Technical Reports Server (NTRS)
Waggoner, E. G.; Haney, H. P.; Ballhaus, W. F.
1978-01-01
A computational wing optimization procedure was developed and verified by an experimental investigation of a semi-span variable camber wing model in the NASA Ames Research Center 14 foot transonic wind tunnel. The Bailey-Ballhaus transonic potential flow analysis and Woodward-Carmichael linear theory codes were linked to Vanderplaats constrained minimization routine to optimize model configurations at several subsonic and transonic design points. The 35 deg swept wing is characterized by multi-segmented leading and trailing edge flaps whose hinge lines are swept relative to the leading and trailing edges of the wing. By varying deflection angles of the flap segments, camber and twist distribution can be optimized for different design conditions. Results indicate that numerical optimization can be both an effective and efficient design tool. The optimized configurations had as good or better lift to drag ratios at the design points as the best designs previously tested during an extensive parametric study.
Optimization of Compton-suppression and summing schemes for the TIGRESS HPGe detector array
NASA Astrophysics Data System (ADS)
Schumaker, M. A.; Svensson, C. E.; Andreoiu, C.; Andreyev, A.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D.; Boston, A. J.; Chakrawarthy, R. S.; Churchman, R.; Drake, T. E.; Finlay, P.; Garrett, P. E.; Grinyer, G. F.; Hackman, G.; Hyland, B.; Jones, B.; Maharaj, R.; Morton, A. C.; Pearson, C. J.; Phillips, A. A.; Sarazin, F.; Scraggs, H. C.; Smith, M. B.; Valiente-Dobón, J. J.; Waddington, J. C.; Watters, L. M.
2007-04-01
Methods of optimizing the performance of an array of Compton-suppressed, segmented HPGe clover detectors have been developed which rely on the physical position sensitivity of both the HPGe crystals and the Compton-suppression shields. These relatively simple analysis procedures promise to improve the precision of experiments with the TRIUMF-ISAC Gamma-Ray Escape-Suppressed Spectrometer (TIGRESS). Suppression schemes will improve the efficiency and peak-to-total ratio of TIGRESS for high γ-ray multiplicity events by taking advantage of the 20-fold segmentation of the Compton-suppression shields, while the use of different summing schemes will improve results for a wide range of experimental conditions. The benefits of these methods are compared for many γ-ray energies and multiplicities using a GEANT4 simulation, and the optimal physical configuration of the TIGRESS array under each set of conditions is determined.
Land Cover Classification in a Complex Urban-Rural Landscape with Quickbird Imagery
Moran, Emilio Federico.
2010-01-01
High spatial resolution images have been increasingly used for urban land use/cover classification, but the high spectral variation within the same land cover, the spectral confusion among different land covers, and the shadow problem often lead to poor classification performance based on the traditional per-pixel spectral-based classification methods. This paper explores approaches to improve urban land cover classification with Quickbird imagery. Traditional per-pixel spectral-based supervised classification, incorporation of textural images and multispectral images, spectral-spatial classifier, and segmentation-based classification are examined in a relatively new developing urban landscape, Lucas do Rio Verde in Mato Grosso State, Brazil. This research shows that use of spatial information during the image classification procedure, either through the integrated use of textural and spectral images or through the use of segmentation-based classification method, can significantly improve land cover classification performance. PMID:21643433
Image segmentation based upon topological operators: real-time implementation case study
NASA Astrophysics Data System (ADS)
Mahmoudi, R.; Akil, M.
2009-02-01
In various image processing applications, thinning and crest restoring are of considerable interest. The recommended algorithms for these procedures are those able to act directly on grayscale images while preserving topology, but their high computational cost remains the major drawback to their use. In this paper we present an efficient hardware implementation, on a RISC processor, of two powerful thinning and crest-restoring algorithms developed by our team. The proposed implementation improves execution time. A segmentation chain applied to medical imaging serves as a concrete example to illustrate the improvements brought by optimization techniques at both the algorithmic and architectural levels. The particular use of the SSE instruction set of the X86_32 processors (PIV 3.06 GHz) allows the best performance for real-time processing: a rate of 33 images (512×512) per second is achieved.
Atlas-based system for functional neurosurgery
NASA Astrophysics Data System (ADS)
Nowinski, Wieslaw L.; Yeo, Tseng T.; Yang, Guo L.; Dow, Douglas E.
1997-05-01
This paper addresses the development of an atlas-based system for preoperative functional neurosurgery planning and training, intraoperative support and postoperative analysis. The system is based on the Atlas of Stereotaxy of the Human Brain by Schaltenbrand and Wahren, used for interactive segmentation and labeling of clinical data in 2D/3D, and for assisting stereotactic targeting. The atlas microseries are digitized, enhanced, segmented, labeled, aligned and organized into mutually preregistered atlas volumes; 3D models of the structures are also constructed. The atlas may be interactively registered with the actual patient's data. Several other features are also provided, including data reformatting, visualization, navigation, mensuration, and stereotactic path display and editing in 2D/3D. The system increases the accuracy of target definition and reduces both the planning time and the duration of the procedure itself. It also constitutes a research platform for the construction of more advanced neurosurgery support tools and brain atlases.
Femtosecond lasers in ophthalmology: clinical applications in anterior segment surgery
NASA Astrophysics Data System (ADS)
Juhasz, Tibor; Nagy, Zoltan; Sarayba, Melvin; Kurtz, Ronald M.
2010-02-01
The human eye is a favored target for laser surgery due to its accessibility via the optically transparent ocular tissue. Femtosecond lasers with confined tissue effects and minimized collateral tissue damage are primary candidates for high-precision intraocular surgery. The advent of compact diode-pumped femtosecond lasers, coupled with computer-controlled beam delivery devices, enabled the development of high-precision femtosecond lasers for ophthalmic surgery. In this article, anterior segment femtosecond laser applications currently in clinical practice and under investigation are reviewed. Corneal procedures evolved first and remain dominant due to easy targeting referenced from a contact surface, such as applanation lenses placed on the eye. Adding a high-precision imaging technique, such as optical coherence tomography (OCT), can enable accurate targeting of tissue beyond the cornea, such as the crystalline lens. Initial clinical results of femtosecond laser cataract surgery are discussed in detail in the latter part of the article.
Santos, Rodrigo Mologni Gonçalves Dos; De Martino, José Mario; Passeri, Luis Augusto; Attux, Romis Ribeiro de Faissol; Haiter Neto, Francisco
2017-09-01
To develop a computer-based method for automating the repositioning of jaw segments in the skull during three-dimensional virtual treatment planning of orthognathic surgery. The method speeds up the planning phase of the orthognathic procedure, releasing surgeons from laborious and time-consuming tasks. The method finds the optimal positions for the maxilla, mandibular body, and bony chin in the skull. Minimization of cephalometric differences between measured and standard values is considered. Cone-beam computed tomographic images acquired from four preoperative patients with skeletal malocclusion were used for evaluating the method. Dentofacial problems of the four patients were rectified, including skeletal malocclusion, facial asymmetry, and jaw discrepancies. The results show that the method is potentially able to be used in routine clinical practice as support for treatment-planning decisions in orthognathic surgery. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
1978-01-01
The author has identified the following significant results. The initial CAS estimates, which were made for each month from April through August, were considerably higher than the USDA/SRS estimates. This was attributed to: (1) the practice of considering bare ground as potential wheat and counting it as wheat; (2) overestimation of the wheat proportions in segments having only a small amount of wheat; and (3) the classification of confusion crops as wheat. At the end of the season most of the segments were reworked using improved methods based on experience gained during the season. In particular, new procedures were developed to solve the three problems listed above. These and other improvements used in the rework experiment resulted in at-harvest estimates that were much closer to the USDA/SRS estimates than those obtained during the regular season.
An Efficient Pipeline for Abdomen Segmentation in CT Images.
Koyuncu, Hasan; Ceylan, Rahime; Sivri, Mesut; Erdogan, Hasan
2018-04-01
Computed tomography (CT) scans usually include some disadvantages due to the nature of the imaging procedure, and these handicaps prevent accurate abdomen segmentation. Discontinuous abdomen edges, bed section of CT, patient information, closeness between the edges of the abdomen and CT, poor contrast, and a narrow histogram can be regarded as the most important handicaps that occur in abdominal CT scans. Currently, one or more handicaps can arise and prevent technicians obtaining abdomen images through simple segmentation techniques. In other words, CT scans can include the bed section of CT, a patient's diagnostic information, low-quality abdomen edges, low-level contrast, and narrow histogram, all in one scan. These phenomena constitute a challenge, and an efficient pipeline that is unaffected by handicaps is required. In addition, analysis such as segmentation, feature selection, and classification has meaning for a real-time diagnosis system in cases where the abdomen section is directly used with a specific size. A statistical pipeline is designed in this study that is unaffected by the handicaps mentioned above. Intensity-based approaches, morphological processes, and histogram-based procedures are utilized to design an efficient structure. Performance evaluation is realized in experiments on 58 CT images (16 training, 16 test, and 26 validation) that include the abdomen and one or more disadvantage(s). The first part of the data (16 training images) is used to detect the pipeline's optimum parameters, while the second and third parts are utilized to evaluate and to confirm the segmentation performance. The segmentation results are presented as the means of six performance metrics. Thus, the proposed method achieves remarkable average rates for training/test/validation of 98.95/99.36/99.57% (jaccard), 99.47/99.67/99.79% (dice), 100/99.91/99.91% (sensitivity), 98.47/99.23/99.85% (specificity), 99.38/99.63/99.87% (classification accuracy), and 98.98/99.45/99.66% (precision). In summary, a statistical pipeline performing the task of abdomen segmentation is achieved that is not affected by the disadvantages, and the most detailed abdomen segmentation study is performed for the use before organ and tumor segmentation, feature extraction, and classification.
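A stripped-down version of such an intensity-plus-morphology pipeline is sketched below for a single CT slice; the air threshold, structuring element size, and largest-component heuristic are assumptions, not the paper's tuned parameters.

```python
import numpy as np
from scipy import ndimage as ndi

def segment_abdomen(ct_slice, air_hu=-300):
    """Illustrative intensity + morphology pipeline: threshold out air,
    detach thin structures such as the CT bed, keep the largest connected
    component, and fill internal holes."""
    body = ct_slice > air_hu                             # intensity-based foreground
    body = ndi.binary_opening(body, np.ones((5, 5)))     # break thin connections (bed)
    labels, n = ndi.label(body)
    if n == 0:
        return body
    sizes = ndi.sum(body, labels, range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)           # largest component = abdomen
    return ndi.binary_fill_holes(largest)
```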
Structural and thermal testing of lightweight reflector panels
NASA Technical Reports Server (NTRS)
Mcgregor, J.; Helms, R.; Hill, T.
1992-01-01
The paper describes the test facility developed for testing large lightweight reflective panels with very accurate and stable surfaces, such as the mirror panels of composite construction developed for the NASA's Precision Segmented Reflector (PSR). Special attention is given to the panel construction and the special problems posed by the characteristics of these panels; the design of the Optical/Thermal Vacuum test facility for structural and thermal testing, developed at the U.S. AFPL; and the testing procedure. The results of the PSR panel test program to date are presented. The test data showed that the analytical approaches used for the panel design and for the prediction of the on-orbit panel behavior were adequate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nuhn, Heinz-Dieter.
The Visual to Infrared SASE Amplifier (VISA) [1] FEL is designed to achieve saturation at radiation wavelengths between 800 and 600 nm with a 4-m pure permanent magnet undulator. The undulator comprises four 99-cm segments each of which has four FODO focusing cells superposed on the beam by means of permanent magnets in the gap alongside the beam. Each segment will also have two beam position monitors and two sets of x-y dipole correctors. The trajectory walk-off in each segment will be reduced to a value smaller than the rms beam radius by means of magnet sorting, precise fabrication, and post-fabrication shimming and trim magnets. However, this leaves possible inter-segment alignment errors. A trajectory analysis code has been used in combination with the FRED3D [2] FEL code to simulate the effect of the shimming procedure and segment alignment errors on the electron beam trajectory and to determine the sensitivity of the FEL gain process to trajectory errors. The paper describes the technique used to establish tolerances for the segment alignment.
Automatic segmentation of colon glands using object-graphs.
Gunduz-Demir, Cigdem; Kandemir, Melih; Tosun, Akif Burak; Sokmensuer, Cenk
2010-02-01
Gland segmentation is an important step in automating the analysis of biopsies that contain glandular structures. However, this remains a challenging problem, as variation in staining, fixation, and sectioning procedures leads to a considerable amount of artifacts and variances in tissue sections, which may result in large variations in gland appearance. In this work, we report a new approach for gland segmentation. This approach decomposes the tissue image into a set of primitive objects and segments glands by making use of the organizational properties of these objects, which are quantified through the definition of object-graphs. As opposed to the previous literature, the proposed approach employs object-based information for the gland segmentation problem, instead of using pixel-based information alone. Working with images of colon tissue, our experiments demonstrate that the proposed object-graph approach yields high segmentation accuracies for the training and test sets and significantly improves the segmentation performance of its pixel-based counterparts. The experiments also show that the object-based structure of the proposed approach provides more tolerance to artifacts and variances in tissues.
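The object-graph idea described above can be illustrated with a small sketch: primitive tissue objects become nodes and spatial proximity defines edges. The object attributes and the radius criterion below are assumptions for illustration only.

```python
import networkx as nx

def build_object_graph(objects, neighbor_radius):
    """Illustrative object-graph: nodes are primitive tissue objects (each a
    dict with a 'centroid' and a 'kind' such as 'nucleus' or 'lumen'); edges
    connect objects whose centroids are closer than neighbor_radius."""
    g = nx.Graph()
    for i, obj in enumerate(objects):
        g.add_node(i, kind=obj["kind"], centroid=obj["centroid"])
    for i in range(len(objects)):
        xi, yi = objects[i]["centroid"]
        for j in range(i + 1, len(objects)):
            xj, yj = objects[j]["centroid"]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= neighbor_radius ** 2:
                g.add_edge(i, j)
    return g
```

Graph-level statistics (for example, node degrees or counts of neighboring object types) can then serve as the organizational features on which the segmentation decisions are based.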
Weberized Mumford-Shah Model with Bose-Einstein Photon Noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen Jianhong, E-mail: jhshen@math.umn.edu; Jung, Yoon-Mo
Human vision works equally well in a large dynamic range of light intensities, from only a few photons to typical midday sunlight. Contributing to such remarkable flexibility is a famous law in perceptual (both visual and aural) psychology and psychophysics known as Weber's Law. The current paper develops a new segmentation model based on the integration of Weber's Law and the celebrated Mumford-Shah segmentation model (Comm. Pure Appl. Math., vol. 42, pp. 577-685, 1989). Explained in detail are issues concerning why the classical Mumford-Shah model lacks light adaptivity, and why its 'weberized' version can more faithfully reflect human vision's superior segmentation capability in a variety of illuminance conditions from dawn to dusk. It is also argued that the popular Gaussian noise model is physically inappropriate for the weberization procedure. As a result, the intrinsic thermal noise of photon ensembles is introduced based on Bose and Einstein's distributions in quantum statistics, which turns out to be compatible with weberization both analytically and computationally. The current paper focuses on both the theory and computation of the weberized Mumford-Shah model with Bose-Einstein noise. In particular, Ambrosio-Tortorelli's Γ-convergence approximation theory is adapted (Boll. Un. Mat. Ital. B, vol. 6, pp. 105-123, 1992), and stable numerical algorithms are developed for the associated pair of nonlinear Euler-Lagrange PDEs.
Chou, Ying-Chao; Lee, Demei; Chang, Tzu-Min; Hsu, Yung-Heng; Yu, Yi-Hsun; Liu, Shih-Jung; Ueng, Steve Wen-Neng
2016-04-20
This study aimed to develop a new biodegradable polymeric cage to convert corticocancellous bone chips into a structured strut graft for treating segmental bone defects. A total of 24 adult New Zealand white rabbits underwent creation of a left femoral segmental bone defect. Twelve rabbits in group A underwent three-dimensional (3D) printed cage insertion, corticocancellous chips implantation, and Kirschner-wire (K-wire) fixation, while the other 12 rabbits in group B received bone chips implantation and K-wire fixation only. All rabbits received a one-week activity assessment and an initial image study at postoperative week 1. The final image study was repeated at postoperative week 12 or 24, before the scheduled sacrifice procedure. After the animals were sacrificed, both femurs of all the rabbits were prepared for leg length ratios and 3-point bending tests. The rabbits in group A showed increased activity during the first postoperative week and decreased anterior cortical disruptions in the postoperative image assessments. Additionally, higher leg length ratios and 3-point bending strengths demonstrated improved final bony ingrowth within the bone defects for rabbits in group A. In conclusion, through this bone graft converting technique, orthopedic surgeons can treat segmental bone defects using bone chips that imitate the characteristics of a structured cortical bone graft.
Psychological Distance to Reward: Effects of S+ Duration and the Delay Reduction It Signals
ERIC Educational Resources Information Center
Alessandri, Jerome; Stolarz-Fantino, Stephanie; Fantino, Edmund
2011-01-01
A concurrent-chains procedure was used to examine choice between segmented (two-component chained schedules) and unsegmented schedules (simple schedules) in terminal links with equal inter-reinforcement intervals. Previous studies using this kind of experimental procedure showed preference for unsegmented schedules for both pigeons and humans. In…
Degenerative changes of the canine cervical spine after discectomy procedures, an in vivo study.
Grunert, Peter; Moriguchi, Yu; Grossbard, Brian P; Ricart Arbona, Rodolfo J; Bonassar, Lawrence J; Härtl, Roger
2017-06-23
Discectomies are a common surgical treatment for disc herniations in the canine spine. However, the effect of these procedures on intervertebral disc tissue is not fully understood. The objective of this study was to assess degenerative changes of cervical spinal segments undergoing discectomy procedures, in vivo. Discectomies led to a 60% drop in disc height and a 24% drop in foraminal height. Segments did not fuse but showed osteophyte formation as well as endplate sclerosis. MR imaging revealed terminal degenerative changes with collapse of the disc space and loss of T2 signal intensity. The endplates showed degenerative type II Modic changes. Quantitative MR imaging revealed that over 95% of nucleus pulposus (NP) tissue was extracted and that nuclear as well as overall disc hydration significantly decreased. Histology confirmed terminal degenerative changes with loss of NP tissue, loss of annulus fibrosus organization and loss of cartilage endplate tissue. The bony endplate displayed sclerotic changes. Discectomies lead to terminal degenerative changes. Therefore, these procedures should be indicated with caution, specifically when performed for prophylactic purposes.
[Target volume segmentation of PET images by an iterative method based on threshold value].
Castro, P; Huerga, C; Glaría, L A; Plaza, R; Rodado, S; Marín, M D; Mañas, A; Serrada, A; Núñez, L
2014-01-01
An automatic segmentation method is presented for PET images, based on an iterative threshold approximation that includes the influence of both lesion size and the background present during the acquisition. Optimal threshold values that produce a correct segmentation of volumes were determined from a PET phantom study containing spheres of different sizes in different known background conditions. These optimal values were normalized to the background and fitted, by regression techniques, to a two-variable function of lesion volume and signal-to-background ratio (SBR). This fitted function was used to build an iterative segmentation method and, on that basis, an automatic delineation procedure was proposed. The procedure was validated on phantom images and its viability was confirmed by applying it retrospectively to two oncology patients. The resulting fitted function had a linear dependence on the SBR and an inverse, negative dependence on the volume. During validation of the proposed method, the volume deviations with respect to the real value and to the CT volume were below 10% and 9%, respectively, except for lesions with a volume below 0.6 ml. The proposed automatic segmentation method can be applied in clinical practice to tumor radiotherapy treatment planning in a simple and reliable way, with a precision close to the resolution of PET images. Copyright © 2013 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
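The iterative scheme described above can be sketched as a short loop in which the threshold is recomputed from the current volume estimate and the signal-to-background ratio until the segmented volume stabilizes. The calibration function f and the voxel size below are assumptions, not the fitted phantom model.

```python
import numpy as np

def iterative_threshold_segmentation(pet, background, f, max_iter=50, tol=1e-3):
    """Illustrative iterative thresholding: the threshold (as a fraction of the
    lesion maximum) is given by a calibration function f(volume_ml, sbr) fitted
    on phantom data; iterate until the segmented volume stabilizes."""
    voxel_ml = 4.0 * 4.0 * 4.0 / 1000.0                      # assumed 4 mm isotropic voxels
    peak = pet.max()
    volume = np.count_nonzero(pet > 0.5 * peak) * voxel_ml   # initial guess: 50% of max
    mask = pet > 0.5 * peak
    for _ in range(max_iter):
        sbr = peak / background
        thr = f(volume, sbr) * peak          # threshold depends on volume and SBR
        mask = pet > thr
        new_volume = np.count_nonzero(mask) * voxel_ml
        if abs(new_volume - volume) < tol:
            break
        volume = new_volume
    return mask, volume
```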
Aramburu, Jorge; Antón, Raúl; Rivas, Alejandro; Ramos, Juan Carlos; Sangro, Bruno; Bilbao, José Ignacio
2016-11-07
Liver radioembolization is a treatment option for patients with primary and secondary liver cancer. The procedure consists of injecting radiation-emitting microspheres via an intra-arterially placed microcatheter, enabling the deposition of the microspheres in the tumoral bed. The microcatheter location and the particle injection rate are determined during a pretreatment work-up. The purpose of this study was to numerically study the effects of the injection characteristics during the first stage of microsphere travel through the bloodstream in a patient-specific hepatic artery (i.e., the near-tip particle-hemodynamics and the segment-to-segment particle distribution). Specifically, the influence of the distal direction of an end-hole microcatheter and particle injection point and velocity were analyzed. Results showed that the procedure targeted the right lobe when injecting from two of the three injection points under study and the remaining injection point primarily targeted the left lobe. Changes in microcatheter direction and injection velocity resulted in an absolute difference in exiting particle percentage for a given liver segment of up to 20% and 30%, respectively. It can be concluded that even though microcatheter placement is presumably reproduced in the treatment session relative to the pretreatment angiography, the treatment may result in undesired segment-to-segment particle distribution and therefore undesired treatment outcomes due to modifications of any of the parameters studied, i.e., microcatheter direction and particle injection point and velocity. Copyright © 2016 Elsevier Ltd. All rights reserved.
Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye
2017-10-01
Segmentation of the arterial wall boundaries from intravascular ultrasound (IVUS) images is an important image processing task for quantifying arterial wall characteristics such as shape, area, thickness and eccentricity. Since manual segmentation of these boundaries is a laborious and time-consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as a powerful tool for educational and clinical purposes, but as yet there is no clinically approved method on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection using a fourfold algorithm. First, a smoothed initial contour is extracted based on classification in the sparse representation framework, combined with the dynamic directional convolution vector field. Next, an active contour model is utilized to propagate the initial contour toward the borders of interest. Finally, the extracted contour is refined in the leakage, side-branch opening and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four patients. The statistical analysis of the results demonstrates the efficiency of the proposed method for media-adventitia border detection, with sufficient consistency in the leakage and calcification regions. Copyright © 2017 Elsevier Ltd. All rights reserved.
An automated retinal imaging method for the early diagnosis of diabetic retinopathy.
Franklin, S Wilfred; Rajan, S Edward
2013-01-01
Diabetic retinopathy is a microvascular complication of long-term diabetes and is the major cause of eyesight loss due to changes in the blood vessels of the retina. Major vision loss due to diabetic retinopathy is highly preventable with regular screening and timely intervention at the earlier stages. Retinal blood vessel segmentation methods help to identify the successive stages of such sight-threatening diseases. The aim was to develop and test a novel retinal imaging method that automatically segments the blood vessels in retinal images, helping ophthalmologists in the diagnosis and follow-up of diabetic retinopathy. The method classifies each image pixel as vessel or nonvessel, which is then used for automatic recognition of the vasculature in retinal images. Retinal blood vessels were identified by means of a multilayer perceptron neural network whose inputs were derived from Gabor and moment invariants-based features. The back-propagation algorithm, which provides an efficient technique for updating the weights of a feed-forward network, is used for training. Quantitative results for sensitivity, specificity and predictive values were obtained, and the measured accuracy of the segmentation algorithm was 95.3%, which is better than that reported by state-of-the-art approaches. The evaluation procedure used and the demonstrated effectiveness of the automated retinal imaging method make it a powerful tool for diagnosing diabetic retinopathy in the earlier stages.
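A minimal sketch of this pixel-classification idea, assuming scikit-image and scikit-learn are available: Gabor filter responses serve as per-pixel features (the moment-invariant features from the paper are omitted) and a small multilayer perceptron trained with a backpropagation-based solver labels each pixel as vessel or nonvessel. The image, labels and network size are toy placeholders.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.neural_network import MLPClassifier

def gabor_features(green_channel, frequencies=(0.1, 0.2),
                   thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Per-pixel Gabor magnitude responses (a subset of the features used in the paper)."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(green_channel, frequency=f, theta=t)
            feats.append(np.hypot(real, imag))
    return np.stack(feats, axis=-1)  # H x W x n_features

# toy data: a random image and random "vessel" labels stand in for a labelled training set
rng = np.random.default_rng(0)
img = rng.random((64, 64))
labels = (rng.random((64, 64)) > 0.9).astype(int)

X = gabor_features(img).reshape(-1, 8)
y = labels.ravel()
clf = MLPClassifier(hidden_layer_sizes=(15,), max_iter=300, random_state=0).fit(X, y)
vessel_map = clf.predict(X).reshape(img.shape)   # 1 = vessel, 0 = nonvessel
```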
Semiautomatic segmentation of the heart from CT images based on intensity and morphological features
NASA Astrophysics Data System (ADS)
Redwood, Abena B.; Camp, Jon J.; Robb, Richard A.
2005-04-01
The incidence of certain types of cardiac arrhythmias is increasing. Effective, minimally invasive treatment has remained elusive. Pharmacologic treatment has been limited by drug intolerance and recurrence of disease. Catheter based ablation has been moderately successful in treating certain types of cardiac arrhythmias, including typical atrial flutter and fibrillation, but there remains a relatively high rate of recurrence. Additional side effects associated with cardiac ablation procedures include stroke, perivascular lung damage, and skin burns caused by x-ray fluoroscopy. Access to patient specific 3-D cardiac images has potential to significantly improve the process of cardiac ablation by providing the physician with a volume visualization of the heart. This would facilitate more effective guidance of the catheter, increase the accuracy of the ablative process, and eliminate or minimize the damage to surrounding tissue. In this study, a semiautomatic method for faithful cardiac segmentation was investigated using Analyze - a comprehensive processing software package developed at the Biomedical Imaging Resource, Mayo Clinic. This method included use of interactive segmentation based on math morphology and separation of the chambers based on morphological connections. The external surfaces of the hearts were readily segmented, while accurate separation of individual chambers was a challenge. Nonetheless, a skilled operator could manage the task in a few minutes. Useful improvements suggested in this paper would give this method a promising future.
Hilbert, Sebastian; Sommer, Philipp; Gutberlet, Matthias; Gaspar, Thomas; Foldyna, Borek; Piorkowski, Christopher; Weiss, Steffen; Lloyd, Thomas; Schnackenburg, Bernhard; Krueger, Sascha; Fleiter, Christian; Paetsch, Ingo; Jahnke, Cosima; Hindricks, Gerhard; Grothoff, Matthias
2016-04-01
Recently, cardiac magnetic resonance (CMR) imaging has been found feasible for visualization of the underlying substrate of cardiac arrhythmias as well as for visualization of cardiac catheters during diagnostic and ablation procedures. Real-time CMR-guided cavotricuspid isthmus ablation was performed in a series of six patients using a combination of active catheter tracking and catheter visualization with real-time MR imaging. Cardiac magnetic resonance on a 1.5 T system was performed in patients under deep propofol sedation. A three-dimensional whole-heart sequence with navigator technique and a fast automated segmentation algorithm was used for online segmentation of all cardiac chambers, which were thereafter displayed on a dedicated image guidance platform. In three of six patients, complete isthmus block was achieved in the MR scanner; two of these patients did not need any additional fluoroscopy. In the first patient, technical issues called for completion of the procedure in a conventional laboratory; in another two patients, the isthmus was partially blocked by magnetic resonance imaging (MRI)-guided ablation. The mean procedural time for the MR procedure was 109 ± 58 min. Intubation of the coronary sinus (CS) was performed within a mean time of 2.75 ± 2.21 min. Total fluoroscopy time for completion of the isthmus block ranged from 0 to 7.5 min. The combination of active catheter tracking and passive real-time visualization in CMR-guided electrophysiologic (EP) studies using advanced interventional hardware and software was safe and enabled efficient navigation, mapping, and ablation. These cases demonstrate significant progress in the development of MR-guided EP procedures. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
Wang, Y; Wang, C; Zhang, Z
2018-05-01
Automated cell segmentation plays a key role in characterising cell behaviours for both biological research and clinical practice. Currently, the segmentation of clustered cells remains a challenge and is the main source of false segmentation. In this study, the emphasis was put on the segmentation of clustered cells in negative phase contrast images. A new method was proposed to combine light intensity and cell shape information through the construction of a grey-weighted distance transform (GWDT) within preliminarily segmented areas. With the constructed GWDT, clustered cells can be detected and then separated with a modified region skeleton-based method. Moreover, a contour expansion operation was applied to obtain an optimised detection of cell boundaries. In this paper, the working principle and detailed procedure of the proposed method are described, followed by an evaluation of the method on clustered cell segmentation. Results show that the proposed method achieves improved performance in clustered cell segmentation compared with other methods, with accuracy rates of 85.8% and 97.16% for clustered cells and all cells, respectively. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
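A rough sketch of the splitting idea, assuming scikit-image and SciPy: the Euclidean distance transform of the preliminary mask is weighted by inverted intensity as a crude stand-in for the paper's GWDT, and a marker-based watershed separates touching cells in place of the modified region-skeleton step.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_clustered_cells(intensity, mask, min_distance=5):
    """Rough stand-in for the paper's GWDT: weight the Euclidean distance map by
    normalized (inverted) intensity, then split touching cells with a marker-based watershed."""
    edt = ndi.distance_transform_edt(mask)
    inten = (intensity - intensity.min()) / (np.ptp(intensity) + 1e-9)
    gwdt = edt * (1.0 - inten)          # darker interiors (negative phase contrast) score higher
    peaks = peak_local_max(gwdt, min_distance=min_distance, labels=mask.astype(int))
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-gwdt, markers, mask=mask)

# toy example: two overlapping disks as a "clustered cell" region
yy, xx = np.mgrid[0:60, 0:60]
mask = ((yy - 30) ** 2 + (xx - 22) ** 2 < 144) | ((yy - 30) ** 2 + (xx - 40) ** 2 < 144)
labels = split_clustered_cells(np.zeros(mask.shape), mask)
print(labels.max(), "cells found")
```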
Code of Federal Regulations, 2014 CFR
2014-04-01
... 23 Highways 1 2014-04-01 2014-04-01 false Designation of Segments of Section 332(a)(2) Corridors as Parts of the Interstate System B Appendix B to Subpart A of Part 470 Highways FEDERAL HIGHWAY...) Corridors as Parts of the Interstate System The following guidance is comparable to current procedures for...
1989-03-01
[Figure residue: Automated Photointerpretation Testbed block diagram (knowledge database, inference engine, image database); Fig. 4.1.1-2, an initial segmentation of an image.] ... Markov random field (MRF) theory provides a powerful alternative texture model and has resulted in intensive research activity in MRF model-based texture analysis ... interpretation process. (5) Additional, and perhaps more powerful, features have to be incorporated into the image segmentation procedure. (6) Object detection ...
Group 4: Instructor training and qualifications
NASA Technical Reports Server (NTRS)
Sessa, R.
1981-01-01
Each professional instructor or check airman used in a LOFT training course should complete an FAA-approved training course in the appropriate aircraft type. Instructors used in such courses need not be type-rated. If an instructor or check airman who is not presently line-qualified is used as a LOFT instructor, he or she should remain current in line-operational procedures by observing operating procedures from the jump seat on three typical line segments per 90 days on the appropriate aircraft type. ("Line qualification" means completion as a flight crew member of at least three typical line segments per 90 days on the appropriate aircraft type.) The training should include the requirement of four hours of LOFT training, in lieu of actual aircraft training or line operating experience.
Aeschlimann, Kimberly A; Mann, F A; Middleton, John R; Belter, Rebecca C
2018-05-01
OBJECTIVE To determine whether stored (cooled or frozen-thawed) jejunal segments can be used to obtain dependable leak pressure data after enterotomy closure. SAMPLE 36 jejunal segments from 3 juvenile pigs. PROCEDURES Jejunal segments were harvested from euthanized pigs and assigned to 1 of 3 treatment groups (n = 12 segments/group) as follows: fresh (used within 4 hours after collection), cooled (stored overnight at 5°C before use), and frozen-thawed (frozen at -12°C for 8 days and thawed at room temperature [23°C] for 1 hour before use). Jejunal segments were suspended and 2-cm enterotomy incisions were made on the antimesenteric border. Enterotomies were closed with a simple continuous suture pattern. Lactated Ringer solution was infused into each segment until failure at the suture line was detected. Leak pressure was measured by use of a digital transducer. RESULTS Mean ± SD leak pressure for fresh, cooled, and frozen-thawed segments was 68.3 ± 23.7 mm Hg, 55.3 ± 28.1 mm Hg, and 14.4 ± 14.8 mm Hg, respectively. Overall, there were no significant differences in mean leak pressure among pigs, but a significant difference in mean leak pressure was detected among treatment groups. Mean leak pressure was significantly lower for frozen-thawed segments than for fresh or cooled segments, but mean leak pressure did not differ significantly between fresh and cooled segments. CONCLUSIONS AND CLINICAL RELEVANCE Fresh porcine jejunal segments or segments cooled overnight may be used for determining intestinal leak pressure, but frozen-thawed segments should not be used.
A hierarchical stress release model for synthetic seismicity
NASA Astrophysics Data System (ADS)
Bebbington, Mark
1997-06-01
We construct a stochastic dynamic model for synthetic seismicity involving stochastic stress input, release, and transfer in an environment of heterogeneous strength and interacting segments. The model is not fault-specific, having a number of adjustable parameters with physical interpretation, namely, stress relaxation, stress transfer, stress dissipation, segment structure, strength, and strength heterogeneity, which affect the seismicity in various ways. Local parameters are chosen to be consistent with large historical events, other parameters to reproduce bulk seismicity statistics for the fault as a whole. The one-dimensional fault is divided into a number of segments, each comprising a varying number of nodes. Stress input occurs at each node in a simple random process, representing the slow buildup due to tectonic plate movements. Events are initiated, subject to a stochastic hazard function, when the stress on a node exceeds the local strength. An event begins with the transfer of excess stress to neighboring nodes, which may in turn transfer their excess stress to the next neighbor. If the event grows to include the entire segment, then most of the stress on the segment is transferred to neighboring segments (or dissipated) in a characteristic event. These large events may themselves spread to other segments. We use the Middle America Trench to demonstrate that this model, using simple stochastic stress input and triggering mechanisms, can produce behavior consistent with the historical record over five units of magnitude. We also investigate the effects of perturbing various parameters in order to show how the model might be tailored to a specific fault structure. The strength of the model lies in this ability to reproduce the behavior of a general linear fault system through the choice of a relatively small number of parameters. It remains to develop a procedure for estimating the internal state of the model from the historical observations in order to use the model for forward prediction.
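A toy re-implementation of the kind of mechanism described above can make the event-generation logic concrete. This is a generic stress-release cascade on a 1D chain of nodes with arbitrary parameter values, not the paper's calibrated stress relaxation, transfer, dissipation and strength settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_nodes=50, steps=20000, strength=10.0, hetero=0.2,
             input_rate=0.01, transfer=0.8, dissipation=0.1):
    """Minimal 1D stress-release toy model: random stress input at each node,
    rupture when stress exceeds local strength, and excess stress passed to
    neighbours with some dissipation."""
    local_strength = strength * (1 + hetero * rng.standard_normal(n_nodes))
    stress = np.zeros(n_nodes)
    events = []
    for t in range(steps):
        stress += rng.exponential(input_rate, n_nodes)   # slow tectonic loading
        broken = np.flatnonzero(stress > local_strength)
        if broken.size == 0:
            continue
        ruptured = set()
        front = list(broken)
        while front:                                     # cascade of stress transfer
            i = front.pop()
            if i in ruptured:
                continue
            ruptured.add(i)
            excess = stress[i] - 0.1 * local_strength[i]
            stress[i] -= excess
            for j in (i - 1, i + 1):
                if 0 <= j < n_nodes:
                    stress[j] += 0.5 * transfer * (1 - dissipation) * excess
                    if stress[j] > local_strength[j]:
                        front.append(j)
        events.append((t, len(ruptured)))                # "magnitude" ~ rupture length
    return events

events = simulate()
sizes = np.array([s for _, s in events])
print(len(events), "events; largest rupture spans", sizes.max(), "nodes")
```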
Design and validation of Segment--freely available software for cardiovascular image analysis.
Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan
2010-01-11
Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.
Developing suitable methods for effective characterization of electrical properties of root segments
NASA Astrophysics Data System (ADS)
Ehosioke, Solomon; Phalempin, Maxime; Garré, Sarah; Kemna, Andreas; Huisman, Sander; Javaux, Mathieu; Nguyen, Frédéric
2017-04-01
The root system represents the hidden half of the plant, which plays a key role in food production and therefore needs to be well understood. Root system characterization has been a great challenge because the roots are buried in the soil. This, coupled with the subsurface heterogeneity and the transient nature of the biogeochemical processes that occur in the root zone, makes it difficult to access and monitor the root system over time. The traditional method of point sampling (root excavation, monoliths, minirhizotrons, etc.) for root investigation does not account for the transient nature and spatial variability of the root zone, and it often disturbs the natural system under investigation. The quest to overcome these challenges has led to an increase in the application of geophysical methods. Recent studies have shown a correlation between bulk electrical resistivity and root mass density, but an understanding of the contribution of the individual segments of the root system to that bulk signal is still missing. This study is an attempt to understand the electrical properties of roots at the segment scale (1-5 cm) for more effective characterization of the electrical signal of the full root architecture. The target plants were grown in three different media (pot soil, hydroponics, and a mixture of sand, perlite and vermiculite). Resistance measurements were carried out on a single segment of each study plant using a voltmeter, while the diameter was measured using a digital calliper. The axial resistance was calculated using the measured resistance and the geometric parameters. This procedure was repeated for each plant replica over a period of 75 days, which enabled us to study the effects of age, growth medium, diameter and length on the electrical response of the root segments of the selected plants. The growth medium was found to have a significant effect on the root electrical response, while the effect of root diameter on the electrical response was found to vary among the plants. More work is still required to further validate these results and also to develop better systems to study the electrical behaviour of root segments. Findings from our review entitled "An overview of the geophysical approach to root investigation" suggest that SIP and EIT geophysical methods could be very useful for root investigations; thus more work is in progress to develop these systems for assessing the root electrical response at various scales.
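For the normalization step mentioned above, here is a short sketch of how a measured end-to-end resistance and the segment geometry can be combined into an axial quantity. It treats the segment as a uniform cylinder and is one plausible formulation, not necessarily the exact one used in the study.

```python
import math

def axial_resistivity(resistance_ohm, diameter_mm, length_cm):
    """Axial resistivity (ohm*m) of a root segment treated as a uniform cylinder.
    One plausible normalization of the measured resistance by the segment geometry;
    the study's exact formulation may differ."""
    area_m2 = math.pi * (diameter_mm * 1e-3 / 2) ** 2
    length_m = length_cm * 1e-2
    return resistance_ohm * area_m2 / length_m

# e.g. a 3 cm segment, 1.2 mm diameter, measuring 250 kOhm end to end
print(axial_resistivity(2.5e5, 1.2, 3.0))   # ohm*m
```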
NASA Technical Reports Server (NTRS)
Dejarnette, F. R.
1984-01-01
Concepts to save fuel while preserving airport capacity by combining time-based metering with profile descent procedures were developed. A computer algorithm was developed to provide the flight crew with the information needed to fly from an entry fix to a metering fix and arrive there at a predetermined time, altitude, and airspeed. The flight from the metering fix to an aim point near the airport was also calculated. The flight path is divided into several descent and deceleration segments. Descents are performed at constant Mach number or calibrated airspeed, whereas decelerations occur at constant altitude. The time and distance associated with each segment are calculated from point-mass equations of motion for a clean configuration with idle thrust. Wind and nonstandard atmospheric properties have a large effect on the flight path. It was found that uncertainty in the descent Mach number has a large effect on the predicted flight time. Of the possible combinations of Mach number and calibrated airspeed for a descent, only small changes were observed in the fuel consumed.
Automated skin segmentation in ultrasonic evaluation of skin toxicity in breast cancer radiotherapy.
Gao, Yi; Tannenbaum, Allen; Chen, Hao; Torres, Mylin; Yoshida, Emi; Yang, Xiaofeng; Wang, Yuefeng; Curran, Walter; Liu, Tian
2013-11-01
Skin toxicity is the most common side effect of breast cancer radiotherapy and impairs the quality of life of many breast cancer survivors. We, along with other researchers, have recently found quantitative ultrasound to be effective as a skin toxicity assessment tool. Although more reliable than standard clinical evaluations (visual observation and palpation), the current procedure for ultrasound-based skin toxicity measurements requires manual delineation of the skin layers (i.e., epidermis-dermis and dermis-hypodermis interfaces) on each ultrasound B-mode image. Manual skin segmentation is time consuming and subjective. Moreover, radiation-induced skin injury may decrease image contrast between the dermis and hypodermis, which increases the difficulty of delineation. Therefore, we have developed an automatic skin segmentation tool (ASST) based on the active contour model with two significant modifications: (i) The proposed algorithm introduces a novel dual-curve scheme for the double skin layer extraction, as opposed to the original single active contour method. (ii) The proposed algorithm is based on a geometric contour framework as opposed to the previous parametric algorithm. This ASST algorithm was tested on a breast cancer image database of 730 ultrasound breast images (73 ultrasound studies of 23 patients). We compared skin segmentation results obtained with the ASST with manual contours performed by two physicians. The average percentage differences in skin thickness between the ASST measurement and that of each physician were less than 5% (4.8 ± 17.8% and -3.8 ± 21.1%, respectively). In summary, we have developed an automatic skin segmentation method that ensures objective assessment of radiation-induced changes in skin thickness. Our ultrasound technology offers a unique opportunity to quantify tissue injury in a more meaningful and reproducible manner than the subjective assessments currently employed in the clinic. Copyright © 2013 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
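A hedged sketch of the dual-curve, geometric-contour idea using scikit-image's morphological geodesic active contour as a stand-in for the ASST described above: two contours are initialized as horizontal bands near the expected epidermis-dermis and dermis-hypodermis depths and evolved independently on a synthetic B-mode-like image. The initialization depths, smoothing and iteration counts are illustrative only.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

def segment_skin_layers(bmode, epi_row, derm_row, n_iter=100):
    """Two geometric (morphological) active contours evolved independently as a
    stand-in for the paper's dual-curve scheme; epi_row/derm_row are rough initial
    depths (in pixels) of the epidermis-dermis and dermis-hypodermis interfaces."""
    gimage = inverse_gaussian_gradient(gaussian(bmode, 2.0))
    def band(row, half_width=5):
        init = np.zeros_like(bmode, dtype=np.int8)
        init[row - half_width: row + half_width, :] = 1
        return init
    epi = morphological_geodesic_active_contour(gimage, n_iter,
                                                init_level_set=band(epi_row))
    derm = morphological_geodesic_active_contour(gimage, n_iter,
                                                 init_level_set=band(derm_row))
    return epi, derm

# toy image: two horizontal intensity steps standing in for skin-layer interfaces
img = np.zeros((120, 200)); img[30:70, :] = 0.6; img[70:, :] = 0.3
epi_mask, derm_mask = segment_skin_layers(img, epi_row=30, derm_row=70)
```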
Chiang, Michael; Hallman, Sam; Cinquin, Amanda; de Mochel, Nabora Reyes; Paz, Adrian; Kawauchi, Shimako; Calof, Anne L; Cho, Ken W; Fowlkes, Charless C; Cinquin, Olivier
2015-11-25
Analysis of single cells in their native environment is a powerful method to address key questions in developmental systems biology. Confocal microscopy imaging of intact tissues, followed by automatic image segmentation, provides a means to conduct cytometric studies while at the same time preserving crucial information about the spatial organization of the tissue and morphological features of the cells. This technique is rapidly evolving but is still not in widespread use among research groups that do not specialize in technique development, perhaps in part for lack of tools that automate repetitive tasks while allowing experts to make the best use of their time in injecting their domain-specific knowledge. Here we focus on a well-established stem cell model system, the C. elegans gonad, as well as on two other model systems widely used to study cell fate specification and morphogenesis: the pre-implantation mouse embryo and the developing mouse olfactory epithelium. We report a pipeline that integrates machine-learning-based cell detection, fast human-in-the-loop curation of these detections, and running of active contours seeded from detections to segment cells. The procedure can be bootstrapped by a small number of manual detections, and outperforms alternative pieces of software we benchmarked on C. elegans gonad datasets. Using cell segmentations to quantify fluorescence contents, we report previously-uncharacterized cell behaviors in the model systems we used. We further show how cell morphological features can be used to identify cell cycle phase; this provides a basis for future tools that will streamline cell cycle experiments by minimizing the need for exogenous cell cycle phase labels. High-throughput 3D segmentation makes it possible to extract rich information from images that are routinely acquired by biologists, and provides insights - in particular with respect to the cell cycle - that would be difficult to derive otherwise.
Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei
2016-01-01
Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images, and the accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data, a method for selecting optimal segmentation parameters for object-oriented image segmentation and high-resolution image information extraction was studied as follows. First, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and a combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762
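To make the scale-selection idea concrete, here is a generic sketch (not the paper's improved weighted mean-variance formulation) that scores candidate segmentation scales by the area-weighted variance of pixel values within segments; a regular grid stands in for an actual multi-scale segmenter purely to keep the example self-contained.

```python
import numpy as np

def weighted_mean_variance(image, labels):
    """Area-weighted variance of pixel values within segments: a simple homogeneity
    score; the paper's 'improved weighted mean-variance' refines this idea."""
    score = 0.0
    for lab in np.unique(labels):
        region = image[labels == lab]
        score += region.size * region.var()
    return score / labels.size

def grid_segmentation(shape, block):
    """Toy stand-in for a multi-scale segmenter: a regular grid of block x block segments."""
    rows, cols = np.indices(shape)
    return (rows // block) * (shape[1] // block + 1) + (cols // block)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
for block in (4, 8, 16, 32):                 # candidate "scales"
    labels = grid_segmentation(img.shape, block)
    print(block, round(weighted_mean_variance(img, labels), 4))
```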
Shape-driven 3D segmentation using spherical wavelets.
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2006-01-01
This paper presents a novel active surface segmentation algorithm using a multiscale shape representation and prior. We define a parametric model of a surface using spherical wavelet functions and learn a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior in the segmentation framework. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to the segmentation of brain caudate nucleus, of interest in the study of schizophrenia. Our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm by capturing finer shape details.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruland, Robert
The Visible-Infrared SASE Amplifier (VISA) undulator consists of four 99 cm long segments. Each undulator segment is set up on a pulsed-wire bench to characterize the magnetic properties and to locate the magnetic axis of the FODO array. Subsequently, the location of the magnetic axis, as defined by the wire, is referenced to tooling balls on each magnet segment by means of a straightness interferometer. After installation in the vacuum chamber, the four magnet segments are aligned with respect to each other and globally to the beam line reference laser. A specially designed alignment fixture is used to mount one straightness interferometer each in the horizontal and vertical plane of the beam. The goal of these procedures is to keep the combined rms trajectory error, due to magnetic and alignment errors, to 50 µm.
Selective thoracic ganglionectomy for the treatment of segmental neuropathic pain.
Weigel, R; Capelle, H H; Schmelz, M; Krauss, J K
2012-11-01
Segmental thoracic neuropathic pain (NeuP) remains particularly difficult to treat. Sensory ganglionectomy was reported to alleviate NeuP. The experience with thoracic ganglionectomy, however, is very limited. Here, we report the results of a prospective pilot study in patients with incapacitating segmental thoracic NeuP treated by selective ganglionectomy. Seven patients were included suffering from refractory NeuP scoring 8 or more on a visual analogue scale (VAS). Every patient had test anaesthesia prior to surgery yielding more than 50% pain relief. The spinal ganglion was excised completely via an extraforaminal approach. Mean preoperative VAS scores were 9.1 (maximum pain); 5.4 (minimum pain); 7.9 (pain on average); 6.9 (pain at the time of presentation); and 7.4 (allodynia). Early post-operatively, there was a marked improvement of mean scores: 1.7; 0.7; 1.2; 1.0; and 0.7, respectively. One patient developed a mild transient hemihypaesthesia. In three patients, substantial pain occurred in a formerly unaffected dermatome within 1 year. Two of these patients had significant pain relief by a second operation. At the time of last follow-up at a mean of 24 months after the first procedure, mean VAS scores were 6.3; 2.1; 4.3; 4.0; and 1.3. Overall, medication was reduced. The patients rated their outcome as excellent (1), good (2), fair (2) and nil (2) with best improvement for allodynia. Selective thoracic ganglionectomy is a safe and partially effective procedure in selected patients albeit there may be partial recurrence of pain. Recurrent pain may affect dermatomes that were not involved initially. © 2012 European Federation of International Association for the Study of Pain Chapters.
NASA Technical Reports Server (NTRS)
1997-01-01
Under a Small Business Innovation Research contract from Marshall Space Flight Center, Ultrafast, Inc. developed the world's first high-temperature-resistant "intelligent" fastener. NASA needed a critical-fastening appraisal and validation of spacecraft segments that are coupled together in space. The intelligent-bolt technology eliminates the self-defeating procedure of having to loosen the fastener, and thus upset the joint, during inspection and maintenance. The Ultrafast solution yielded an innovation that is likely to revolutionize manufacturing assembly, particularly in the automobile industry. Other areas of application range from aircraft, computers and forklifts to offshore platforms, buildings, and bridges.
NASA Technical Reports Server (NTRS)
1981-01-01
General information and administrative instructions are provided for individuals gathering ground truth data to support research and development of techniques for estimating crop acreage and production by satellite remote sensing. Procedures are given for personal safety with regard to organophosphorus insecticides, for conducting interviews for periodic observations, for coding the crops identified and their growth stages, and for selecting sites for placing rain gages. Forms are included for those citizens agreeing to monitor the gages and record the rainfall. Segment selection is also considered.
Gompelmann, Daniela; Shah, Pallav L; Valipour, Arschang; Herth, Felix J F
2018-06-12
Bronchoscopic thermal vapor ablation (BTVA) represents one of the endoscopic lung volume reduction (ELVR) techniques that aims at hyperinflation reduction in patients with advanced emphysema to improve respiratory mechanics. By targeted segmental vapor ablation, an inflammatory response leads to tissue and volume reduction of the most diseased emphysematous segments. So far, BTVA has been demonstrated in several single-arm trials and 1 multinational randomized controlled trial to improve lung function, exercise capacity, and quality of life in patients with upper lobe-predominant emphysema irrespective of the collateral ventilation. In this review, we emphasize the practical aspects of this ELVR method. Patients with upper lobe-predominant emphysema, forced expiratory volume in 1 second (FEV1) between 20 and 45% of predicted, residual volume (RV) > 175% of predicted, and carbon monoxide diffusing capacity (DLCO) ≥20% of predicted can be considered for BTVA treatment. Prior to the procedure, a special software assists in identifying the target segments with the highest emphysema index, volume and the highest heterogeneity index to the untreated ipsilateral lung lobes. The procedure may be performed under deep sedation or preferably under general anesthesia. After positioning of the BTVA catheter and occlusion of the target segment by the occlusion balloon, heated water vapor is delivered in a predetermined specified time according to the vapor dose. After the procedure, patients should be strictly monitored to proactively detect symptoms of localized inflammatory reaction that may temporarily worsen the clinical status of the patient and to detect complications. As the data are still very limited, BTVA should be performed within clinical trials or comprehensive registries where the product is commercially available. © 2018 S. Karger AG, Basel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoven, Andor F. van den, E-mail: a.f.vandenhoven@umcutrecht.nl; Prince, Jip F.; Keizer, Bart de
Purpose: To optimize a C-arm computed tomography (CT) protocol for radioembolization (RE), specifically for extrahepatic shunting and parenchymal enhancement. Materials and Methods: A prospective development study was performed per IDEAL recommendations. A literature-based protocol was applied in patients with unresectable and chemorefractory liver malignancies undergoing an angiography before radioembolization. Contrast and scan settings were adjusted stepwise and repeatedly reviewed in a consensus meeting. Afterwards, two independent raters analyzed all scans. A third rater evaluated the SPECT/CT scans as a reference standard for extrahepatic shunting and lack of target segment perfusion. Results: Fifty scans were obtained in 29 procedures. The first protocol, using a 6 s delay and 10 s scan, showed insufficient parenchymal enhancement. In the second protocol, the delay was determined by timing parenchymal enhancement on DSA power injection (median 8 s, range 4–10 s): enhancement improved, but breathing artifacts increased (from 0 to 27 %). Since the third protocol with a 5 s scan degraded subjective image quality, the second protocol was deemed optimal. Median CNR (range) was 1.7 (0.6–3.2), 2.2 (−1.4–4.0), and 2.1 (−0.3–3.0) for protocols 1, 2, and 3 (p = 0.80). Delineation of perfused segments was possible in 57, 73, and 44 % of scans (p = 0.13). In all C-arm CTs combined, the negative predictive value was 95 % for extrahepatic shunting and 83 % for lack of target segment perfusion. Conclusion: An optimized C-arm CT protocol was developed that can be used to detect extrahepatic shunts and non-perfusion of target segments during RE.
Preference mapping of lemon lime carbonated beverages with regular and diet beverage consumers.
Leksrisompong, P P; Lopetcharat, K; Guthrie, B; Drake, M A
2013-02-01
The drivers of liking of lemon-lime carbonated beverages were investigated with regular and diet beverage consumers. Ten beverages were selected from a category survey of commercial beverages using a D-optimal procedure. Beverages were subjected to consumer testing (n = 101 regular beverage consumers, n = 100 diet beverage consumers). Segmentation of consumers was performed on overall liking scores followed by external preference mapping of selected samples. Diet beverage consumers liked 2 diet beverages more than regular beverage consumers. There were no differences in the overall liking scores between diet and regular beverage consumers for other products except for a sparkling beverage sweetened with juice which was more liked by regular beverage consumers. Three subtle but distinct consumer preference clusters were identified. Two segments had evenly distributed diet and regular beverage consumers but one segment had a greater percentage of regular beverage consumers (P < 0.05). The 3 preference segments were named: cluster 1 (C1) sweet taste and carbonation mouthfeel lovers, cluster 2 (C2) carbonation mouthfeel lovers, sweet and bitter taste acceptors, and cluster 3 (C3) bitter taste avoiders, mouthfeel and sweet taste lovers. User status (diet or regular beverage consumers) did not have a large impact on carbonated beverage liking. Instead, mouthfeel attributes were major drivers of liking when these beverages were tested in a blind tasting. Preference mapping of lemon-lime carbonated beverage with diet and regular beverage consumers allowed the determination of drivers of liking of both populations. The understanding of how mouthfeel attributes, aromatics, and basic tastes impact liking or disliking of products was achieved. Preference drivers established in this study provide product developers of carbonated lemon-lime beverages with additional information to develop beverages that may be suitable for different groups of consumers. © 2013 Institute of Food Technologists®
Tellez, Armando; Rousselle, Serge; Palmieri, Taylor; Rate, William R; Wicks, Joan; Degrange, Ashley; Hyon, Chelsea M; Gongora, Carlos A; Hart, Randy; Grundy, Will; Kaluza, Greg L; Granada, Juan F
2013-12-01
Catheter-based renal artery denervation has demonstrated to be effective in decreasing blood pressure among patients with refractory hypertension. The anatomic distribution of renal artery nerves may influence the safety and efficacy profile of this procedure. We aimed to describe the anatomic distribution and density of periarterial renal nerves in the porcine model. Thirty arterial renal sections were included in the analysis by harvesting a tissue block containing the renal arteries and perirenal tissue from each animal. Each artery was divided into 3 segments (proximal, mid, and distal) and assessed for total number, size, and depth of the nerves according to the location. Nerve counts were greatest proximally (45.62% of the total nerves) and decreased gradually distally (mid, 24.58%; distal, 29.79%). The distribution in nerve size was similar across all 3 sections (∼40% of the nerves, 50-100 μm; ∼30%, 0-50 μm; ∼20%, 100-200 μm; and ∼10%, 200-500 μm). In the arterial segments ∼45% of the nerves were located within 2 mm from the arterial wall whereas ∼52% of all nerves were located within 2.5 mm from the arterial wall. Sympathetic efferent fibers outnumbered sensory afferent fibers overwhelmingly, intermixed within the nerve bundle. In the porcine model, renal artery nerves are seen more frequently in the proximal segment of the artery. Nerve size distribution appears to be homogeneous throughout the artery length. Nerve bundles progress closer to the arterial wall in the distal segments of the artery. This anatomic distribution may have implications for the future development of renal denervation therapies. Crown Copyright © 2013. Published by Mosby, Inc. All rights reserved.
Pascucci, Simone; Bassani, Cristiana; Palombo, Angelo; Poscolieri, Maurizio; Cavalli, Rosa
2008-01-01
This paper describes a fast procedure for evaluating asphalt pavement surface defects using airborne emissivity data. To develop this procedure, we used airborne multispectral emissivity data covering an urban test area close to Venice (Italy). For this study, we first identified and selected the roads' asphalt pavements on Multispectral Infrared Visible Imaging Spectrometer (MIVIS) imagery using a segmentation procedure. Next, since surface defects in asphalt pavements are strictly related to the decrease of oily components, which causes an increase in the abundance of surfacing limestone, the diagnostic absorption emissivity peak of limestone at 11.2 μm was used to retrieve from the MIVIS emissivity data the areas exhibiting defects on the asphalt pavement surface. The results showed that MIVIS emissivity makes it possible to establish a threshold that points out the asphalt road sites on which a check for a maintenance intervention is required. Therefore, this technique can supply local government authorities with an efficient, rapid and repeatable road mapping procedure providing the location of the asphalt pavements to be checked. PMID:27879765
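A minimal sketch of the thresholding step, assuming a road mask from the segmentation stage and a per-pixel emissivity band near 11.2 μm; the threshold value and the direction of the comparison are purely illustrative, not the calibrated values from the study.

```python
import numpy as np

def flag_defective_pavement(emissivity_11_2um, road_mask, threshold=0.96):
    """Flag road pixels whose 11.2 um emissivity falls below a limestone-driven
    threshold; the threshold value and sign convention are illustrative only."""
    return road_mask & (emissivity_11_2um < threshold)

rng = np.random.default_rng(0)
emis = 0.94 + 0.04 * rng.random((100, 100))      # synthetic emissivity band
roads = np.zeros((100, 100), bool); roads[45:55, :] = True
print(flag_defective_pavement(emis, roads).sum(), "road pixels flagged for inspection")
```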
Flexible robotic catheters in the visceral segment of the aorta: advantages and limitations.
Li, Mimi M; Hamady, Mohamad S; Bicknell, Colin D; Riga, Celia V
2018-06-01
Flexible robotic catheters are an emerging technology which provide an elegant solution to the challenges of conventional endovascular intervention. Originally developed for interventional cardiology and electrophysiology procedures, remotely steerable robotic catheters such as the Magellan system enable greater precision and enhanced stability during target vessel navigation. These technical advantages facilitate improved treatment of disease in the arterial tree, as well as allowing execution of otherwise unfeasible procedures. Occupational radiation exposure is an emerging concern with the use of increasingly complex endovascular interventions. The robotic systems offer an added benefit of radiation reduction, as the operator is seated away from the radiation source during manipulation of the catheter. Pre-clinical studies have demonstrated reduction in force and frequency of vessel wall contact, resulting in reduced tissue trauma, as well as improved procedural times. Both safety and feasibility have been demonstrated in early clinical reports, with the first robot-assisted fenestrated endovascular aortic repair in 2013. Following from this, the Magellan system has been used to successfully undertake a variety of complex aortic procedures, including fenestrated/branched endovascular aortic repair, embolization, and angioplasty.
De la Garza-Ramos, Rafael; Nakhla, Jonathan; Gelfand, Yaroslav; Echt, Murray; Scoco, Aleka N; Kinon, Merritt D; Yassari, Reza
2018-03-01
To identify predictive factors for critical care unit-level complications (CCU complications) after long-segment fusion procedures for adult spinal deformity (ASD), the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) database [2010-2014] was reviewed. Only adult patients who underwent fusion of 7 or more spinal levels for ASD were included. CCU complications included intraoperative arrest/infarction, ventilation >48 hours, pulmonary embolism, renal failure requiring dialysis, cardiac arrest, myocardial infarction, unplanned intubation, septic shock, stroke, coma, or new neurological deficit. A stepwise multivariate regression was used to identify independent predictors of CCU complications. Among 826 patients, the rate of CCU complications was 6.4%. On multivariate regression analysis, dependent functional status (P=0.004), combined approach (P=0.023), age (P=0.044), diabetes (P=0.048), and surgery lasting over 8 hours (P=0.080) were significantly associated with complication development. A simple scoring system was developed to predict complications: 0 points for patients aged <50, 1 point for patients aged 50-70, 2 points for patients aged 70 or over, 1 point for diabetes, 2 points for dependent functional status, 1 point for a combined approach, and 1 point for surgery lasting over 8 hours. The rate of CCU complications was 0.7%, 3.2%, 9.0%, and 12.6% for patients with 0, 1, 2, and 3+ points, respectively (P<0.001). The findings in this study suggest that older patients, patients with diabetes, patients who depend on others for activities of daily living, and patients who undergo combined approaches or surgery lasting over 8 hours may be at a significantly increased risk of developing a CCU-level complication after ASD surgery.
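The scoring system described above maps directly to a few lines of code; the sketch below simply re-implements the published point assignments for illustration (the reported complication rates by score are given in a comment, not computed).

```python
def ccu_risk_score(age, diabetes, dependent_functional_status,
                   combined_approach, surgery_over_8h):
    """Point score as described in the abstract (illustrative re-implementation)."""
    score = 0 if age < 50 else (1 if age < 70 else 2)
    score += 1 if diabetes else 0
    score += 2 if dependent_functional_status else 0
    score += 1 if combined_approach else 0
    score += 1 if surgery_over_8h else 0
    return score

# reported CCU-complication rates by score: 0 -> 0.7%, 1 -> 3.2%, 2 -> 9.0%, 3+ -> 12.6%
print(ccu_risk_score(age=72, diabetes=True, dependent_functional_status=False,
                     combined_approach=True, surgery_over_8h=False))   # -> 4
```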
Shin, Ho-Jin; Choi, Yun-Mi; Kim, Hye-Jin; Lee, Sun-Jae; Yoon, Seok-Hyun; Kim, Kyung-Hoon
2014-12-01
Lumbar chemical sympathectomy has been performed using fluoroscopic guidance for needle positioning. An 84 year old woman with atherosclerosis obliterans was referred to the pain clinic for intractable cold allodynia of her right foot. A thermogram showed decreased temperature of both feet compared with temperatures above both ankles. The patient agreed to undergo lumbar chemical sympathectomy using fluoroscopy after being informed of the associated risks of nerve injury, hemorrhage, infection, transient back pain, and transient hypotension. During the procedure and three hours afterward, no abnormal signs or symptoms were found except an increase in right leg temperature. The patient was ambulatory after the procedure. However, one day after undergoing lumbar chemical sympathectomy, she visited our emergency department for abdominal discomfort and postural dizziness. Her blood pressure was 80/50 mmHg, and flank tenderness was noted. Retroperitoneal hemorrhage from the second right lumbar segmental artery was shown on computed tomography and angiography. Vital signs were stabilized immediately after embolization into the right lumbar segmental artery. Copyright © 2014 Elsevier Inc. All rights reserved.
Liver hanging maneuver for right hemiliver in situ donation--anatomical considerations.
Trotovsek, B; Gadzijev, E M; Ravnik, D; Hribernik, M
2006-01-01
An anatomical study was carried out to evaluate the safety of the liver hanging maneuver for the right hemiliver in living donor and in situ splitting transplantation. During this procedure a 4-6 cm blind dissection is performed between the inferior vena cava and the liver. Short subhepatic veins entering the inferior vena cava from segments 1 and 9 could be torn with consequent hemorrhage. One hundred corrosive casts of livers were evaluated to establish the position and diameter of short subhepatic veins and the inferior right hepatic vein. The average distance from the right border of the inferior vena cava to the opening of segment 1 veins was 16.7 ± 3.4 mm and to the entrance of segment 9 veins was 5.0 ± 0.5 mm. The width of the narrowest point on the route of blind dissection was determined, with the average value being 8.7 ± 2.3 mm (range 2-15 mm). The results show that the liver hanging maneuver is a safe procedure. A proposed route of dissection minimizes the risk of disrupting short subhepatic veins (7%).
Automatic colonic lesion detection and tracking in endoscopic videos
NASA Astrophysics Data System (ADS)
Li, Wenjing; Gustafsson, Ulf; A-Rahim, Yoursif
2011-03-01
The biology of colorectal cancer offers an opportunity for both early detection and prevention. Compared with other imaging modalities, optical colonoscopy is the procedure of choice for simultaneous detection and removal of colonic polyps. Computer assisted screening makes it possible to assist physicians and potentially improve the accuracy of the diagnostic decision during the exam. This paper presents an unsupervised method to detect and track colonic lesions in endoscopic videos. The aim of the lesion screening and tracking is to facilitate detection of polyps and abnormal mucosa in real time as the physician is performing the procedure. For colonic lesion detection, the conventional marker controlled watershed based segmentation is used to segment the colonic lesions, followed by an adaptive ellipse fitting strategy to further validate the shape. For colonic lesion tracking, a mean shift tracker with background modeling is used to track the target region from the detection phase. The approach has been tested on colonoscopy videos acquired during regular colonoscopic procedures and demonstrated promising results.
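A hedged sketch of the detection stage, assuming scikit-image and SciPy: a marker-controlled watershed produces candidate regions and a fitted-ellipse shape check (via regionprops eccentricity) stands in for the adaptive ellipse-fitting validation described above; the mean-shift tracking stage is not shown.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.measure import regionprops
from skimage.segmentation import watershed

def candidate_lesions(gray, mask, max_eccentricity=0.9):
    """Marker-controlled watershed followed by a crude ellipse-shape check; a
    simplified stand-in for the paper's watershed + adaptive ellipse fitting."""
    dist = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(dist, min_distance=10, labels=mask.astype(int))
    markers = np.zeros(gray.shape, int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-dist, markers, mask=mask)
    return [r for r in regionprops(labels) if r.eccentricity < max_eccentricity]

# toy frame: one bright round blob as a lesion candidate
yy, xx = np.mgrid[0:100, 0:100]
blob = ((yy - 50) ** 2 + (xx - 50) ** 2) < 400
regions = candidate_lesions(blob.astype(float), blob)
print(len(regions), "candidate region(s)")
```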
Imagama, Shiro; Ito, Zenya; Wakao, Norimitsu; Ando, Kei; Hirano, Kenichi; Tauchi, Ryoji; Muramoto, Akio; Matsui, Hiroki; Matsumoto, Tomohiro; Sakai, Yoshihito; Katayama, Yoshito; Matsuyama, Yukihiro; Ishiguro, Naoki
2016-10-01
Prospective clinical case series. To describe our surgical procedure and results for posterior correction and fusion with a hybrid approach using pedicle screws, hooks, and ultrahigh-molecular-weight polyethylene tape with direct vertebral rotation (DVR) (the PSTH-DVR procedure) for treatment of adolescent idiopathic scoliosis (AIS) with satisfactory correction in the coronal and sagittal planes. Introduction of segmental pedicle screws in posterior surgery for AIS has facilitated good correction and fusion. However, procedures using only pedicle screws have risks during screw insertion, higher costs, and decreased postoperative thoracic kyphosis. We have obtained good outcomes compared with segmental pedicle screw fixation in surgery for AIS using a relatively simple operative procedure (PSTH-DVR) that uses fewer pedicle screws. The subjects were 30 consecutive patients with AIS who underwent the PSTH-DVR procedure and were followed for a minimum of 2 years. Preoperative flexibility, preoperative and postoperative Cobb angles, correction rates, loss of correction, thoracic kyphotic angles (T5-T12), coronal balance, sagittal balance, and shoulder balance were measured on plain radiographs. Rib hump, operation time, estimated blood loss, spinal cord monitoring findings, complications, and Scoliosis Research Society (SRS)-22 scores were also examined. The mean preoperative curve of 58.0 degrees (range, 40-96 degrees) was corrected to a mean of 9.9 degrees postoperatively, and the correction rate was 83.6%. Fusion was obtained in all patients without loss of correction. In 10 cases with preoperative kyphosis angles (T5-T12) <10 degrees, the preoperative mean of 5.8 degrees improved to 20.2 degrees at the final follow-up. Rib hump and coronal, sagittal and shoulder balances were also improved, and good SRS-22 scores were achieved at final follow-up. The correction of deformity with PSTH-DVR is equivalent to that of all-pedicle screw constructs. The procedure gives favorable correction, is advantageous for kyphosis compared with segmental screw fixation, and uses the minimum number of pedicle screws. Therefore, the PSTH-DVR procedure may be useful for treatment of idiopathic scoliosis.
Feng, Yuan; Dong, Fenglin; Xia, Xiaolong; Hu, Chun-Hong; Fan, Qianmin; Hu, Yanle; Gao, Mingyuan; Mutic, Sasa
2017-07-01
Ultrasound (US) imaging has been widely used in breast tumor diagnosis and treatment intervention. Automatic delineation of the tumor is a crucial first step, especially for the computer-aided diagnosis (CAD) and US-guided breast procedure. However, the intrinsic properties of US images such as low contrast and blurry boundaries pose challenges to the automatic segmentation of the breast tumor. Therefore, the purpose of this study is to propose a segmentation algorithm that can contour the breast tumor in US images. To utilize the neighbor information of each pixel, a Hausdorff distance based fuzzy c-means (FCM) method was adopted. The size of the neighbor region was adaptively updated by comparing the mutual information between them. The objective function of the clustering process was updated by a combination of Euclid distance and the adaptively calculated Hausdorff distance. Segmentation results were evaluated by comparing with three experts' manual segmentations. The results were also compared with a kernel-induced distance based FCM with spatial constraints, the method without adaptive region selection, and conventional FCM. Results from segmenting 30 patient images showed the adaptive method had a value of sensitivity, specificity, Jaccard similarity, and Dice coefficient of 93.60 ± 5.33%, 97.83 ± 2.17%, 86.38 ± 5.80%, and 92.58 ± 3.68%, respectively. The region-based metrics of average symmetric surface distance (ASSD), root mean square symmetric distance (RMSD), and maximum symmetric surface distance (MSSD) were 0.03 ± 0.04 mm, 0.04 ± 0.03 mm, and 1.18 ± 1.01 mm, respectively. All the metrics except sensitivity were better than that of the non-adaptive algorithm and the conventional FCM. Only three region-based metrics were better than that of the kernel-induced distance based FCM with spatial constraints. Inclusion of the pixel neighbor information adaptively in segmenting US images improved the segmentation performance. The results demonstrate the potential application of the method in breast tumor CAD and other US-guided procedures. © 2017 American Association of Physicists in Medicine.
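A much-simplified sketch of spatially constrained fuzzy c-means clustering, assuming NumPy and SciPy: a local-mean term is added to the distance in place of the adaptive Hausdorff-distance term described above, and the adaptive neighbour-region selection is omitted. Data and parameters are toy values.

```python
import numpy as np
from scipy import ndimage as ndi

def fcm_spatial(image, n_clusters=2, m=2.0, beta=0.5, n_iter=50):
    """Plain FCM on intensity with an extra neighbourhood-mean term in the distance;
    a much-simplified stand-in for the paper's adaptive Hausdorff-distance FCM."""
    x = image.ravel().astype(float)
    nb = ndi.uniform_filter(image.astype(float), size=5).ravel()   # local mean as crude neighbour info
    rng = np.random.default_rng(0)
    u = rng.dirichlet(np.ones(n_clusters), size=x.size)            # membership matrix
    for _ in range(n_iter):
        w = u ** m
        centers = (w * x[:, None]).sum(0) / w.sum(0)
        nb_centers = (w * nb[:, None]).sum(0) / w.sum(0)
        d = (x[:, None] - centers) ** 2 + beta * (nb[:, None] - nb_centers) ** 2
        d = np.maximum(d, 1e-12)
        u = 1.0 / (d ** (1 / (m - 1)) * (1.0 / d ** (1 / (m - 1))).sum(1, keepdims=True))
    return u.argmax(1).reshape(image.shape)

# toy "ultrasound" image: darker tumour region on a brighter, noisy background
rng = np.random.default_rng(1)
img = 0.7 + 0.1 * rng.standard_normal((80, 80)); img[30:55, 25:50] -= 0.35
seg = fcm_spatial(img)
print(np.bincount(seg.ravel()))   # pixel counts per cluster
```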
Mester, Petru; Bouvaist, Helene; Delarche, Nicolas; Bouisset, Frédéric; Abdellaoui, Mohamed; Petiteau, Pierre-Yves; Dubreuil, Olivier; Boueri, Ziad; Chettibi, Mohamed; Souteyrand, Géraud; Madiot, Hend; Belle, Loic
2017-07-20
The aim of this study was to ascertain whether a minimalist immediate mechanical intervention (MIMI) aiming to restore an optimal Thrombolysis In Myocardial Infarction (TIMI) flow in the culprit artery, followed ≥7 days later by a second percutaneous coronary intervention with intentional stenting, is safe in patients with ST-segment elevation myocardial infarction and large thrombotic burden. SUPER-MIMI was a prospective, observational trial conducted between January 2014 and April 2015 in 14 French centres. A total of 155 patients were enrolled. The pharmacological therapy was left to the operator's discretion. Eighty-one patients (52.3%) had glycoprotein IIb/IIIa inhibitors (GPI) initiated before the end of the first procedure. The median (interquartile range [IQR]) delay between the two procedures was eight (seven to 12) days. Infarct-related artery reocclusion between the two procedures (primary endpoint) occurred in two patients (1.3%), neither of whom received GPI treatment. TIMI flow was maintained or improved between the end of the first procedure and the beginning of the second procedure in all patients. Thrombotic burden and stenosis severity diminished significantly between the two procedures. Stents were ultimately implanted in 97 patients (62.6%). Deferred stenting (≥7 days) in patients with a high thrombus burden was safe on a background of GPI therapy.
Flexible methods for segmentation evaluation: results from CT-based luggage screening.
Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry
2014-01-01
Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms' behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. To develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors. The methods must measure feature recovery and allow us to prioritize segments. We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms.
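In the spirit of measuring systematic errors, the sketch below computes simple over- and undersegmentation counts from a ground-truth and a predicted label image; it is a generic illustration of the idea rather than the statistical or information-theoretic measures developed in the paper.

```python
import numpy as np

def over_under_segmentation(gt_labels, pred_labels, min_overlap=0.1):
    """Count, per ground-truth object, how many predicted segments cover at least
    `min_overlap` of it (oversegmentation) and, per predicted segment, how many
    ground-truth objects it spans (undersegmentation)."""
    over, under = [], []
    for g in np.unique(gt_labels[gt_labels > 0]):
        gmask = gt_labels == g
        hits = [p for p in np.unique(pred_labels[gmask]) if p > 0
                and (pred_labels[gmask] == p).sum() / gmask.sum() >= min_overlap]
        over.append(len(hits))
    for p in np.unique(pred_labels[pred_labels > 0]):
        pmask = pred_labels == p
        hits = [g for g in np.unique(gt_labels[pmask]) if g > 0
                and (gt_labels[pmask] == g).sum() / pmask.sum() >= min_overlap]
        under.append(len(hits))
    return np.mean(over), np.mean(under)   # 1.0 / 1.0 means one-to-one correspondence

gt = np.zeros((20, 20), int); gt[2:10, 2:18] = 1
pred = np.zeros_like(gt); pred[2:10, 2:9] = 1; pred[2:10, 9:18] = 2   # object split in two
print(over_under_segmentation(gt, pred))   # (2.0, 1.0): oversegmented, not undersegmented
```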
Jersey number detection in sports video for athlete identification
NASA Astrophysics Data System (ADS)
Ye, Qixiang; Huang, Qingming; Jiang, Shuqiang; Liu, Yang; Gao, Wen
2005-07-01
Athlete identification is important for sport video content analysis since users often care about the video clips with their preferred athletes. In this paper, we propose a method for athlete identification by combining the segmentation, tracking and recognition procedures into a coarse-to-fine scheme for jersey number (digital characters on sport shirts) detection. Firstly, image segmentation is employed to separate the jersey number regions from the background, and size and pipe-like attributes of digital characters are used to filter candidate regions. Then, a K-NN (K nearest neighbor) classifier is employed to classify a candidate into a digit in "0-9" or negative. In the recognition procedure, we use Zernike moment features, which are invariant to rotation and scale, for digit shape recognition. Synthetic training samples with different fonts are used to represent the pattern of digital characters with non-rigid deformation. Once a character candidate is detected, an SSD (smallest square distance)-based tracking procedure is started. The recognition procedure is performed every several frames in the tracking process. After tracking tens of frames, the overall recognition results are combined to determine whether a candidate is a true jersey number by a voting procedure. Experiments on several types of sports video show encouraging results.
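A minimal sketch of the classification stage, assuming each candidate region has already been reduced to a feature vector (the paper uses Zernike moments; generic synthetic features stand in here), with an eleventh "negative" class for non-digit regions. All data, class counts and dimensions are assumptions for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training data: rows are feature vectors (e.g. Zernike moments)
# of synthetic digit renderings in several fonts; labels 0-9 are digits,
# label 10 is the "negative" (non-digit) class.
rng = np.random.default_rng(0)
n_per_class, n_features = 40, 16
X_train = np.vstack([rng.normal(loc=c, scale=0.5, size=(n_per_class, n_features))
                     for c in range(11)])
y_train = np.repeat(np.arange(11), n_per_class)

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)

candidate = rng.normal(loc=7, scale=0.5, size=(1, n_features))  # one detected region
label = clf.predict(candidate)[0]
print("jersey digit" if label < 10 else "rejected candidate", label)
```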
Gao, Shan; van 't Klooster, Ronald; Brandts, Anne; Roes, Stijntje D; Alizadeh Dehnavi, Reza; de Roos, Albert; Westenberg, Jos J M; van der Geest, Rob J
2017-01-01
To develop and evaluate a method that can fully automatically identify the vessel wall boundaries and quantify the wall thickness for both the common carotid artery (CCA) and descending aorta (DAO) from axial magnetic resonance (MR) images. 3T MRI data acquired with a T1-weighted gradient-echo black-blood imaging sequence from carotid (39 subjects) and aorta (39 subjects) were used to develop and test the algorithm. The vessel wall segmentation was achieved by respectively fitting a 3D cylindrical B-spline surface to the boundaries of the lumen and outer wall. The tube fitting was based on edge detection performed on the signal intensity (SI) profile along the surface normal. To achieve a fully automated process, a Hough Transform (HT) was developed to estimate the lumen centerline and radii for the target vessel. Using the outputs of the HT, a tube model for lumen segmentation was initialized and deformed to fit the image data. Finally, the lumen segmentation was dilated to initialize the adaptation procedure for the outer wall tube. The algorithm was validated by determining: 1) its performance against manual tracing; 2) its interscan reproducibility in quantifying vessel wall thickness (VWT); 3) its capability of detecting VWT differences in hypertensive patients compared with healthy controls. Statistical analysis including Bland-Altman analysis, t-test, and sample size calculation was performed for the purpose of algorithm evaluation. The mean distance between the manual and automatically detected lumen/outer wall contours was 0.00 ± 0.23/0.09 ± 0.21 mm for CCA and 0.12 ± 0.24/0.14 ± 0.35 mm for DAO. No significant difference was observed between the interscan VWT assessments using automated segmentation for both CCA (P = 0.19) and DAO (P = 0.94). Both manual and automated segmentation detected significantly higher carotid (P = 0.016 and P = 0.005) and aortic (P < 0.001 and P = 0.021) wall thickness in the hypertensive patients. A reliable and reproducible pipeline for fully automatic vessel wall quantification was developed and validated on healthy volunteers as well as patients with increased vessel wall thickness. This method holds promise for helping in efficient image interpretation for large-scale cohort studies. Evidence level: 4. J. Magn. Reson. Imaging 2017;45:215-228. © 2016 International Society for Magnetic Resonance in Medicine.
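The per-slice Hough initialization step can be illustrated with scikit-image, as in the hedged sketch below: a circular Hough transform estimates a lumen center and radius from a synthetic edge image. This only stands in for the initialization; the 3D B-spline tube fitting described above is not reproduced, and the image and radius range are assumptions.

```python
import numpy as np
from skimage.draw import circle_perimeter
from skimage.transform import hough_circle, hough_circle_peaks

# synthetic axial slice: a circular "lumen" edge centered at (row 60, col 70), radius 18
edges = np.zeros((128, 128), dtype=bool)
rr, cc = circle_perimeter(60, 70, 18)
edges[rr, cc] = True

# search over plausible lumen radii and keep the strongest accumulator peak
radii = np.arange(10, 30)
hspaces = hough_circle(edges, radii)
accums, cx, cy, rad = hough_circle_peaks(hspaces, radii, total_num_peaks=1)
print("estimated lumen center (row, col) and radius:", cy[0], cx[0], rad[0])
```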
3D geometric split-merge segmentation of brain MRI datasets.
Marras, Ioannis; Nikolaidis, Nikolaos; Pitas, Ioannis
2014-05-01
In this paper, a novel method for MRI volume segmentation based on region adaptive splitting and merging is proposed. The method, called Adaptive Geometric Split Merge (AGSM) segmentation, aims at finding complex geometrical shapes that consist of homogeneous geometrical 3D regions. In each volume splitting step, several splitting strategies are examined and the most appropriate is activated. A way to find the maximal homogeneity axis of the volume is also introduced. Along this axis, the volume splitting technique divides the entire volume in a number of large homogeneous 3D regions, while at the same time, it defines more clearly small homogeneous regions within the volume in such a way that they have greater probabilities of survival at the subsequent merging step. Region merging criteria are proposed to this end. The presented segmentation method has been applied to brain MRI medical datasets to provide segmentation results when each voxel is composed of one tissue type (hard segmentation). The volume splitting procedure does not require training data, while it demonstrates improved segmentation performance in noisy brain MRI datasets, when compared to the state of the art methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
Restoring Wood-Rich Hotspots in Mountain Stream Networks
NASA Astrophysics Data System (ADS)
Wohl, E.; Scott, D.
2016-12-01
Mountain streams commonly include substantial longitudinal variability in valley and channel geometry, alternating repeatedly between steep, narrow segments and relatively wide, low-gradient segments. Segments that are wider and lower gradient than neighboring steeper sections are hotspots with respect to: retention of large wood (LW) and finer sediment and organic matter; uptake of nutrients; and biomass and biodiversity of aquatic and riparian organisms. These segments are also more likely to be transport-limited with respect to floodplain and instream LW. Management designed to protect and restore riverine LW and the physical and ecological processes facilitated by the presence of LW is likely to be most effective if focused on relatively low-gradient stream segments. These segments can be identified using a simple, reach-scale gradient analysis based on high-resolution DEMs, with field visits to identify factors that potentially limit or facilitate LW recruitment and retention, such as forest disturbance history or land use. Drawing on field data from the western US, this presentation outlines a procedure for mapping relatively low-gradient segments in a stream network and for identifying those segments where LW reintroduction or retention is most likely to balance maximizing the environmental benefits derived from the presence of LW against minimizing the hazards associated with LW.
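A toy illustration of the reach-scale gradient analysis mentioned above, assuming a DEM-derived longitudinal profile of elevation versus distance along the channel. The reach length and slope threshold are arbitrary placeholders, not values from the presentation.

```python
import numpy as np

def low_gradient_reaches(distance_m, elevation_m, reach_len_m=200.0, slope_threshold=0.02):
    """Split a longitudinal stream profile into reaches and flag low-gradient ones.

    distance_m, elevation_m : 1-D arrays sampled along the channel (e.g. from a DEM).
    Returns a list of (start_m, end_m, slope, is_low_gradient).
    """
    reaches = []
    start = distance_m[0]
    while start < distance_m[-1]:
        end = start + reach_len_m
        mask = (distance_m >= start) & (distance_m < end)
        if mask.sum() >= 2:
            # average slope over the reach (drop divided by run)
            slope = (elevation_m[mask][0] - elevation_m[mask][-1]) / (
                distance_m[mask][-1] - distance_m[mask][0])
            reaches.append((start, end, slope, slope < slope_threshold))
        start = end
    return reaches

# toy profile: alternating steep (5%) and gentle (1%) sections every 400 m
d = np.linspace(0, 2000, 401)
z = 1500 - np.cumsum(np.where((d // 400) % 2 == 0, 0.05, 0.01)) * (d[1] - d[0])
for reach in low_gradient_reaches(d, z):
    print(reach)
```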
Ma, Ching-Hou; Tu, Yuan-Kun; Yeh, Jih-Hsi; Yang, Shih-Chieh; Wu, Chin-Hsien
2011-09-01
Tibial segmental fractures usually follow high-energy trauma and are often associated with many complications. We designed a two-stage protocol for these complex injuries. The aim of this study was to assess the outcome of tibial segmental fractures treated according to this protocol. A prospective series of 25 consecutive segmental tibial fractures was treated using a two-stage procedure. In the first stage, a low-profile locking plate was applied as an external fixator to temporarily immobilize the fractures after anatomic reduction had been achieved, followed by soft-tissue reconstruction. The second stage involved definitive internal fixation with a locking plate using a minimally invasive percutaneous plate osteosynthesis technique. The median follow-up was 32 months (range, 20-44 months). All fractures achieved union. The median time for proximal fracture union was 23 weeks (range, 12-30 weeks) and that for distal fracture union was 27 weeks (range, 12-46 weeks; p = 0.08). Functional results were excellent in 21 patients and good in 4 patients. There were three cases of delayed union of the distal fracture. Valgus malunion >5 degrees occurred in two patients, and length discrepancy >1 cm was observed in two patients. Pin tract infection occurred in three patients. Use of the two-stage procedure for treatment of segmental tibial fractures is recommended. Surgeons can achieve good reduction with stable temporary fixation, soft-tissue reconstruction, ease of subsequent definitive fixation, and high union rates. Our patients obtained excellent knee and ankle joint motion, good functional outcomes, and a comfortable clinical course.
Segmentation of British Sign Language (BSL): mind the gap!
Orfanidou, Eleni; McQueen, James M; Adam, Robert; Morgan, Gary
2015-01-01
This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were preceded by nonsense signs which were produced in either the target location or another location (with a small or large transition). Half of the transitions were within the same major body area (e.g., head) and half were across body areas (e.g., chest to hand). Deaf adult BSL users (a group of natives and early learners, and a group of late learners) spotted target signs best when there was a minimal transition and worst when there was a large transition. When location changes were present, both groups performed better when transitions were to a different body area than when they were within the same area. These findings suggest that transitions do not provide explicit sign-boundary cues in a modality-specific fashion. Instead, we argue that smaller transitions help recognition in a modality-general way by limiting lexical search to signs within location neighbourhoods, and that transitions across body areas also aid segmentation in a modality-general way, by providing a phonotactic cue to a sign boundary. We propose that sign segmentation is based on modality-general procedures which are core language-processing mechanisms.
Echegaray, Sebastian; Nair, Viswam; Kadoch, Michael; Leung, Ann; Rubin, Daniel; Gevaert, Olivier; Napel, Sandy
2016-12-01
Quantitative imaging approaches compute features within images' regions of interest. Segmentation is rarely completely automatic, requiring time-consuming editing by experts. We propose a new paradigm, called "digital biopsy," that allows for the collection of intensity- and texture-based features from these regions at least 1 order of magnitude faster than the current manual or semiautomated methods. A radiologist reviewed automated segmentations of lung nodules from 100 preoperative volume computed tomography scans of patients with non-small cell lung cancer, and manually adjusted the nodule boundaries in each section, to be used as a reference standard, requiring up to 45 minutes per nodule. We also asked a different expert to generate a digital biopsy for each patient using a paintbrush tool to paint a contiguous region of each tumor over multiple cross-sections, a procedure that required an average of <3 minutes per nodule. We simulated additional digital biopsies using morphological procedures. Finally, we compared the features extracted from these digital biopsies with our reference standard using intraclass correlation coefficient (ICC) to characterize robustness. Comparing the reference standard segmentations to our digital biopsies, we found that 84/94 features had an ICC >0.7; comparing erosions and dilations, using a sphere of 1.5-mm radius, of our digital biopsies to the reference standard segmentations resulted in 41/94 and 53/94 features, respectively, with ICCs >0.7. We conclude that many intensity- and texture-based features remain consistent between the reference standard and our method while substantially reducing the amount of operator time required.
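A minimal sketch of how eroded and dilated variants of a painted "digital biopsy" mask can be simulated with a spherical structuring element, as described above; the study used a 1.5 mm radius sphere. The voxel spacing, mask and grid sizes below are assumptions, and isotropic spacing is assumed.

```python
import numpy as np
from scipy import ndimage

def spherical_structure(radius_mm, voxel_mm):
    """Binary ball of a given physical radius on an isotropic voxel grid."""
    r_vox = int(np.ceil(radius_mm / voxel_mm))
    grid = np.mgrid[-r_vox:r_vox + 1, -r_vox:r_vox + 1, -r_vox:r_vox + 1]
    dist_mm = np.sqrt((grid ** 2).sum(axis=0)) * voxel_mm
    return dist_mm <= radius_mm

# toy tumor mask on a 0.75 mm isotropic grid (placeholder values)
voxel_mm = 0.75
mask = np.zeros((40, 40, 40), dtype=bool)
mask[15:25, 15:25, 15:25] = True

ball = spherical_structure(1.5, voxel_mm)
eroded = ndimage.binary_erosion(mask, structure=ball)
dilated = ndimage.binary_dilation(mask, structure=ball)
print(mask.sum(), eroded.sum(), dilated.sum())
```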
FEL Trajectory Analysis for the VISA Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nuhn, Heinz-Dieter
1998-10-06
The Visual to Infrared SASE Amplifier (VISA) [1] FEL is designed to achieve saturation at radiation wavelengths between 800 and 600 nm with a 4-m pure permanent magnet undulator. The undulator comprises four 99-cm segments, each of which has four FODO focusing cells superposed on the beam by means of permanent magnets in the gap alongside the beam. Each segment will also have two beam position monitors and two sets of x-y dipole correctors. The trajectory walk-off in each segment will be reduced to a value smaller than the rms beam radius by means of magnet sorting, precise fabrication, and post-fabrication shimming and trim magnets. However, this leaves possible inter-segment alignment errors. A trajectory analysis code has been used in combination with the FRED3D [2] FEL code to simulate the effect of the shimming procedure and segment alignment errors on the electron beam trajectory and to determine the sensitivity of the FEL gain process to trajectory errors. The paper describes the technique used to establish tolerances for the segment alignment.
Chain-Wise Generalization of Road Networks Using Model Selection
NASA Astrophysics Data System (ADS)
Bulatov, D.; Wenzel, S.; Häufel, G.; Meidow, J.
2017-05-01
Streets are essential entities of urban terrain and their automatized extraction from airborne sensor data is cumbersome because of a complex interplay of geometric, topological and semantic aspects. Given a binary image, representing the road class, centerlines of road segments are extracted by means of skeletonization. The focus of this paper lies in a well-reasoned representation of these segments by means of geometric primitives, such as straight line segments as well as circle and ellipse arcs. We propose the fusion of raw segments based on similarity criteria; the output of this process are the so-called chains which better match to the intuitive perception of what a street is. Further, we propose a two-step approach for chain-wise generalization. First, the chain is pre-segmented using
Hwang, Shin; Park, Gil-Chun; Ha, Tae-Yong; Ko, Gi-Young; Gwon, Dong-Il; Choi, Young-Il; Song, Gi-Won; Lee, Sung-Gyu
2012-05-01
Liver resection can result in various types of bile duct injuries but their treatment is usually difficult and often leads to intractable clinical course. We present an unusual case of hepatic segment III duct (B3) injury, which occurred after left medial sectionectomy for large hepatocellular carcinoma and was incidentally detected 1 week later due to bile leak. Since the pattern of this B3 injury was not adequate for operative biliary reconstruction, atrophy induction of the involved hepatic parenchyma was attempted. This treatment consisted of embolization of the segment III portal branch to inhibit bile production, induction of heavy adhesion at the bile leak site and clamping of the percutaneous transhepatic biliary drainage (PTBD) tube to accelerate segment III atrophy. This entire procedure, from liver resection to PTBD tube removal took 4 months. This patient has shown no other complication or tumor recurrence for 4 years to date. These findings suggest that percutaneous segmental portal vein embolization, followed by intentional clamping of external biliary drainage, can effectively control intractable bile leak from segmental bile duct injury.
Segmentation of optic disc and optic cup in retinal fundus images using shape regression.
Sedai, Suman; Roy, Pallab K; Mahapatra, Dwarikanath; Garnavi, Rahil
2016-08-01
Glaucoma is one of the leading causes of blindness. Manual examination of the optic cup and disc is a standard procedure used for detecting glaucoma. This paper presents a fully automatic regression-based method which accurately segments the optic cup and disc in retinal colour fundus images. First, we roughly segment the optic disc using a circular Hough transform. The approximated optic disc is then used to compute the initial optic disc and cup shapes. We propose a robust and efficient cascaded shape regression method which iteratively learns the final shape of the optic cup and disc from a given initial shape. Gradient boosted regression trees are employed to learn each regressor in the cascade. A novel data augmentation approach is proposed to improve the regressors' performance by generating synthetic training data. The proposed optic cup and disc segmentation method is applied to an image set of 50 patients and demonstrates high segmentation accuracy for the optic cup and disc, with Dice metrics of 0.95 and 0.85, respectively. A comparative study shows that our proposed method outperforms state-of-the-art optic cup and disc segmentation methods.
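The full cascaded shape regression is not reproduced here. As a toy stand-in under stated assumptions, the sketch below trains a single cascade stage: one gradient boosted regressor per landmark coordinate predicts a shape increment from (synthetic) shape-indexed features. Feature construction, the number of landmarks, and the training data are all fabricated for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_samples, n_features, n_coords = 200, 32, 16   # e.g. 8 landmarks x (x, y)

# Hypothetical training data: features sampled around the current shape estimate;
# targets are increments from the current shape to the true shape.
X = rng.normal(size=(n_samples, n_features))
true_increments = X[:, :n_coords] * 0.5 + rng.normal(scale=0.05, size=(n_samples, n_coords))

# one regressor per coordinate forms a single stage of the cascade
stage = [GradientBoostingRegressor(n_estimators=100, max_depth=3).fit(X, true_increments[:, j])
         for j in range(n_coords)]

def apply_stage(stage, features, current_shape):
    # predict the coordinate-wise shape increment and update the shape estimate
    increment = np.array([reg.predict(features[None, :])[0] for reg in stage])
    return current_shape + increment

current_shape = np.zeros(n_coords)
refined = apply_stage(stage, X[0], current_shape)   # one refinement step of the cascade
```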
ORNL Interim Progress Report on Hydride Reorientation CIRFT Tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jy-An John; Yan, Yong; Wang, Hong
A systematic study of H. B. Robinson (HBR) high burnup spent nuclear fuel (SNF) vibration integrity was performed in the Phase I project under simulated transportation environments, using the Cyclic Integrated Reversible-Bending Fatigue Tester (CIRFT) hot cell testing technology developed at Oak Ridge National Laboratory in 2013–14. The data analysis on the as-irradiated HBR SNF rods demonstrated that the load amplitude is the dominant factor that controls the fatigue life of bending rods. However, previous studies have shown that the hydrogen content and hydride morphology have an important effect on zirconium alloy mechanical properties. To address the effect of radial hydrides in SNF rods, in Phase II a test procedure was developed to simulate the effects of elevated temperatures, pressures, and stresses during transfer-drying operations. Pressurized and sealed fuel segments were heated to the target temperature for a preset hold time and slow-cooled at a controlled rate. The procedure was applied to both non-irradiated/prehydrided and high-burnup Zircaloy-4 fueled cladding segments using the Nuclear Regulatory Commission-recommended 400°C maximum temperature limit at various cooling rates. Before testing high-burnup cladding, four out-of-cell tests were conducted to optimize the hydride reorientation (HR) test condition with pre-hydrided Zircaloy-4 cladding, which has the same geometry as the high burnup fuel samples. Test HR-HBR#1 was conducted at a maximum hoop stress of 145 MPa, at a 400°C maximum temperature and a 5°C/h cooling rate. On the other hand, thermal cycling was performed for tests HR-HBR#2, HR-HBR#3, and HR-HBR#4 to generate more radial hydrides. It is clear that thermal cycling increases the ratio of radial to circumferential hydrides. The internal pressure also has a significant effect on the radial hydride morphology. This report describes the procedure and experimental results of the four out-of-cell hydride reorientation tests of hydrided Zircaloy-4 cladding, which served as a guideline for preparing in-cell hydride reorientation samples with high burnup HBR fuel segments. This report also provides the Phase II CIRFT test data for the hydride-reoriented irradiated samples. The variations in fatigue life are provided in terms of moment, equivalent stress, curvature, and equivalent strain for the tested SNFs. The CIRFT results appear to indicate that the hydride reorientation treatment (HRT) has a negative effect on fatigue life, in addition to the hydride reorientation effect. For the HR4 specimen, which had no pressurization procedure applied, the thermal annealing treatment alone showed a negative impact on the fatigue life compared to the HBR rod.
From Phenomena to Objects: Segmentation of Fuzzy Objects and its Application to Oceanic Eddies
NASA Astrophysics Data System (ADS)
Wu, Qingling
A challenging image analysis problem that has received limited attention to date is the isolation of fuzzy objects---i.e. those with inherently indeterminate boundaries---from continuous field data. This dissertation seeks to bridge the gap between, on the one hand, the recognized need for Object-Based Image Analysis of fuzzy remotely sensed features, and on the other, the optimization of existing image segmentation techniques for the extraction of more discretely bounded features. Using mesoscale oceanic eddies as a case study of a fuzzy object class evident in Sea Surface Height Anomaly (SSHA) imagery, the dissertation demonstrates firstly, that the widely used region-growing and watershed segmentation techniques can be optimized and made comparable in the absence of ground truth data using the principle of parsimony. However, they both have significant shortcomings, with the region growing procedure creating contour polygons that do not follow the shape of eddies while the watershed technique frequently subdivides eddies or groups together separate eddy objects. Secondly, it was determined that these problems can be remedied by using a novel Non-Euclidian Voronoi (NEV) tessellation technique. NEV is effective in isolating the extrema associated with eddies in SSHA data while using a non-Euclidian cost-distance based procedure (based on cumulative gradients in ocean height) to define the boundaries between fuzzy objects. Using this procedure as the first stage in isolating candidate eddy objects, a novel "region-shrinking" multicriteria eddy identification algorithm was developed that includes consideration of shape and vorticity. Eddies identified by this region-shrinking technique compare favorably with those identified by existing techniques, while simplifying and improving existing automated eddy detection algorithms. However, it also tends to find a larger number of eddies as a result of its ability to separate what other techniques identify as connected eddies. The research presented here is of significance not only to eddy research in oceanography, but also to other areas of Earth System Science for which the automated detection of features lacking rigid boundary definitions is of importance.
Fully convolutional neural networks for polyp segmentation in colonoscopy
NASA Astrophysics Data System (ADS)
Brandao, Patrick; Mazomenos, Evangelos; Ciuti, Gastone; Caliò, Renato; Bianchi, Federico; Menciassi, Arianna; Dario, Paolo; Koulaouzidis, Anastasios; Arezzo, Alberto; Stoyanov, Danail
2017-03-01
Colorectal cancer (CRC) is one of the most common and deadliest forms of cancer, accounting for nearly 10% of all forms of cancer in the world. Even though colonoscopy is considered the most effective method for screening and diagnosis, the success of the procedure is highly dependent on the operator's skills and level of hand-eye coordination. In this work, we propose to adapt fully convolutional neural networks (FCNs) to identify and segment polyps in colonoscopy images. We converted three established networks into a fully convolutional architecture and fine-tuned their learned representations to the polyp segmentation task. We validated our framework on the 2015 MICCAI polyp detection challenge dataset, surpassing the state of the art in automated polyp detection. Our method obtained high segmentation accuracy and a detection precision and recall of 73.61% and 86.31%, respectively.
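The networks used in the paper are not reproduced here; the PyTorch sketch below only illustrates the general idea of a fully convolutional architecture, where the fully connected classifier is replaced by a 1x1 convolution and the coarse score map is upsampled to input resolution for dense (here binary polyp vs. background) prediction. The tiny encoder and all sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    """Toy encoder plus 1x1-conv head; stands in for a converted classification net."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # the "fully connected" classifier becomes a 1x1 convolution, so the net
        # accepts arbitrary input sizes and outputs a score map instead of a label
        self.classifier = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        scores = self.classifier(self.encoder(x))
        # upsample the coarse score map back to the input resolution
        return F.interpolate(scores, size=(h, w), mode="bilinear", align_corners=False)

net = TinyFCN()
dummy = torch.randn(1, 3, 128, 128)     # one RGB colonoscopy-like frame
seg = net(dummy).argmax(dim=1)          # per-pixel class map, shape (1, 128, 128)
```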
Shape-Driven 3D Segmentation Using Spherical Wavelets
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2013-01-01
This paper presents a novel active surface segmentation algorithm using a multiscale shape representation and prior. We define a parametric model of a surface using spherical wavelet functions and learn a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior in the segmentation framework. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to the segmentation of brain caudate nucleus, of interest in the study of schizophrenia. Our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm by capturing finer shape details. PMID:17354875
Automatic pelvis segmentation from x-ray images of a mouse model
NASA Astrophysics Data System (ADS)
Al Okashi, Omar M.; Du, Hongbo; Al-Assam, Hisham
2017-05-01
The automatic detection and quantification of skeletal structures have a variety of applications for biological research. Accurate segmentation of the pelvis from X-ray images of mice in a high-throughput project such as the Mouse Genomes Project not only saves time and cost but also helps achieve an unbiased quantitative analysis within the phenotyping pipeline. This paper proposes an automatic solution for pelvis segmentation based on structural and orientation properties of the pelvis in X-ray images. The solution consists of three stages: pre-processing the image to extract the pelvis area, initial pelvis mask preparation, and final pelvis segmentation. Experimental results on a set of 100 X-ray images showed consistent performance of the algorithm. The automated solution overcomes the weaknesses of a manual annotation procedure, where intra- and inter-observer variations cannot be avoided.
Bio-medical flow sensor. [intravenous procedures]
NASA Technical Reports Server (NTRS)
Winkler, H. E. (Inventor)
1981-01-01
A bio-medical flow sensor including a packageable unit of a bottle, tubing and hypodermic needle which can be pre-sterilized and is disposable. The tubing has spaced-apart tubular metal segments. The temperature of the metal segments and of the fluid flowing therein is sensed by thermistors, and at a downstream location heat is input to the metal segment by a resistor driven by the control electronics. The electrical power required for the resistor to maintain a constant temperature differential between the tubular metal segments is a measurable function of the fluid flow through the tubing. The differential temperature measurement is made by the control electronics and can also be used to control a flow control valve or pump on the tubing, to maintain a constant flow in the tubing, and to shut off the tubing when air is present in the tubing.
Automated brain tumor segmentation using spatial accuracy-weighted hidden Markov Random Field.
Nie, Jingxin; Xue, Zhong; Liu, Tianming; Young, Geoffrey S; Setayesh, Kian; Guo, Lei; Wong, Stephen T C
2009-09-01
A variety of algorithms have been proposed for brain tumor segmentation from multi-channel sequences; however, most of them require isotropic or pseudo-isotropic resolution of the MR images. Although co-registration and interpolation of low-resolution sequences, such as T2-weighted images, onto the space of the high-resolution image, such as the T1-weighted image, can be performed prior to the segmentation, the results are usually limited by partial volume effects due to interpolation of low-resolution images. To improve the quality of tumor segmentation in clinical applications where low-resolution sequences are commonly used together with high-resolution images, we propose an algorithm based on a Spatial accuracy-weighted Hidden Markov random field and Expectation maximization (SHE) approach for both automated tumor and enhanced-tumor segmentation. SHE incorporates the spatial interpolation accuracy of low-resolution images into the optimization procedure of the Hidden Markov Random Field (HMRF) to segment tumor using multi-channel MR images with different resolutions, e.g., high-resolution T1-weighted and low-resolution T2-weighted images. In experiments, we evaluated this algorithm using a set of simulated multi-channel brain MR images with known ground-truth tissue segmentation and also applied it to a dataset of MR images obtained during clinical trials of brain tumor chemotherapy. The results show that more accurate tumor segmentation results can be obtained compared with conventional multi-channel segmentation algorithms.
Energy flow during Olympic weight lifting.
Garhammer, J
1982-01-01
Data obtained from 16-mm film of world caliber Olympic weight lifters performing at major competitions were analyzed to study energy changes during body segment and barbell movements, energy transfer to the barbell, and energy transfer between segments during the lifting movements contested. Determination of barbell and body segment kinematics and use of rigid-link modeling and energy flow techniques permitted the calculation of segment energy content and energy transfer between segments. Energy generation within, and transfer to and from, segments was determined at 0.04-s intervals by comparing the mechanical energy changes of a segment with the energy transfer at the joints, calculated from the scalar product of net joint force with absolute joint velocity, and the product of net joint torque due to muscular activity with absolute segment angular velocity. The results provided a detailed understanding of the magnitude and timing of energy input from dominant muscle groups during a lift. This information also provided a means of quantifying lifting technique. Comparisons of segment energy changes determined by the two methods were satisfactory but could likely be improved by employing more sophisticated data smoothing methods. The procedures used in this study could easily be applied to weight training and rehabilitative exercises to help determine their efficacy in producing desired results, or to ergonomic situations where a more detailed understanding of the demands made on the body during lifting tasks would be useful.
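A minimal NumPy illustration of the energy-flow bookkeeping described: the power delivered to a segment through a joint is the scalar product of the net joint force with the joint velocity plus the net joint torque times the segment angular velocity, and integrating that power over the 0.04-s sampling intervals gives the energy transferred. All numeric values below are fabricated for the example.

```python
import numpy as np

dt = 0.04  # sampling interval used in the film analysis (s)

def joint_power(joint_force, joint_velocity, joint_torque, segment_angular_velocity):
    """Instantaneous power flowing into a segment through one joint (planar 2-D case)."""
    translational = np.einsum("ij,ij->i", joint_force, joint_velocity)  # F . v per frame
    rotational = joint_torque * segment_angular_velocity               # T * omega per frame
    return translational + rotational

# toy kinetics/kinematics over 10 frames (placeholder values)
n = 10
force = np.column_stack([np.linspace(50, 200, n), np.linspace(400, 900, n)])   # N
velocity = np.column_stack([np.full(n, 0.1), np.linspace(0.5, 1.5, n)])        # m/s
torque = np.linspace(20, 80, n)                                                # N*m
omega = np.linspace(1.0, 3.0, n)                                               # rad/s

power = joint_power(force, velocity, torque, omega)
energy_transferred = np.trapz(power, dx=dt)   # J, energy flow over the interval
print(power, energy_transferred)
```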
Semantic Segmentation of Building Elements Using Point Cloud Hashing
NASA Astrophysics Data System (ADS)
Chizhova, M.; Gurianov, A.; Hess, M.; Luhmann, T.; Brunn, A.; Stilla, U.
2018-05-01
For the interpretation of point clouds, the semantic definition of extracted segments from point clouds or images is a common problem. Usually, the semantics of geometrically pre-segmented point cloud elements are determined using probabilistic networks and scene databases. The proposed semantic segmentation method is based on the psychological human interpretation of geometric objects, especially on fundamental rules of primary comprehension. Starting from these rules, buildings can be classified quite well and simply by a human operator (e.g. an architect) into different building types and structural elements (dome, nave, transept etc.), including particular building parts which are visually detected. The key part of the procedure is a novel method based on hashing, where point cloud projections are transformed into binary pixel representations. The segmentation approach, demonstrated on the example of classical Orthodox churches, is suitable for other buildings and objects characterized by a particular typology in their construction (e.g. industrial objects in standardized environments with strict component design allowing clear semantic modelling).
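The paper's hashing scheme is not specified here. The sketch below only illustrates the general idea that the abstract describes: project a building point cloud onto a plane, rasterize the projection into a binary occupancy image, and hash that image so similar projections map to comparable keys. Resolution, image size, the hash function and the toy point cloud are all assumptions.

```python
import hashlib
import numpy as np

def binary_projection(points, axis=2, resolution=0.25, shape=(64, 64)):
    """Project 3-D points along one axis and rasterize into a binary occupancy image."""
    keep = [i for i in range(3) if i != axis]
    xy = (points[:, keep] - points[:, keep].min(axis=0)) / resolution
    img = np.zeros(shape, dtype=np.uint8)
    rows = np.clip(xy[:, 0].astype(int), 0, shape[0] - 1)
    cols = np.clip(xy[:, 1].astype(int), 0, shape[1] - 1)
    img[rows, cols] = 1
    return img

def hash_projection(img):
    """Stable hash of the packed binary image (illustrative, not the paper's method)."""
    return hashlib.sha1(np.packbits(img).tobytes()).hexdigest()

# toy point cloud standing in for a scanned building element
rng = np.random.default_rng(0)
pts = rng.normal(size=(5000, 3)) * [3.0, 3.0, 1.5]
print(hash_projection(binary_projection(pts)))
```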
Barbopoulos, I; Johansson, L-O
2017-08-01
This data article offers a detailed description of analyses pertaining to the development of the Consumer Motivation Scale (CMS), from item generation and the extraction of factors, to confirmation of the factor structure and validation of the emergent dimensions. The established goal structure - consisting of the sub-goals Value for Money, Quality, Safety, Stimulation, Comfort, Ethics, and Social Acceptance - is shown to be related to a variety of consumption behaviors in different contexts and for different products, and should thereby prove useful in standard marketing research, as well as in the development of tailored marketing strategies, and the segmentation of consumer groups, settings, brands, and products.
Automatic quantitative analysis of in-stent restenosis using FD-OCT in vivo intra-arterial imaging.
Mandelias, Kostas; Tsantis, Stavros; Spiliopoulos, Stavros; Katsakiori, Paraskevi F; Karnabatidis, Dimitris; Nikiforidis, George C; Kagadis, George C
2013-06-01
A new segmentation technique is implemented for automatic lumen area extraction and stent strut detection in intravascular optical coherence tomography (OCT) images for the purpose of quantitative analysis of in-stent restenosis (ISR). In addition, a user-friendly graphical user interface (GUI) is developed based on the employed algorithm toward clinical use. Four clinical datasets of frequency-domain OCT scans of the human femoral artery were analyzed. First, a segmentation method based on fuzzy C-means (FCM) clustering and the wavelet transform (WT) was applied for inner luminal contour extraction. Subsequently, stent strut positions were detected by incorporating metrics derived from the local maxima of the wavelet transform into the FCM membership function. The inner lumen contour and the positions of the stent struts were extracted with high precision. Compared to manual segmentation by an expert physician, the automatic lumen contour delineation had an average overlap value of 0.917 ± 0.065 for all OCT images included in the study. The strut detection procedure achieved an overall accuracy of 93.80% and successfully identified 9.57 ± 0.5 struts for every OCT image. Processing time was confined to approximately 2.5 s per OCT frame. A new fast and robust automatic segmentation technique combining FCM and WT for lumen border extraction and strut detection in intravascular OCT images was designed and implemented. The proposed algorithm, integrated in a GUI, represents a step forward toward the employment of automated quantitative analysis of ISR in clinical practice.
NASA Astrophysics Data System (ADS)
Romînu, Roxana Otilia; Sinescu, Cosmin; Romînu, Mihai; Negrutiu, Meda; Laissue, Philippe; Mihali, Sorin; Cuc, Lavinia; Hughes, Michael; Bradu, Adrian; Podoleanu, Adrian
2008-09-01
Bonding has become a routine procedure in several dental specialties - from prosthodontics to conservative dentistry and even orthodontics. In many of these fields it is important to be able to investigate the bonded interfaces to assess their quality. All currently employed investigative methods are invasive, meaning that samples are destroyed in the testing procedure and cannot be used again. We have investigated the interface between human enamel and bonded ceramic brackets non-invasively, introducing a combination of new investigative methods - optical coherence tomography (OCT) and confocal microscopy (CM). Brackets were conventionally bonded on conditioned buccal surfaces of teeth. The bonding was assessed using these methods. Three-dimensional reconstructions of the detected material defects were developed using manual and semi-automatic segmentation. The results clearly prove that OCT and CM are useful in orthodontic bonding investigations.
Security Police Career Ladders AFSCs 811X0, 811X2, and 811X2A.
1984-11-01
MONITORS (GRP658): PERCENT MEMBERS PERFORMING TASKS (N=186)
J424 PERFORM SPCDS OPERATOR REACTIONS TO SENSOR ALARM, LINE FAULT, OR UNIQUE LINE FAULT MESSAGES: 96
J426 PERFORM SPCDS VERIFICATION PROCEDURES: 96
J423 PERFORM SMALL PERMANENT COMMUNICATIONS DISPLAY SEGMENT (SPCDS) SHUT-DOWN PROCEDURES: 92
J425 PERFORM SPCDS START-UP PROCEDURES: 91
J419 PERFORM BISS OPERATOR REACTION TO PRIME POWER LOSS OR SEVERE WEATHER WARNINGS: 91
E192 MAKE ENTRIES ON AF
Interactive Tooth Separation from Dental Model Using Segmentation Field
2016-01-01
Tooth segmentation on a dental model is an essential step of computer-aided-design systems for orthodontic virtual treatment planning. However, fast and accurate identification of the cutting boundary to separate teeth from the dental model still remains a challenge, due to the various geometrical shapes of teeth, complex tooth arrangements, different dental model qualities, and varying degrees of crowding. Most previously presented segmentation approaches are unable to balance fine segmentation results against simple operating procedures and low time consumption. In this article, we present a novel, effective and efficient framework that achieves tooth segmentation based on a segmentation field, which is obtained by solving a linear system defined by a discrete Laplace-Beltrami operator with Dirichlet boundary conditions. A set of contour lines is sampled from the smooth scalar field, and candidate cutting boundaries can be detected from concave regions with large variations of field data. The sensitivity of the segmentation field to concave seams facilitates effective tooth partition and avoids the need to select an appropriate curvature threshold, which is unreliable in some cases. Our tooth segmentation algorithm is robust to dental models of low quality and effective on dental models with different levels of crowding. The experiments, including segmentation tests on varying dental models with different complexity, experiments on dental meshes with different modeling resolutions and surface noise, and a comparison between our method and the morphologic skeleton segmentation method, demonstrate the effectiveness of our method. PMID:27532266
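The paper solves its linear system on a triangle mesh with a discrete Laplace-Beltrami operator; as a simplified stand-in, the sketch below solves the analogous Dirichlet problem on a regular 2-D grid with scipy.sparse, fixing a few boundary pixels to 0 and 1 and solving for the harmonic scalar field in between. The grid, the choice of fixed nodes, and the 5-point Laplacian are assumptions made for illustration only.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 32                                   # grid is n x n; unknowns are pixel field values
N = n * n
idx = lambda r, c: r * n + c

# 5-point graph Laplacian on the grid
rows, cols, vals = [], [], []
for r in range(n):
    for c in range(n):
        nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < n and 0 <= c + dc < n]
        rows.append(idx(r, c)); cols.append(idx(r, c)); vals.append(float(len(nbrs)))
        for rr, cc in nbrs:
            rows.append(idx(r, c)); cols.append(idx(rr, cc)); vals.append(-1.0)
L = sp.csr_matrix((vals, (rows, cols)), shape=(N, N))

# Dirichlet constraints: field = 0 on the left edge, 1 on the right edge
fixed = {idx(r, 0): 0.0 for r in range(n)}
fixed.update({idx(r, n - 1): 1.0 for r in range(n)})
free = np.array([i for i in range(N) if i not in fixed])
fixed_idx = np.array(list(fixed)); fixed_val = np.array([fixed[i] for i in fixed_idx])

# solve L_ff x_f = -L_fc x_c for the free nodes
b = -L[free][:, fixed_idx] @ fixed_val
field = np.empty(N)
field[fixed_idx] = fixed_val
field[free] = spsolve(L[free][:, free].tocsc(), b)
field = field.reshape(n, n)   # smooth scalar field; its contour lines give cut candidates
```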
Spinal anaesthesia with midazolam in the rat.
Bahar, M; Cohen, M L; Grinshpon, Y; Chanimov, M
1997-02-01
This study examined in an animal model whether intrathecal midazolam, alone or with fentanyl, can achieve anaesthesia sufficient for laparotomy, comparable to lidocaine. Effects on consciousness and whether anaesthesia was segmental were also examined. The haemodynamic and respiratory changes were compared with those of intrathecal lidocaine or intrathecal fentanyl alone. Sixty Wistar strain rats, with nylon catheters chronically implanted in the lumbar subarachnoid theca, were divided into six groups. Group 1 (n = 12) received 75 microL intrathecal lidocaine 2%. Group 2 (n = 12) received 75 microL intrathecal midazolam 0.1%, Group 3 (n = 12) received intrathecal 37.5 microL midazolam 0.1%, plus 37.5 microL fentanyl 0.005%. Group 4 (n = 12) received intrathecal 50 microL fentanyl 0.005%. Group 5 (n = 6) received 75 microL midazolam 0.1% iv. Group 6 (n = 6) received halothane 0.6% in oxygen by inhalation. Both groups that received intrathecal midazolam, alone or combined with fentanyl, developed effective segmental sensory and motor blockade of the hind limbs and abdominal wall, sufficient for a pain-free laparotomy procedure. Neither of these groups, unlike the group that received intrathecal lidocaine, developed a reduction in blood pressure or change in heart rate at the time of maximal sensory or motor blockade, nor were there changes in the arterial blood gases or respiratory rate. Midazolam, when injected intrathecally, produces reversible, segmental, spinally mediated antinociception, sufficient to provide balanced anaesthesia for abdominal surgery.
Arabidopsis phenotyping through Geometric Morphometrics.
Manacorda, Carlos A; Asurmendi, Sebastian
2018-06-18
Recently, much technical progress has been achieved in the field of plant phenotyping. High-throughput platforms and the development of improved algorithms for rosette image segmentation now make it possible to extract shape and size parameters for genetic, physiological and environmental studies on a large scale. The development of low-cost phenotyping platforms and freeware resources makes it possible to widely expand phenotypic analysis tools for Arabidopsis. However, objective descriptors of shape parameters that could be used independently of the platform and segmentation software employed are still lacking, and shape descriptions still rely on ad hoc or sometimes even contradictory descriptors, which could make comparisons difficult and perhaps inaccurate. Modern geometric morphometrics is a family of methods in quantitative biology proposed to be the main source of data and analytical tools in the emerging field of phenomics studies. Based on the location of landmarks (corresponding points) over imaged specimens and by combining geometry, multivariate analysis and powerful statistical techniques, these tools offer the possibility to reproducibly and accurately account for shape variations amongst groups and measure them in shape distance units. Here, a particular scheme of landmark placement on Arabidopsis rosette images is proposed to study shape variation in the case of viral infection processes. Shape differences between controls and infected plants are quantified throughout the infectious process and visualized. Quantitative comparisons between two unrelated ssRNA+ viruses are shown and reproducibility issues are assessed. Combined with the newest automated platforms and plant segmentation procedures, geometric morphometric tools could boost phenotypic feature extraction and processing in an objective, reproducible manner.
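A minimal example of the core geometric-morphometrics step implied above: aligning two landmark configurations and measuring their shape difference with ordinary Procrustes analysis. The landmark coordinates below are fabricated; only the number and ordering of landmarks must match between specimens.

```python
import numpy as np
from scipy.spatial import procrustes

# hypothetical (x, y) landmarks placed on two rosette images, e.g. leaf tips and
# petiole-lamina junctions; same number and order of landmarks per specimen
control = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.0], [1.0, 1.5],
                    [0.5, 2.5], [1.5, 2.5], [1.0, 3.0], [1.0, -1.0]])
infected = control + np.random.default_rng(1).normal(scale=0.15, size=control.shape)

# procrustes removes translation, scale and rotation, then reports the residual
# sum of squared differences ("disparity") as a shape-distance measure
aligned_a, aligned_b, disparity = procrustes(control, infected)
print("Procrustes shape distance (disparity):", disparity)
```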
Paroxysmal atrial fibrillation prediction method with shorter HRV sequences.
Boon, K H; Khalil-Hani, M; Malarvili, M B; Sia, C W
2016-10-01
This paper proposes a method that predicts the onset of paroxysmal atrial fibrillation (PAF), using heart rate variability (HRV) segments that are shorter than those applied in existing methods, while maintaining good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of the onset of PAF is clinically important because it increases the possibility of electrically stabilizing the heart and preventing the onset of atrial arrhythmias with different pacing techniques. We investigate the effect of HRV features extracted from different lengths of HRV segments prior to PAF onset with the proposed PAF prediction method. The pre-processing stage of the predictor includes QRS detection, HRV quantification and ectopic beat correction. Time-domain, frequency-domain, non-linear and bispectrum features are then extracted from the quantified HRV. In the feature selection, the HRV feature set and classifier parameters are optimized simultaneously using an optimization procedure based on a genetic algorithm (GA). Both the full feature set and a statistically significant feature subset are optimized by the GA. For the statistically significant feature subset, the Mann-Whitney U test is used to filter out features that do not pass the statistical test at the 20% significance level. The final stage of our predictor is the classifier, which is based on a support vector machine (SVM). A 10-fold cross-validation is applied in performance evaluation, and the proposed method achieves 79.3% prediction accuracy using 15-minute HRV segments. This accuracy is comparable to that achieved by existing methods that use 30-minute HRV segments, most of which achieve accuracy of around 80%. More importantly, our method significantly outperforms those that applied segments shorter than 30 minutes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
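The GA feature selection is not reproduced here; the sketch below only shows the final stage of the pipeline as described, an SVM classifier evaluated with 10-fold cross-validation, on a stand-in HRV feature matrix. The feature matrix, labels and hyperparameters are placeholders, not the paper's values.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_segments, n_features = 100, 12           # e.g. time/frequency/non-linear HRV features
X = rng.normal(size=(n_segments, n_features))
y = rng.integers(0, 2, size=n_segments)    # 1 = segment preceding PAF onset, 0 = distant

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print("10-fold accuracy: %.1f%% +/- %.1f%%" % (100 * scores.mean(), 100 * scores.std()))
```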
NASA Astrophysics Data System (ADS)
Christos, Kourouklas; Eleftheria, Papadimitriou; George, Tsaklidis; Vassilios, Karakostas
2018-06-01
The determination of the recurrence time of strong earthquakes above a predefined magnitude, associated with specific fault segments, is an important component of seismic hazard assessment. The occurrence of these earthquakes is neither periodic nor completely random but often clustered in time. This fact, in connection with their limited number due to the short span of the available catalogs, inhibits a deterministic approach for recurrence time calculation, and for this reason application of stochastic processes is required. In this study, recurrence time determination in the area of the North Aegean Trough (NAT) is developed by the application of time-dependent stochastic models, introducing an elastic rebound motivated concept for individual fault segments located in the study area. For this purpose, all the available information on strong earthquakes (historical and instrumental) with Mw ≥ 6.5 is compiled and examined for magnitude completeness. Two possible starting dates of the catalog are assumed with the same magnitude threshold, Mw ≥ 6.5, and the data are divided into five sets according to a new segmentation model for the study area. Three Brownian Passage Time (BPT) models with different levels of aperiodicity are applied and evaluated with the Anderson-Darling test for each segment in both catalog versions where possible. The preferred models are then used to estimate the occurrence probabilities of Mw ≥ 6.5 shocks on each segment of the NAT for the next 10, 20, and 30 years since 01/01/2016. Uncertainties in the probability calculations are also estimated using a Monte Carlo procedure. It must be mentioned that the provided results should be treated carefully because of their dependence on the initial assumptions. Such assumptions exhibit large variability, and alternative choices may return different final results.
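The Brownian Passage Time distribution is an inverse Gaussian with mean recurrence time and an aperiodicity parameter. The sketch below computes the conditional probability of an Mw ≥ 6.5 event in the next 10, 20 or 30 years given the elapsed time since the last event, using SciPy's invgauss parameterization (assumed mapping: mu = alpha^2, scale = mean/alpha^2). All numeric inputs are placeholders, not the paper's estimates.

```python
from scipy.stats import invgauss

def bpt_conditional_probability(mean_rt, alpha, elapsed, horizon):
    """P(event in (elapsed, elapsed + horizon] | no event up to elapsed) under a BPT model.

    BPT(mean_rt, alpha) is taken as scipy's invgauss with mu = alpha**2 and
    scale = mean_rt / alpha**2 (an assumed parameter mapping).
    """
    dist = invgauss(mu=alpha ** 2, scale=mean_rt / alpha ** 2)
    survive = dist.sf(elapsed)
    return (dist.cdf(elapsed + horizon) - dist.cdf(elapsed)) / survive

# placeholder values for one fault segment: mean recurrence 150 yr, aperiodicity 0.5,
# last large event 80 years before 01/01/2016
for horizon in (10, 20, 30):
    p = bpt_conditional_probability(mean_rt=150.0, alpha=0.5, elapsed=80.0, horizon=horizon)
    print(f"P(M>=6.5 within next {horizon} yr) = {p:.3f}")
```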
Roe, Matthew T; Chen, Anita Y; Mehta, Rajendra H; Li, Yun; Brindis, Ralph G; Smith, Sidney C; Rumsfeld, John S; Gibler, W Brian; Ohman, E Magnus; Peterson, Eric D
2007-09-04
Since the broad dissemination of practice guidelines, the association of specialty care with the treatment of patients with acute coronary syndromes has not been studied. We evaluated 55 994 patients with non-ST-segment elevation acute coronary syndromes (ischemic ST-segment changes and/or positive cardiac markers) included in the CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes With Early Implementation of the ACC/AHA Guidelines) Quality Improvement Initiative from January 2001 through September 2003 at 301 tertiary US hospitals with full revascularization capabilities. We compared baseline characteristics, the use of American College of Cardiology/American Heart Association guidelines class I recommendations, and in-hospital outcomes by the specialty of the primary in-patient service (cardiology versus noncardiology). A total of 35 374 patients (63.2%) were primarily cared for by a cardiology service, and these patients had lower-risk clinical characteristics, but they more commonly received acute (≤24 hours) medications, invasive cardiac procedures, and discharge medications and lifestyle interventions. Acute care processes were improved when care was provided by a cardiology service regardless of the propensity to receive cardiology care. The adjusted risk of in-hospital mortality was lower with care provided by a cardiology service (adjusted odds ratio 0.80, 95% confidence interval 0.73 to 0.88), and adjustment for differences in the use of acute medications and invasive procedures partially attenuated this mortality difference (adjusted odds ratio 0.92, 95% confidence interval 0.83 to 1.02). Non-ST-segment elevation acute coronary syndrome patients primarily cared for by a cardiology inpatient service more commonly received evidence-based treatments and had a lower risk of mortality, but these patients had lower-risk clinical characteristics. Results from the present analysis highlight the difficulties with accurately determining how specialty care is associated with treatment patterns and clinical outcomes for patients with acute coronary syndromes. Novel methodologies for evaluating the influence of specialty care for these patients need to be developed and applied to future studies.
Narayanan, Shrikanth
2009-01-01
We describe a method for unsupervised region segmentation of an image using its spatial frequency domain representation. The algorithm was designed to process large sequences of real-time magnetic resonance (MR) images containing the 2-D midsagittal view of a human vocal tract airway. The segmentation algorithm uses an anatomically informed object model, whose fit to the observed image data is hierarchically optimized using a gradient descent procedure. The goal of the algorithm is to automatically extract the time-varying vocal tract outline and the position of the articulators to facilitate the study of the shaping of the vocal tract during speech production. PMID:19244005
Model-based segmentation of hand radiographs
NASA Astrophysics Data System (ADS)
Weiler, Frank; Vogelsang, Frank
1998-06-01
An important procedure in pediatrics is to determine the skeletal maturity of a patient from radiographs of the hand. There is great interest in the automation of this tedious and time-consuming task. We present a new method for the segmentation of the bones of the hand, which allows the assessment of skeletal maturity with an appropriate database of reference bones, similar to the atlas-based methods. The proposed algorithm uses an extended active contour model for the segmentation of the hand bones, which incorporates a priori knowledge of the shape and topology of the bones in an additional energy term. This 'scene knowledge' is integrated into a complex hierarchical image model that is used for the image analysis task.
A continuous analog of run length distributions reflecting accumulated fractionation events.
Yu, Zhe; Sankoff, David
2016-11-11
We propose a new, continuous model of the fractionation process (duplicate gene deletion after polyploidization) on the real line. The aim is to infer how much DNA is deleted at a time, based on segment lengths for alternating deleted (invisible) and undeleted (visible) regions. After deriving a number of analytical results for "one-sided" fractionation, we undertake a series of simulations that help us identify the distribution of segment lengths as a gamma with shape and rate parameters evolving over time. This leads to an inference procedure based on observed length distributions for visible and invisible segments. We suggest extensions of this mathematical and simulation work to biologically realistic discrete models, including two-sided fractionation.
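A minimal illustration of the inference step described above: fitting a gamma distribution (shape and rate) to observed segment lengths with SciPy. The data here are simulated with arbitrary parameters, not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# simulated lengths of undeleted ("visible") segments, in arbitrary units
true_shape, true_rate = 1.8, 0.02
visible_lengths = rng.gamma(true_shape, 1.0 / true_rate, size=2000)

# fit a gamma with location fixed at 0; scipy reports scale = 1 / rate
shape_hat, loc, scale_hat = stats.gamma.fit(visible_lengths, floc=0)
rate_hat = 1.0 / scale_hat
print(f"estimated shape = {shape_hat:.2f}, estimated rate = {rate_hat:.4f}")
```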
USDA analyst review of the LACIE IMAGE-100 hybrid system test
NASA Technical Reports Server (NTRS)
Ashburn, P.; Buelow, K.; Hansen, H. L.; May, G. A. (Principal Investigator)
1979-01-01
Fifty operational segments from the U.S.S.R., 40 test segments from Canada, and 24 test segments from the United States were used to provide a wide range of geographic conditions for USDA analysts during a test to determine the effectiveness of labeling single-pixel training fields (dots) using Procedure 1 on the Image-100 hybrid system, and clustering and classifying on the Earth Resources Interactive Processing System. The analysts had additional on-line capabilities such as interactive dot labeling, class or cluster map overlay flickers, and flashing of all dots of equal spectral value. Results on the Image-100 hybrid system are described, and analyst problems and recommendations are discussed.
Robertson, Peter A; Armstrong, William A; Woods, Daniel L; Rawlinson, Jeremy J
2018-04-24
Controlled cadaveric study of surgical technique in transforaminal and posterior lumbar interbody fusion (TLIF and PLIF). The objective was to evaluate the contribution of surgical techniques and cage variables to lordosis re-creation in posterior interbody fusion (TLIF/PLIF). The major contributors to lumbar lordosis are the lordotic lower lumbar discs. The pathologies requiring treatment with segmental fusion are frequently hypolordotic or kyphotic. Current posterior-based interbody techniques have a poor track record for recreating lordosis, although re-creation of lordosis with optimum anatomical alignment is associated with better outcomes and reduced adjacent segment change needing revision. It is unclear whether surgical techniques or cage parameters contribute significantly to lordosis re-creation. Eight instrumented cadaveric motion segments were evaluated with pre- and post-experimental radiological assessment of lordosis. Each motion segment was instrumented with pedicle screw fixation to allow segmental stabilization. The surgical procedures were unilateral TLIF with an 18° lordotic and 27 mm length cage, unilateral TLIF (18°, 27 mm) with bilateral facetectomy, unilateral TLIF (18°, 27 mm) with posterior column osteotomy, PLIF with bilateral cages (18°, 22 mm), and PLIF with bilateral cages (24°, 22 mm). Cage insertion used an 'insert and rotate' technique. Pooled results demonstrated a mean increase in lordosis of 2.2° with each procedural step (the lordosis increase was serially 1.8°, 3.5°, 1.6°, 2.5° and 1.6° through the procedures). TLIF and PLIF with posterior column osteotomy increased lordosis significantly compared with unilateral TLIF and TLIF with bilateral facetectomy. The major contributors to lordosis re-creation were posterior column osteotomy, and PLIF with paired shorter cages rather than TLIF. This study demonstrates that the surgical approach to posterior interbody surgery influences lordosis gain and that posterior column osteotomy optimizes lordosis gain in TLIF. The bilateral cages used in PLIF are shorter and associated with further gain in lordosis. This information has the potential to aid surgical planning when attempting to recreate lordosis to optimize outcomes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Q; Yan, D
2014-06-01
Purpose: Evaluate the accuracy of atlas-based auto segmentation of organs at risk (OARs) on both helical CT (HCT) and cone beam CT (CBCT) images in head and neck (HN) cancer adaptive radiotherapy (ART). Methods: Six HN patients treated in the ART process were included in this study. For each patient, three images were selected: pretreatment planning CT (PreTx-HCT), in-treatment CT for replanning (InTx-HCT), and a CBCT acquired on the same day as the InTx-HCT. Three clinical procedures of auto segmentation and deformable registration performed in the ART process were evaluated: a) auto segmentation on PreTx-HCT using multi-subject atlases, b) intra-patient propagation of OARs from PreTx-HCT to InTx-HCT using deformable HCT-to-HCT image registration, and c) intra-patient propagation of OARs from PreTx-HCT to CBCT using deformable CBCT-to-HCT image registration. Seven OARs (brainstem, cord, L/R parotid, L/R submandibular gland and mandible) were manually contoured on PreTx-HCT and InTx-HCT for comparison. In addition, manual contours on InTx-CT were copied onto the same-day CBCT, and a local region rigid body registration was performed accordingly for each individual OAR. For procedures a) and b), auto contours were compared to manual contours, and for c) auto contours were compared to the rigidly transferred contours on CBCT. Dice similarity coefficients (DSC) and mean surface distances of agreement (MSDA) were calculated for evaluation. Results: For procedure a), the mean DSC/MSDA of most OARs are >80%/±2mm. For intra-patient HCT-to-HCT propagation, the results improved to >85%/±1.5mm. Compared to HCT-to-HCT, the mean DSC for HCT-to-CBCT propagation drops ∼2–3% and MSDA increases ∼0.2mm. This result indicates that the inferior imaging quality of CBCT seems to degrade auto propagation performance only slightly. Conclusion: Auto segmentation and deformable propagation can generate OAR structures on HCT and CBCT images with clinically acceptable accuracy. Therefore, they can be reliably implemented in the clinical HN ART process.
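A minimal NumPy/SciPy sketch of the two agreement measures used in the evaluation above, the Dice similarity coefficient and a symmetric mean surface distance, computed on toy binary masks. Isotropic 1 mm spacing and the synthetic masks are assumptions; this is not the clinical evaluation code.

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_surface_distance(a, b, spacing_mm=1.0):
    """Symmetric mean distance between the surfaces of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~ndimage.binary_erosion(a)          # boundary voxels of mask a
    surf_b = b & ~ndimage.binary_erosion(b)          # boundary voxels of mask b
    dist_to_b = ndimage.distance_transform_edt(~surf_b) * spacing_mm
    dist_to_a = ndimage.distance_transform_edt(~surf_a) * spacing_mm
    return 0.5 * (dist_to_b[surf_a].mean() + dist_to_a[surf_b].mean())

auto = np.zeros((60, 60, 60), bool); auto[20:40, 20:40, 20:40] = True
manual = np.zeros_like(auto); manual[21:41, 20:40, 20:40] = True
print("DSC = %.3f, MSDA = %.2f mm" % (dice(auto, manual), mean_surface_distance(auto, manual)))
```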
Pravastatin and endothelium dependent vasomotion after coronary angioplasty: the PREFACE trial.
Mulder, H J; Schalij, M J; Kauer, B; Visser, R F; van Dijkman, P R; Jukema, J W; Zwinderman, A H; Bruschke, A V
2001-11-01
To test the hypothesis that the 3-hydroxy-3-methylglutaryl coenzyme-A reductase inhibitor pravastatin ameliorates endothelium mediated responses of dilated coronary segments: the PREFACE (pravastatin related effects following angioplasty on coronary endothelium) trial. A double blind, randomised, placebo controlled, multicentre study. Four hospitals in the Netherlands. 63 non-smoking, non-hypercholesterolaemic patients scheduled for elective balloon angioplasty (pravastatin 34, placebo 29). The effects of three months of pravastatin treatment (40 mg daily) on endothelium dependent vasomotor function were studied. Balloon angioplasty was undertaken one month after randomisation, and coronary vasomotor function tests using acetylcholine were performed two months after balloon angioplasty. The angiograms were analysed quantitatively. The efficacy measure was the acetylcholine induced change in mean arterial diameter, determined in the dilated segment and in an angiographically normal segment of an adjacent non-manipulated coronary artery. Increasing acetylcholine doses produced vasoconstriction in the dilated segments (p = 0.004) but not in the normal segments. Pravastatin did not affect the vascular response to acetylcholine in either the dilated segments (p = 0.09) or the non-dilated sites. Endothelium dependent vasomotion in normal segments was correlated with that in dilated segments (r = 0.47, p < 0.001). There were fewer procedure related events in the pravastatin group than in the placebo group (p < 0.05). Endothelium dependent vasomotion in normal segments is correlated with that in dilated segments. A significant beneficial effect of pravastatin on endothelial function could not be shown, but in the dilated segments there was a trend towards a beneficial treatment effect in the pravastatin group.
Le Floc’h, Simon; Tracqui, Philippe; Finet, Gérard; Gharib, Ahmed M.; Maurice, Roch L.; Cloutier, Guy; Pettigrew, Roderic I.
2016-01-01
It is now recognized that prediction of the vulnerable coronary plaque rupture requires not only an accurate quantification of fibrous cap thickness and necrotic core morphology but also a precise knowledge of the mechanical properties of plaque components. Indeed, such knowledge would allow a precise evaluation of the peak cap-stress amplitude, which is known to be a good biomechanical predictor of plaque rupture. Several studies have been performed to reconstruct a Young’s modulus map from strain elastograms. It seems that the main issue in improving such methods lies not in the optimization algorithm itself, but rather in the preconditioning step, which requires the best possible estimation of the plaque components’ contours. The present theoretical study was therefore designed to develop: 1) a preconditioning model to extract the plaque morphology in order to initiate the optimization process, and 2) an approach combining a dynamic segmentation method with an optimization procedure to highlight the modulogram of the atherosclerotic plaque. This methodology, based on the continuum mechanics theory prescribing the strain field, was successfully applied to seven intravascular ultrasound coronary lesion morphologies. The reconstructed cap thickness, necrotic core area, calcium area, and the Young’s moduli of the calcium, necrotic core, and fibrosis were obtained with mean relative errors of 12%, 4% and 1%, 43%, 32%, and 2%, respectively. PMID:19164080
GRAPhEME: a setup to measure (n, xn γ) reaction cross sections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henning, Greg; Bacquias, A.; Capdevielle, O.
2015-07-01
Most nuclear reactor developments use evaluated databases for numerical simulations. However, these databases still present large uncertainties and disagreements. To improve their level of precision, new measurements are needed, in particular for (n, xn) reactions, which are of great importance as they modify the neutron spectrum and the neutron population and produce radioactive species. The IPHC group started an experimental program to measure (n, xn gamma) reaction cross sections using prompt gamma spectroscopy and neutron energy determination by time of flight. Measurements of (n, xn gamma) cross sections have been performed for {sup 235,238}U, {sup 232}Th, {sup nat,182,183,184,186}W, and {sup nat}Zr. The experimental setup is installed at the GELINA neutron beam facility (Geel, Belgium). The setup has recently been upgraded with the addition of a highly segmented 36-pixel planar HPGe detector. Significant efforts have been made to reduce radiation background and electromagnetic perturbations. The setup is equipped with a high-rate digital acquisition system. The analysis of the segmented detector data requires a specific procedure to account for cross signals between pixels. Particular attention is paid to the precision of the measurement. The setup characteristics and the analysis procedure will be presented along with the acquisition and analysis challenges. Examples of results and their impact on models will be discussed. (authors)
Kawakubo, Kazumichi; Kawakami, Hiroshi; Kuwatani, Masaki; Kudo, Taiki; Abe, Yoko; Kawahata, Shuhei; Kubo, Kimitoshi; Kubota, Yoshimasa; Sakamoto, Naoya
2015-02-01
Bilateral self-expandable metallic stent (SEMS) placement for the management of unresectable malignant hilar biliary obstruction (UMHBO) is technically challenging to perform using the existing metallic stents with thick delivery systems. The recently developed 6-Fr delivery systems could facilitate a single-step simultaneous side-by-side placement through the accessory channel of the duodenoscope. The aim of this study was to evaluate the feasibility of this procedure. Between May and September 2013, 13 consecutive patients with UMHBO underwent a single-step simultaneous side-by-side placement of SEMS with the 6-Fr delivery system. The technical success rate, stent patency, and rate of complications were evaluated from the prospectively collected database. Technical success was achieved in 11 (84.6%, 95% confidence interval [CI]: 57.8-95.8) patients. The median procedure time was 25 min. Early and late complications were observed in 23% (one segmental cholangitis and two liver abscesses) and 15% (one segmental cholangitis and one cholecystitis) of patients, respectively. Median dysfunction-free patency was 263 days (95% CI: 37-263). Five patients (38%) experienced stent occlusion that was successfully managed by endoscopic stent placement. A single-step simultaneous side-by-side placement of SEMS with a 6-Fr delivery system was feasible for the management of UMHBO. © 2014 Japanese Society of Hepato-Biliary-Pancreatic Surgery.
Kuwayama, Kenji; Nariai, Maika; Miyaguchi, Hajime; Iwata, Yuko T; Kanamori, Tatsuyuki; Tsujikawa, Kenji; Yamamuro, Tadashi; Segawa, Hiroki; Abe, Hiroko; Iwase, Hirotaro; Inoue, Hiroyuki
2018-07-01
Sleeping aids are often abused in the commission of drug-facilitated crimes. Generally, there is little evidence that a victim ingested a spiked drink unknowingly because the unconscious victim cannot report the situation to the police immediately after the crime occurred. Although conventional segmental hair analysis can estimate the number of months since a targeted drug was ingested, this analysis cannot determine the specific day of ingestion. We recently developed a method of micro-segmental hair analysis using internal temporal markers (ITMs) to estimate the day of drug ingestion. This method was based on volunteer ingestion of ITMs to determine a timescale within individual hair strands, by segmenting a single hair strand at 0.4-mm intervals, corresponding to daily hair growth. This study assessed the ability of this method to estimate the day of ingestion of an over-the-counter sleeping aid, diphenhydramine, which can be easily abused. To model ingestion of a diphenhydramine-spiked drink unknowingly, each subject ingested a dose of diphenhydramine, followed by ingestion of two doses of the ITM, chlorpheniramine, 14 days apart. Several hair strands were collected from each subject's scalp several weeks after the second ITM ingestion. Diphenhydramine and ITM were detected at specific regions within individual hair strands. The day of diphenhydramine ingestion was estimated from the distances between the regions and the days of ITM ingestion. The error between estimated and actual ingestion day ranged from -0.1 to 1.9 days regardless of subjects and hair collection times. The total time required for micro-segmental analysis of 96 hair segments (hair length: 3.84 cm) was approximately 2 days and the cost was almost the same as in general drug analysis. This procedure may be applicable to the investigation of crimes facilitated by various drugs. Copyright © 2018 Elsevier B.V. All rights reserved.
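The timescale calculation described above reduces to simple arithmetic once the marker and drug regions have been located along a strand; the sketch below illustrates it with hypothetical positions (in mm from the scalp, where older material lies farther from the root) and hypothetical ITM ingestion dates, not study data.

```python
# Illustrative arithmetic only; all positions and dates are hypothetical.
from datetime import date, timedelta

itm1_day = date(2018, 3, 1)             # first ITM (chlorpheniramine) ingestion
itm2_day = date(2018, 3, 15)            # second ITM ingestion, 14 days later
itm1_pos_mm, itm2_pos_mm = 12.4, 7.0    # detected ITM regions (farther from root = older)
drug_pos_mm = 14.3                      # detected diphenhydramine region

# Per-strand growth rate from the two known dates (expected to be near 0.4 mm/day).
growth_mm_per_day = (itm1_pos_mm - itm2_pos_mm) / (itm2_day - itm1_day).days

# The drug region lies beyond the first ITM region, i.e. it was ingested earlier.
days_before_itm1 = (drug_pos_mm - itm1_pos_mm) / growth_mm_per_day
estimated_day = itm1_day - timedelta(days=round(days_before_itm1))
print(estimated_day)   # -> 2018-02-24 with these hypothetical numbers
```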
NASA Technical Reports Server (NTRS)
Landgrebe, D. A. (Principal Investigator); Hixson, M. M.; Davis, B. J.; Bauer, M. E.
1978-01-01
The author has identified the following significant results. A stratification was performed and sample segments were selected for an initial investigation of multicrop problems in order to support development and evaluation of procedures for using LACIE and other technologies for the classification of corn and soybeans, to identify factors likely to affect classification performance, and to evaluate problems encountered and techniques which are applicable to the crop estimation problem in foreign countries. Two types of samples, low density and high density, supporting these requirements were selected as a research data set for an initial evaluation of technical issues. Looking at the geographic location of the strata, the system appears to be logical and the various segments seem to represent different conditions. This result is supportive not only of the variables and the methodology employed in the stratification, but also of the validity of the data sets employed.
Qin, Wan
2017-01-01
Accurate measurement of edema volume is essential for the investigation of tissue response and recovery following a traumatic injury. The measurements must be noninvasive and repeatable over time so as to monitor tissue response throughout the healing process. Such techniques are particularly necessary for the evaluation of therapeutics that are currently in development to suppress or prevent edema formation. In this study, we propose to use the optical coherence tomography (OCT) technique to image and quantify edema in a mouse ear model where the injury is induced by a superficial-thickness burn. Extraction of edema volume is achieved by an attenuation compensation algorithm performed on the three-dimensional OCT images, followed by two segmentation procedures. In addition to edema volume, the segmentation method also enables accurate thickness mapping of edematous tissue, which is an important characteristic of the external symptoms of edema. To the best of our knowledge, this is the first method for noninvasively measuring absolute edema volume. PMID:27282161
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schalkoff, R.J.
This report summarizes work after 4 years of a 3-year project (a 12-month no-cost extension of the above-referenced project was granted). The fourth generation of a vision sensing head for geometric and photometric scene sensing has been built and tested. Estimation algorithms for automatic sensor calibration updating under robot motion have been developed and tested. We have modified the geometry extraction component of the rendering pipeline. Laser scanning now produces highly accurate points on segmented curves. These point-curves are input to a NURBS (non-uniform rational B-spline) skinning procedure to produce interpolating surface segments. The NURBS formulation includes quadrics as a sub-class; thus this formulation allows much greater flexibility without the attendant instability of generating an entire quadric surface. We have also implemented correction for diffuse lighting and specular effects. The QRobot joint-level control was extended to a complete semi-autonomous robot control system for D and D operations. The imaging and VR subsystems have been integrated and tested.
Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC (version 4.0) technical manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1988-01-01
The information contained in the NASARC (Version 4.0) Technical Manual and NASARC (Version 4.0) User's Manual relates to the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbits. Array dimensions within the software were structured to fit within the currently available 12 megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.
NASA Astrophysics Data System (ADS)
Islam, Amina; Chevalier, Sylvie; Sassi, Mohamed
2018-04-01
With advances in imaging techniques and computational power, Digital Rock Physics (DRP) is becoming an increasingly popular tool to characterize reservoir samples and determine their internal structure and flow properties. In this work, we present the details for imaging, segmentation, as well as numerical simulation of single-phase flow through a standard homogenous Silurian dolomite core plug sample as well as a heterogeneous sample from a carbonate reservoir. We develop a procedure that integrates experimental results into the segmentation step to calibrate the porosity. We also look into using two different numerical tools for the simulation; namely Avizo Fire Xlab Hydro that solves the Stokes' equations via the finite volume method and Palabos that solves the same equations using the Lattice Boltzmann Method. Representative Elementary Volume (REV) and isotropy studies are conducted on the two samples and we show how DRP can be a useful tool to characterize rock properties that are time consuming and costly to obtain experimentally.
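One way to realize the porosity calibration mentioned above is to tune a single global grey-level threshold until the segmented pore fraction matches the laboratory value; the sketch below is a minimal illustration of that idea rather than the authors' workflow, and `lab_porosity` is a hypothetical measured value.

```python
# Minimal sketch: bisection on a grey-level threshold so that the pore-voxel
# fraction of the segmented micro-CT volume matches the laboratory porosity.
import numpy as np

def calibrate_threshold(ct_volume, lab_porosity, iters=40):
    """Voxels with grey value below the returned threshold are labelled as pore."""
    lo, hi = float(ct_volume.min()), float(ct_volume.max())
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        porosity = np.mean(ct_volume < mid)   # current pore-voxel fraction
        if porosity < lab_porosity:
            lo = mid                          # too few pore voxels -> raise threshold
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example with synthetic data:
# rng = np.random.default_rng(0)
# t = calibrate_threshold(rng.normal(1000, 200, (100, 100, 100)), lab_porosity=0.18)
```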
1994-07-01
Fragment from DoD 4100.39-M, Volume 8, Chapter 5 (alphabetic index of document identifier codes): specifies the required mix of segments or individual data elements to be extracted in segment R of an interrogation transaction (LTI), including the Data Record Number (DRN 0950) and the Continuation Indicator Code (DRN 8555) in position 80 of the record.
Dipylidium caninum infection in dogs infested with fleas.
Wani, Z A; Allaie, I M; Shah, B M; Raies, A; Athar, H; Junaid, S
2015-03-01
The present study pertains to Dipylidium caninum infection in dogs infested with fleas. Twenty dogs were presented to the Division of Surgery, SKUAST-K, for different surgical procedures. The majority of the dogs had a history of pruritus and loss of weight, as well as rubbing their perineal region against the wall. On external examination, the dogs were found to be infested with Ctenocephalides canis. When the dogs were anesthetized, motile segments were seen emerging from the anus; these were identified as mature segments of D. caninum.
C-shaped specimen plane strain fracture toughness tests. [metallic materials
NASA Technical Reports Server (NTRS)
Buzzard, R. T.; Fisher, D. M.
1977-01-01
Test equipment, procedures, and data obtained in the evaluation of C-shaped specimens are presented. Observations reported on include: specimen preparation and dimensional measurement; modifications to the standard ASTM E 399 displacement gage, which permit punch mark gage point engagement; and a measurement device for determining the interior and exterior radii of ring segments. Load displacement ratios were determined experimentally which agreed with analytically determined coefficients for three different gage lengths on the inner surfaces of radially-cracked ring segments.
Morales, Juan; Alonso-Nanclares, Lidia; Rodríguez, José-Rodrigo; DeFelipe, Javier; Rodríguez, Ángel; Merchán-Pérez, Ángel
2011-01-01
The synapses in the cerebral cortex can be classified into two main types, Gray's type I and type II, which correspond to asymmetric (mostly glutamatergic excitatory) and symmetric (inhibitory GABAergic) synapses, respectively. Hence, the quantification and identification of their different types and the proportions in which they are found are extraordinarily important in terms of brain function. The ideal approach to calculate the number of synapses per unit volume is to analyze 3D samples reconstructed from serial sections. However, obtaining serial sections by transmission electron microscopy is an extremely time-consuming and technically demanding task. Using focused ion beam/scanning electron microscopy, we recently showed that virtually all synapses can be accurately identified as asymmetric or symmetric synapses when they are visualized, reconstructed, and quantified from large 3D tissue samples obtained in an automated manner. Nevertheless, the analysis, segmentation, and quantification of synapses is still a labor-intensive procedure. Thus, novel solutions are currently necessary to deal with the large volume of data that is being generated by automated 3D electron microscopy. Accordingly, we have developed ESPINA, a software tool that performs the automated segmentation and counting of synapses in a reconstructed 3D volume of the cerebral cortex, and that greatly facilitates and accelerates these processes. PMID:21633491
Rapid detection and subtyping of human influenza A viruses and reassortants by pyrosequencing.
Deng, Yi-Mo; Caldwell, Natalie; Barr, Ian G
2011-01-01
Given the continuing co-circulation of the 2009 H1N1 pandemic influenza A viruses with seasonal H3N2 viruses, rapid and reliable detection of newly emerging influenza reassortant viruses is important to enhance our influenza surveillance. A novel pyrosequencing assay was developed for the rapid identification and subtyping of potential human influenza A virus reassortants based on all eight gene segments of the virus. Except for HA and NA genes, one universal set of primers was used to amplify and subtype each of the six internal genes. With this method, all eight gene segments of 57 laboratory isolates and 17 original specimens of seasonal H1N1, H3N2 and 2009 H1N1 pandemic viruses were correctly matched with their corresponding subtypes. In addition, this method was shown to be capable of detecting reassortant viruses by correctly identifying the source of all 8 gene segments from three vaccine production reassortant viruses and three H1N2 viruses. In summary, this pyrosequencing assay is a sensitive and specific procedure for screening large numbers of viruses for reassortment events amongst the commonly circulating human influenza A viruses, which is more rapid and cheaper than using conventional sequencing approaches.
de Souza Baptista, Roberto; Bo, Antonio P L; Hayashibe, Mitsuhiro
2017-06-01
Performance assessment of human movement is critical in diagnosis and motor-control rehabilitation. Recent developments in portable sensor technology enable clinicians to measure spatiotemporal aspects to aid in the neurological assessment. However, the extraction of quantitative information from such measurements is usually done manually through visual inspection. This paper presents a novel framework for automatic human movement assessment that executes segmentation and motor performance parameter extraction in time-series of measurements from a sequence of human movements. We use the elements of a Switching Linear Dynamic System model as building blocks to translate formal definitions and procedures from human movement analysis. Our approach provides a method for users with no expertise in signal processing to create models for movements using a labeled dataset and later use them for automatic assessment. We validated our framework on preliminary tests involving six healthy adult subjects who executed common movements from functional tests and rehabilitation exercise sessions, such as sit-to-stand and lateral elevation of the arms, and five elderly subjects, two of whom had limited mobility, who executed the sit-to-stand movement. The proposed method worked on random motion sequences for the dual purpose of movement segmentation (accuracy of 72%-100%) and motor performance assessment (mean error of 0%-12%).
Virtual endoscopy using spherical QuickTime-VR panorama views.
Tiede, Ulf; von Sternberg-Gospos, Norman; Steiner, Paul; Höhne, Karl Heinz
2002-01-01
Virtual endoscopy needs some precomputation of the data (segmentation, path finding) before the diagnostic process can take place. We propose a method that precomputes multinode spherical panorama movies using QuickTime VR. This technique allows almost the same navigation and visualization capabilities as a real endoscopic procedure while significantly reducing the interaction input required, and the movie serves as a record of the procedure.
Development of a composite geodetic structure for space construction, phase 2
NASA Technical Reports Server (NTRS)
1981-01-01
Primary physical and mechanical properties were defined for pultruded hybrid HMS/E-glass P1700 rod material used for the fabrication of geodetic beams. Key properties established were used in the analysis, design, fabrication, instrumentation, and testing of a geodetic parameter cylinder and a lattice cone closeout joined to a short cylindrical geodetic beam segment. Requirements of structural techniques were accomplished. Analytical procedures were refined and extended to include the effect of rod dimensions for the helical and longitudinal members on local buckling, and the effect of different flexural and extensional moduli on general instability buckling.
On the dynamics of chain systems. [applications in manipulator and human body models
NASA Technical Reports Server (NTRS)
Huston, R. L.; Passerello, C. E.
1974-01-01
A computer-oriented method for obtaining dynamical equations of motion for chain systems is presented. A chain system is defined as an arbitrarily assembled set of rigid bodies such that adjoining bodies have at least one common point and such that closed loops are not formed. The equations of motion are developed through the use of Lagrange's form of d'Alembert's principle. The method and procedure is illustrated with an elementary study of a tripod space manipulator. The method is designed for application with systems such as human body models, chains and cables, and dynamic finite-segment models.
Multiphoton Scattering Tomography with Coherent States.
Ramos, Tomás; García-Ripoll, Juan José
2017-10-13
In this work we develop an experimental procedure to interrogate the single- and multiphoton scattering matrices of an unknown quantum system interacting with propagating photons. Our proposal requires coherent state laser or microwave inputs and homodyne detection at the scatterer's output, and provides simultaneous information about multiple segments of the scattering matrix, both elastic and inelastic. The method is resilient to detector noise and its errors can be made arbitrarily small by combining experiments at various laser powers. Finally, we show that the tomography of scattering has to be performed using pulsed lasers to efficiently gather information about the nonlinear processes in the scatterer.
NASA Technical Reports Server (NTRS)
Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)
2008-01-01
A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.
Segmentation of kidney using C-V model and anatomy priors
NASA Astrophysics Data System (ADS)
Lu, Jinghua; Chen, Jie; Zhang, Juan; Yang, Wenjia
2007-12-01
This paper presents an approach for kidney segmentation on abdominal CT images as the first step of a virtual reality surgery system. Segmentation of medical images is often challenging because of the objects' complicated anatomical structures, varying gray levels, and unclear edges. A coarse-to-fine approach has been applied to kidney segmentation using the Chan-Vese model (C-V model) and anatomical prior knowledge. In the pre-processing stage, the candidate kidney regions are located. Then the C-V model, formulated with the level set method, is applied within these smaller ROIs, which reduces the computational complexity to a certain extent. Finally, after some mathematical morphology procedures, the specified kidney structures are extracted interactively with prior knowledge. The satisfying results on abdominal CT series show that the proposed approach keeps all the advantages of the C-V model while overcoming its disadvantages.
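As a rough illustration of the level-set step described above (not the authors' implementation), the sketch below assumes scikit-image is available, applies its Chan-Vese segmentation to a cropped kidney ROI, and cleans the result with simple morphology; the function name and parameters are illustrative choices.

```python
# Minimal sketch: Chan-Vese level-set segmentation on a 2D ROI, followed by
# morphological clean-up, using scikit-image.
import numpy as np
from skimage.segmentation import chan_vese
from skimage.morphology import binary_opening, remove_small_objects, disk

def segment_kidney_roi(ct_slice_roi):
    """ct_slice_roi: 2D array cropped around a candidate kidney region."""
    img = (ct_slice_roi - ct_slice_roi.min()) / (np.ptp(ct_slice_roi) + 1e-9)
    mask = chan_vese(img, mu=0.25, lambda1=1.0, lambda2=1.0, tol=1e-3)
    mask = binary_opening(mask, disk(2))            # smooth the contour
    mask = remove_small_objects(mask, min_size=100) # drop small spurious regions
    return mask
```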
Focal liver lesions segmentation and classification in nonenhanced T2-weighted MRI.
Gatos, Ilias; Tsantis, Stavros; Karamesini, Maria; Spiliopoulos, Stavros; Karnabatidis, Dimitris; Hazle, John D; Kagadis, George C
2017-07-01
To automatically segment and classify focal liver lesions (FLLs) on nonenhanced T2-weighted magnetic resonance imaging (MRI) scans using a computer-aided diagnosis (CAD) algorithm. 71 FLLs (30 benign lesions, 19 hepatocellular carcinomas, and 22 metastases) on T2-weighted MRI scans were delineated by the proposed CAD scheme. The FLL segmentation procedure involved wavelet multiscale analysis to extract accurate edge information and mean intensity values for consecutive edges computed using horizontal and vertical analysis that were fed into the subsequent fuzzy C-means algorithm for final FLL border extraction. Texture information for each extracted lesion was derived using 42 first- and second-order textural features from grayscale value histogram, co-occurrence, and run-length matrices. Twelve morphological features were also extracted to capture any shape differentiation between classes. Feature selection was performed with stepwise multilinear regression analysis that led to a reduced feature subset. A multiclass Probabilistic Neural Network (PNN) classifier was then designed and used for lesion classification. PNN model evaluation was performed using the leave-one-out (LOO) method and receiver operating characteristic (ROC) curve analysis. The mean overlap between the automatically segmented FLLs and the manual segmentations performed by radiologists was 0.91 ± 0.12. The highest classification accuracies in the PNN model for the benign, hepatocellular carcinoma, and metastatic FLLs were 94.1%, 91.4%, and 94.1%, respectively, with sensitivity/specificity values of 90%/97.3%, 89.5%/92.2%, and 90.9%/95.6% respectively. The overall classification accuracy for the proposed system was 90.1%. Our diagnostic system using sophisticated FLL segmentation and classification algorithms is a powerful tool for routine clinical MRI-based liver evaluation and can be a supplement to contrast-enhanced MRI to prevent unnecessary invasive procedures. © 2017 American Association of Physicists in Medicine.
Akbar, Saleem; Dhar, Shabir A.
2008-01-01
To assess the efficacy and feasibility of vertebroplasty and posterior short-segment pedicle screw fixation for the treatment of traumatic lumbar burst fractures. Short-segment pedicle screw instrumentation is a well-described technique to reduce and stabilize thoracic and lumbar spine fractures. It is a relatively easy procedure but can only indirectly reduce a fractured vertebral body, and the means of augmenting the anterior column are limited. Hardware failure and a loss of reduction are recognized complications caused by insufficient anterior column support. Patients with traumatic lumbar burst fractures without neurologic deficits were included. After short-segment posterior reduction and fixation, bilateral transpedicular reduction of the endplate was performed using a balloon, and polymethyl methacrylate cement was injected. Pre-operative and post-operative central and anterior heights were assessed with radiographs and MRI. Sixteen patients underwent this procedure, and a substantial reduction of the endplates could be achieved with the technique. All patients recovered uneventfully, and the neurologic examination revealed no deficits. The post-operative radiographs and magnetic resonance images demonstrated a good fracture reduction and filling of the bone defect without unwarranted bone displacement. The central and anterior height of the vertebral body could be restored to 72% and 82% of the estimated intact height, respectively. Complications were cement leakage in three cases without clinical implications and one superficial wound infection. Posterior short-segment pedicle fixation in conjunction with balloon vertebroplasty seems to be a feasible option in the management of lumbar burst fractures, thereby addressing all three columns through a single approach. Although cement leakage occurred, it had no clinical consequences or neurological deficit. PMID:18193300
Advanced Dispersed Fringe Sensing Algorithm for Coarse Phasing Segmented Mirror Telescopes
NASA Technical Reports Server (NTRS)
Spechler, Joshua A.; Hoppe, Daniel J.; Sigrist, Norbert; Shi, Fang; Seo, Byoung-Joon; Bikkannavar, Siddarayappa A.
2013-01-01
Segment mirror phasing, a critical step of segment mirror alignment, requires the ability to sense and correct the relative pistons between segments from up to a few hundred microns down to a fraction of a wavelength in order to bring the mirror system to its full diffraction capability. When sampling the aperture of a telescope, using auto-collimating flats (ACFs) is more economical. The performance of a telescope with a segmented primary mirror strongly depends on how well those primary mirror segments can be phased. One such process to phase primary mirror segments in the axial piston direction is dispersed fringe sensing (DFS). DFS technology can be used to co-phase the ACFs. DFS is essentially a signal fitting and processing operation and an elegant method of coarse phasing segmented mirrors. DFS accuracy depends on careful calibration of the system as well as other factors such as internal optical alignment, system wavefront errors, and detector quality. Novel improvements to the algorithm have led to substantial enhancements in DFS performance. The Advanced Dispersed Fringe Sensing (ADFS) algorithm is designed to reduce the sensitivity to calibration errors by determining the optimal fringe extraction line. Applying an angular extraction-line dithering procedure, combining this dithering process with an error function, and minimizing the phase term of the fitted signal in essence defines the ADFS algorithm.
Segmentation-less Digital Rock Physics
NASA Astrophysics Data System (ADS)
Tisato, N.; Ikeda, K.; Goldfarb, E. J.; Spikes, K. T.
2017-12-01
In the last decade, Digital Rock Physics (DRP) has become an avenue to investigate physical and mechanical properties of geomaterials. DRP offers the advantage of simulating laboratory experiments on numerical samples that are obtained from analytical methods. Potentially, DRP could spare part of the time and resources allocated to complicated laboratory tests. Like classic laboratory tests, the goal of DRP is to accurately estimate physical properties of rocks such as hydraulic permeability or elastic moduli. Nevertheless, the physical properties of samples imaged using micro-computed tomography (μCT) are estimated through segmentation of the μCT dataset. Segmentation proves to be a challenging and arbitrary procedure that typically leads to inaccurate estimates of physical properties. Here we present a novel technique to extract physical properties from a μCT dataset without the use of segmentation. We show examples in which we use the segmentation-less method to simulate elastic wave propagation and pressure wave diffusion to estimate elastic properties and permeability, respectively. The proposed method takes advantage of effective medium theories and uses the density and the porosity that are measured in the laboratory to constrain the results. We discuss the results and highlight that segmentation-less DRP is more accurate than segmentation-based DRP approaches and theoretical modeling for the studied rock. In conclusion, the segmentation-less approach presented here seems to be a promising method to improve accuracy and to ease the overall workflow of DRP.
A Novel Face-on-Face Contact Method for Nonlinear Solid Mechanics
NASA Astrophysics Data System (ADS)
Wopschall, Steven Robert
The implicit solution to contact problems in nonlinear solid mechanics poses many difficulties. Traditional node-to-segment methods may suffer from locking and experience contact force chatter in the presence of sliding. More recent developments include mortar-based methods, which resolve local contact interactions over face-pairs and feature a kinematic constraint in integral form that smooths contact behavior, especially in the presence of sliding. These methods have been shown to perform well in the presence of geometric nonlinearities and are demonstrably more robust than node-to-segment methods. These methods are typically biased, however, interpolating contact tractions and gap equations on a designated non-mortar face, which leads to an asymmetry in the formulation. Another challenge is constraint enforcement. The general selection of the active set of constraints is fraught with difficulty, often leading to non-physical solutions and easily resulting in missed face-pair interactions. Details on reliable constraint enforcement methods are lacking in the greater contact literature. This work presents an unbiased contact formulation utilizing a median-plane methodology. Up to linear polynomials are used for the discrete pressure representation, and integral gap constraints are enforced using a novel subcycling procedure. This procedure reliably determines the active set of contact constraints, leading to physical and kinematically admissible solutions free of heuristics and user action. The contact method presented herein successfully solves difficult quasi-static contact problems in the implicit computational setting. These problems feature finite deformations, material nonlinearity, and complex interface geometries, all of which are challenging characteristics for contact implementations and constraint enforcement algorithms. The subcycling procedure is a key feature of this method, handling active constraint selection for complex interfaces and mesh geometries.
Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model
NASA Astrophysics Data System (ADS)
Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.
2009-04-01
The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on a Hidden Markov Model (HMM) and a Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial condition, a Genetic Algorithm was developed. This algorithm is characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. it is the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this last issue is complex and influences the whole analysis, a Multi-Response Permutation Procedure (MRPP; Mielke et al., 1981) was added: it tests the model with K+1 states (where K is the number of states of the best model) when its likelihood is close to that of the K-state model. Finally, an evaluation of the GAMM performance, applied as a break detection method in the field of climate time series homogenization, is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.
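A minimal sketch of the HMM fitting-and-decoding core of such a scheme is shown below, assuming the third-party hmmlearn package; the genetic-algorithm initialization described above is replaced here by simple random restarts, so this illustrates the idea rather than GAMM itself.

```python
# Minimal sketch: fit a Gaussian HMM to a 1D series and use the Viterbi state
# sequence as the segmentation; several random restarts stand in for the GA.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def hmm_segment(series, n_states, n_restarts=10):
    X = np.asarray(series, dtype=float).reshape(-1, 1)
    best_model, best_ll = None, -np.inf
    for seed in range(n_restarts):
        model = GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=200, random_state=seed)
        model.fit(X)                     # Baum-Welch (EM) estimation
        ll = model.score(X)              # log-likelihood of the fitted model
        if ll > best_ll:
            best_model, best_ll = model, ll
    return best_model.predict(X)         # Viterbi decoding: one state label per point
```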
Orcutt, Sonia T.; Kobayashi, Katsuhiro; Sultenfuss, Mark; Hailey, Brian S.; Sparks, Anthony; Satpathy, Bighnesh; Anaya, Daniel A.
2016-01-01
Preoperative portal vein embolization (PVE) is used to extend the indications for major hepatic resection, and it has become the standard of care for selected patients with hepatic malignancies treated at major hepatobiliary centers. To date, various techniques with different embolic materials have been used with similar results in the degree of liver hypertrophy. Regardless of the specific strategy used, both surgeons and interventional radiologists must be familiar with each other’s techniques to be able to create the optimal plan for each individual patient. Knowledge of the segmental anatomy of the liver is paramount to fully understand the liver segments that need to be embolized and resected. Understanding the portal vein anatomy and the branching variations, along with the techniques used to transect the portal vein during hepatic resection, is important because these variables can affect the PVE procedure and the eventual surgical resection. Comprehension of the advantages and disadvantages of approaches to the portal venous system and the various embolic materials used for PVE is essential to best tailor the procedures for each patient and to avoid complications. Before PVE, meticulous assessment of the portal vein branching anatomy is performed with cross-sectional imaging, and embolization strategies are developed based on the patient’s anatomy. The PVE procedure consists of several technical steps, and knowledge of these technical tips, potential complications, and how to avoid the complications in each step is of great importance for safe and successful PVE and ultimately successful hepatectomy. Because PVE is used as an adjunct to planned hepatic resection, priority must always be placed on safety, without compromising the integrity of the future liver remnant, and close collaboration between interventional radiologists and hepatobiliary surgeons is essential to achieve successful outcomes. PMID:27014696
Miyasaka, Masaki; Tada, Norio; Kato, Shigeaki; Kami, Masahiro; Horie, Kazunori; Honda, Taku; Takizawa, Kaname; Otomo, Tatsushi; Inoue, Naoto
2016-05-01
The aim of this study was to assess the safety and efficacy of sheathless guide catheters in transradial percutaneous coronary intervention (PCI) for ST-segment elevation myocardial infarction (STEMI). Transradial PCI for STEMI offers significant clinical benefits, including a reduced incidence of vascular complications. As the radial artery is small, it is frequently damaged when large-bore catheters are used in this procedure. A sheathless guide catheter offers a solution to this problem as it does not require an introducer sheath. However, the efficacy and safety of sheathless guide catheters remain to be fully determined in emergent transradial PCI for STEMI. Data on consecutive STEMI patients undergoing primary PCI at the Sendai Kousei Hospital between September 2010 and May 2013 were analyzed. The primary endpoint was the rate of acute procedural success without access site crossover. Secondary endpoints included door-to-balloon time, fluoroscopy time, volume of contrast, and radial artery stenosis or occlusion rate. We conducted transradial PCI for 478 patients with STEMI using a sheathless guide catheter. Acute procedural success was achieved in 466 patients (97.5%). The median door-to-balloon time was 45 min (range, 15-317 min). The median fluoroscopy time was 16.4 min (range, 10-90 min). The median volume of contrast was 134 mL (range, 31-431 mL). Radial stenosis or occlusion developed in 14 (3.8%) of the 370 evaluable patients. This study showed that use of a sheathless guide catheter taking a transradial approach was effective and safe in primary PCI for STEMI. © 2015 Wiley Periodicals, Inc.
Computer-Mediated Assessment of Intelligibility in Aphasia and Apraxia of Speech
Haley, Katarina L.; Roth, Heidi; Grindstaff, Enetta; Jacks, Adam
2011-01-01
Background Previous work indicates that single word intelligibility tests developed for dysarthria are sensitive to segmental production errors in aphasic individuals with and without apraxia of speech. However, potential listener learning effects and difficulties adapting elicitation procedures to coexisting language impairments limit their applicability to left hemisphere stroke survivors. Aims The main purpose of this study was to examine basic psychometric properties for a new monosyllabic intelligibility test developed for individuals with aphasia and/or AOS. A related purpose was to examine clinical feasibility and potential to standardize a computer-mediated administration approach. Methods & Procedures A 600-item monosyllabic single word intelligibility test was constructed by assembling sets of phonetically similar words. Custom software was used to select 50 target words from this test in a pseudo-random fashion and to elicit and record production of these words by 23 speakers with aphasia and 20 neurologically healthy participants. To evaluate test-retest reliability, two identical sets of 50-word lists were elicited by requesting repetition after a live speaker model. To examine the effect of a different word set and auditory model, an additional set of 50 different words was elicited with a pre-recorded model. The recorded words were presented to normal-hearing listeners for identification via orthographic and multiple-choice response formats. To examine construct validity, production accuracy for each speaker was estimated via phonetic transcription and rating of overall articulation. Outcomes & Results Recording and listening tasks were completed in less than six minutes for all speakers and listeners. Aphasic speakers were significantly less intelligible than neurologically healthy speakers and displayed a wide range of intelligibility scores. Test-retest and inter-listener reliability estimates were strong. No significant difference was found in scores based on recordings from a live model versus a pre-recorded model, but some individual speakers favored the live model. Intelligibility test scores correlated highly with segmental accuracy derived from broad phonetic transcription of the same speech sample and a motor speech evaluation. Scores correlated moderately with rated articulation difficulty. Conclusions We describe a computerized, single-word intelligibility test that yields clinically feasible, reliable, and valid measures of segmental speech production in adults with aphasia. This tool can be used in clinical research to facilitate appropriate participant selection and to establish matching across comparison groups. For a majority of speakers, elicitation procedures can be standardized by using a pre-recorded auditory model for repetition. This assessment tool has potential utility for both clinical assessment and outcomes research. PMID:22215933
Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras
Morris, Mark; Sellers, William I.
2015-01-01
Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include regression models, which have limited accuracy; geometric models with lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778
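The convex-hulling step lends itself to a very short implementation; the following sketch (not the authors' code) assumes each body segment is already an array of 3D points in metres and a uniform tissue density, and uses SciPy's ConvexHull to obtain a segment volume plus a rough mass and centroid.

```python
# Minimal sketch: per-segment mass and an approximate centre of mass from the
# convex hull of a segment's point cloud, assuming uniform density (kg/m^3).
import numpy as np
from scipy.spatial import ConvexHull

def segment_mass_and_com(segment_points, density=1000.0):
    """segment_points: (N, 3) array of 3D points for one body segment, in metres."""
    pts = np.asarray(segment_points, dtype=float)
    hull = ConvexHull(pts)
    mass = density * hull.volume                 # kg, from hull volume (m^3)
    com = pts[hull.vertices].mean(axis=0)        # rough centroid of the hull vertices
    return mass, com
```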
Automatic MRI 2D brain segmentation using graph searching technique.
Pedoia, Valentina; Binaghi, Elisabetta
2013-09-01
Accurate and efficient segmentation of the whole brain in magnetic resonance (MR) images is a key task in many neuroscience and medical studies, either because the whole brain is the final anatomical structure of interest or because the automatic extraction facilitates further analysis. The problem of segmenting brain MRI images has been extensively addressed by many researchers. Despite the relevant achievements obtained, automated segmentation of brain MRI imagery is still a challenging problem whose solution has to cope with critical aspects such as anatomical variability and pathological deformation. In the present paper, we describe and experimentally evaluate a method for segmenting the brain from MRI images based on two-dimensional graph searching principles for border detection. The segmentation of the whole brain over the entire volume is accomplished slice by slice, automatically detecting slices that include the eyes. The method is fully automatic and easily reproducible, computing the main internal parameters directly from the image data. The segmentation procedure is conceived as a tool of general applicability, although design requirements are especially commensurate with the accuracy required in clinical tasks such as surgical planning and post-surgical assessment. Several experiments were performed to assess the performance of the algorithm on a varied set of MRI images, obtaining good results in terms of accuracy and stability. Copyright © 2012 John Wiley & Sons, Ltd.
Tracking cells in Life Cell Imaging videos using topological alignments.
Mosig, Axel; Jäger, Stefan; Wang, Chaofeng; Nath, Sumit; Ersoy, Ilker; Palaniappan, Kannappan; Chen, Su-Shing
2009-07-16
With the increasing availability of live cell imaging technology, tracking cells and other moving objects in live cell videos has become a major challenge for bioimage informatics. An inherent problem for most cell tracking algorithms is over- or under-segmentation of cells - many algorithms tend to recognize one cell as several cells or vice versa. We propose to approach this problem through so-called topological alignments, which we apply to address the problem of linking segmentations of two consecutive frames in the video sequence. Starting from the output of a conventional segmentation procedure, we align pairs of consecutive frames through assigning sets of segments in one frame to sets of segments in the next frame. We achieve this through finding maximum weighted solutions to a generalized "bipartite matching" between two hierarchies of segments, where we derive weights from relative overlap scores of convex hulls of sets of segments. For solving the matching task, we rely on an integer linear program. Practical experiments demonstrate that the matching task can be solved efficiently in practice, and that our method is both effective and useful for tracking cells in data sets derived from a so-called Large Scale Digital Cell Analysis System (LSDCAS). The source code of the implementation is available for download from http://www.picb.ac.cn/patterns/Software/topaln.
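As a simplified illustration of the frame-linking step (the paper itself solves a more general set-to-set matching over segment hierarchies with an integer linear program), the sketch below scores pairs of segments in two consecutive frames by relative overlap and finds a one-to-one assignment with SciPy; all names are illustrative.

```python
# Minimal sketch: link segments of frame t to frame t+1 by maximising total
# relative overlap (intersection over union) with a one-to-one assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(masks_t, masks_t1):
    """masks_t, masks_t1: lists of boolean arrays (segment masks of two frames)."""
    overlap = np.zeros((len(masks_t), len(masks_t1)))
    for i, a in enumerate(masks_t):
        for j, b in enumerate(masks_t1):
            inter = np.logical_and(a, b).sum()
            union = np.logical_or(a, b).sum()
            overlap[i, j] = inter / union if union else 0.0
    rows, cols = linear_sum_assignment(-overlap)   # negate to maximise overlap
    return [(i, j) for i, j in zip(rows, cols) if overlap[i, j] > 0]
```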
Narayan, Nikhil S; Marziliano, Pina
2015-08-01
Automatic detection and segmentation of the common carotid artery in transverse ultrasound (US) images of the thyroid gland play a vital role in the success of US-guided intervention procedures. We propose in this paper a novel method to accurately detect, segment and track the carotid in 2D and 2D+t US images of the thyroid gland using concepts based on tissue echogenicity and ultrasound image formation. We first segment the hypoechoic anatomical regions of interest using local phase and energy in the input image. We then make use of a Hessian-based blob-like analysis to detect the carotid within the segmented hypoechoic regions. The carotid artery is segmented by making use of a least-squares ellipse fit for the edge points around the detected carotid candidate. Experiments performed on a multivendor dataset of 41 images show that the proposed algorithm can segment the carotid artery with high sensitivity (99.6 ± 0.2%) and specificity (92.9 ± 0.1%). Further experiments on a public database containing 971 images of the carotid artery showed that the proposed algorithm can achieve a detection accuracy of 95.2% with a 2% increase in performance when compared to the state-of-the-art method.
A statistical pixel intensity model for segmentation of confocal laser scanning microscopy images.
Calapez, Alexandre; Rosa, Agostinho
2010-09-01
Confocal laser scanning microscopy (CLSM) has been widely used in the life sciences for the characterization of cell processes because it allows the recording of the distribution of fluorescence-tagged macromolecules on a section of the living cell. It is in fact the cornerstone of many molecular transport and interaction quantification techniques where the identification of regions of interest through image segmentation is usually a required step. In many situations, because of the complexity of the recorded cellular structures or because of the amounts of data involved, image segmentation either is too difficult or inefficient to be done by hand and automated segmentation procedures have to be considered. Given the nature of CLSM images, statistical segmentation methodologies appear as natural candidates. In this work we propose a model to be used for statistical unsupervised CLSM image segmentation. The model is derived from the CLSM image formation mechanics and its performance is compared to the existing alternatives. Results show that it provides a much better description of the data on classes characterized by their mean intensity, making it suitable not only for segmentation methodologies with known number of classes but also for use with schemes aiming at the estimation of the number of classes through the application of cluster selection criteria.
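For comparison, a generic unsupervised intensity-mixture segmentation (not the CLSM-specific model proposed by the authors) can be written in a few lines; the sketch below assumes scikit-learn is available and labels classes in order of increasing mean intensity.

```python
# Generic stand-in: segment pixels by intensity with a Gaussian mixture model,
# each class characterised by its mean intensity.
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_by_intensity(image, n_classes):
    pixels = image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_classes, random_state=0).fit(pixels)
    labels = gmm.predict(pixels).reshape(image.shape)
    # Relabel so that the class index increases with the component mean intensity.
    order = np.argsort(gmm.means_.ravel())
    return np.argsort(order)[labels]
```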
NASA Astrophysics Data System (ADS)
Poon, Eric; Thondapu, Vikas; Barlis, Peter; Ooi, Andrew
2017-11-01
Coronary artery disease remains a major cause of mortality in developed countries, and is most often due to a localized flow-limiting stenosis, or narrowing, of coronary arteries. Patients often undergo invasive procedures such as X-ray angiography and fractional flow reserve to diagnose flow-limiting lesions. Even though such diagnostic techniques are well-developed, the effects of diseased coronary segments on local flow are still poorly understood. Therefore, this study investigated the effect of irregular geometries of diseased coronary segments on the macro-recirculation and local pressure minimum regions. We employed an idealized coronary artery model with a 75% diameter stenosis. By systematically adjusting the eccentricity and the asymmetry of the coronary stenosis, we uncovered an increase in macro-recirculation size. Most importantly, the presence of this macro-recirculation signifies a local pressure minimum (identified by the λ2 vortex identification method). This local pressure minimum has a profound effect on the pressure drops in both longitudinal and planar directions, which has implications for diagnosis and treatment of coronary artery disease. Supported by Australian Research Council LP150100233 and National Computational Infrastructure m45.
Spinal Ischemia in Thoracic Aortic Procedures: Impact of Radiculomedullary Artery Distribution.
Kari, Fabian A; Wittmann, Karin; Krause, Sonja; Saravi, Babak; Puttfarcken, Luisa; Förster, Katharina; Rylski, Bartosz; Maier, Sven; Göbel, Ulrich; Siepe, Matthias; Czerny, Martin; Beyersdorf, Friedhelm
2017-12-01
The aim of this study was to assess the influence of thoracic anterior radiculomedullary artery (tARMA) distribution on spinal cord perfusion in a thoracic aortic surgical model. Twenty-six pigs (34 ± 3 kg; study group, n = 20; sham group, n = 6) underwent ligation of the left subclavian artery and thoracic segmental arteries. End points were spinal cord perfusion pressure (SCPP), regional spinal cord blood flow (SCBF), and neurologic outcome with an observation time of 3 hours. tARMA distribution patterns tested for an effect on end points included (1) maximum distance between any 2 tARMAs within the treated aortic segment (0 or 1 segment = small-distance group; >1 segment = large-distance group) and (2) distance between the end of the treated aortic segment and the first distal tARMA (at the level of the distal simulated stent-graft end = group 0; gap of 1 or more segments = group ≥1). The number of tARMAs ranged from 3 to 13 (mean, 8). In the large-distance group, SCBF dropped from 0.48 ± 0.16 mL/g/min to 0.3 ± 0.08 mL/g/min (p < 0.001). We observed no detectable SCBF drop in the small-distance group: 0.2 ± 0.05 mL/g/min at baseline to 0.23 ± 0.05 mL/g/min immediately after clamping (p = 0.147). SCBF increased from 0.201 ± 0.055 mL/g/min at baseline to 0.443 ± 0.051 mL/g/min at 3 hours postoperatively (p < 0.001) only in the small-distance group. We present experimental data showing that distribution patterns of tARMAs correlate with the degree of SCBF drop and insufficient reactive parenchymal hyperemia in aortic procedures. Individual ARMA distribution patterns along the treated aortic segment could help predict the individual risk of spinal ischemia. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Lüddemann, Tobias; Egger, Jan
2016-04-01
Among all types of cancer, gynecological malignancies belong to the fourth most frequent type of cancer among women. In addition to chemotherapy and external beam radiation, brachytherapy is the standard procedure for the treatment of these malignancies. In the progress of treatment planning, localization of the tumor as the target volume and of adjacent organs at risk by segmentation is crucial to accomplish an optimal radiation distribution to the tumor while simultaneously preserving healthy tissue. Segmentation is performed manually and represents a time-consuming task in clinical daily routine. This study focuses on the segmentation of the rectum/sigmoid colon as an organ-at-risk in gynecological brachytherapy. The proposed segmentation method uses an interactive, graph-based segmentation scheme with a user-defined template. The scheme creates a directed two-dimensional graph, followed by the minimal cost closed set computation on the graph, resulting in an outlining of the rectum. The graph's outline is dynamically adapted to the last calculated cut. Evaluation was performed by comparing manual segmentations of the rectum/sigmoid colon to results achieved with the proposed method. The comparison of the algorithmic to manual results yielded a Dice similarity coefficient of [Formula: see text], in comparison to [Formula: see text] for the comparison of two manual segmentations by the same physician. Utilizing the proposed methodology resulted in a median time of [Formula: see text], compared to 300 s needed for pure manual segmentation.
Interactive and scale invariant segmentation of the rectum/sigmoid via user-defined templates
NASA Astrophysics Data System (ADS)
Lüddemann, Tobias; Egger, Jan
2016-03-01
Among all types of cancer, gynecological malignancies belong to the fourth most frequent type of cancer among women. Besides chemotherapy and external beam radiation, brachytherapy is the standard procedure for the treatment of these malignancies. In the progress of treatment planning, localization of the tumor as the target volume and of adjacent organs at risk by segmentation is crucial to accomplish an optimal radiation distribution to the tumor while simultaneously preserving healthy tissue. Segmentation is performed manually and represents a time-consuming task in clinical daily routine. This study focuses on the segmentation of the rectum/sigmoid colon as an organ-at-risk in gynecological brachytherapy. The proposed segmentation method uses an interactive, graph-based segmentation scheme with a user-defined template. The scheme creates a directed two-dimensional graph, followed by the minimal cost closed set computation on the graph, resulting in an outlining of the rectum. The graph's outline is dynamically adapted to the last calculated cut. Evaluation was performed by comparing manual segmentations of the rectum/sigmoid colon to results achieved with the proposed method. The comparison of the algorithmic to manual results yielded a Dice similarity coefficient of 83.85+/-4.08%, in comparison to 83.97+/-8.08% for the comparison of two manual segmentations by the same physician. Utilizing the proposed methodology resulted in a median time of 128 seconds per dataset, compared to 300 seconds needed for pure manual segmentation.
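Both versions of this study evaluate the result with the Dice similarity coefficient against a manual reference segmentation. A minimal sketch of that metric on binary masks follows; the toy masks are hypothetical.

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks: 2*|A∩B| / (|A| + |B|)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# Toy usage: two overlapping rectangular masks standing in for algorithmic and manual outlines.
algorithmic = np.zeros((100, 100), dtype=bool); algorithmic[20:70, 20:70] = True
manual      = np.zeros((100, 100), dtype=bool); manual[25:75, 25:75] = True
print(f"DSC = {dice_coefficient(algorithmic, manual):.3f}")
```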
Choi, Yeon-Ju; Son, Wonsoo; Park, Ki-Su
2016-01-01
Objective This study used the intradural procedural time to assess the overall technical difficulty involved in surgically clipping an unruptured middle cerebral artery (MCA) aneurysm via a pterional or superciliary approach. The clinical and radiological variables affecting the intradural procedural time were investigated, and the intradural procedural time was compared between a superciliary keyhole approach and a pterional approach. Methods During a 5.5-year period, patients with a single MCA aneurysm were enrolled in this retrospective study. The selection criteria for a superciliary keyhole approach included: 1) maximum diameter of the unruptured MCA aneurysm <15 mm, 2) neck diameter of the MCA aneurysm <10 mm, and 3) aneurysm location involving the sphenoidal or horizontal (M1) segment of the MCA or the MCA bifurcation, excluding aneurysms distal to the MCA genu. Meanwhile, the control comparison group included patients with the same selection criteria as for a superciliary approach, yet who preferred a pterional approach to avoid a postoperative facial wound or due to preoperative skin trouble in the supraorbital area. To determine the variables affecting the intradural procedural time, a multiple regression analysis was performed using such data as the patient age and gender, maximum aneurysm diameter, aneurysm neck diameter, and length of the pre-aneurysm M1 segment. In addition, the intradural procedural times were compared between the superciliary and pterional patient groups, along with the other variables. Results A total of 160 patients underwent a superciliary (n=124) or pterional (n=36) approach for an unruptured MCA aneurysm. In the multiple regression analysis, an increase in the diameter of the aneurysm neck (p<0.001) was identified as a statistically significant factor increasing the intradural procedural time. A Pearson correlation analysis also showed a positive correlation (r=0.340) between the neck diameter and the intradural procedural time. When comparing the superciliary and pterional groups, no statistically significant between-group difference was found in terms of the intradural procedural time reflecting the technical difficulty (mean ± standard deviation: 29.8±13.0 min versus 27.7±9.6 min). Conclusion A superciliary keyhole approach can be a useful alternative to a pterional approach for an unruptured MCA aneurysm with a maximum diameter <15 mm and neck diameter <10 mm, and poses no greater technical challenge. For both surgical approaches, the technical difficulty increases along with the neck diameter of the MCA aneurysm. PMID:27847568
Measured noise reductions resulting from modified approach procedures for business jet aircraft
NASA Technical Reports Server (NTRS)
Burcham, F. W., Jr.; Putnam, T. W.; Lasagna, P. L.; Parish, O. O.
1975-01-01
Five business jet airplanes were flown to determine the noise reductions that result from the use of modified approach procedures. The airplanes tested were a Gulfstream 2, JetStar, Hawker Siddeley 125-400, Sabreliner-60 and LearJet-24. Noise measurements were made 3, 5, and 7 nautical miles from the touchdown point. In addition to a standard 3 deg glide slope approach, a 4 deg glide slope approach, a 3 deg glide slope approach in a low-drag configuration, and a two-segment approach were flown. It was found that the 4 deg approach was about 4 EPNdB quieter than the standard 3 deg approach. Noise reductions for the low-drag 3 deg approach varied widely among the airplanes tested, with an average of 8.5 EPNdB on a fleet-weighted basis. The two-segment approach resulted in noise reductions of 7 to 8 EPNdB at 3 and 5 nautical miles from touchdown, but only 3 EPNdB at 7 nautical miles from touchdown when the airplanes were still in level flight prior to glide slope intercept. Pilot ratings showed progressively increasing workload for the 4 deg, low-drag 3 deg, and two-segment approaches.
Liver hanging maneuver for right hemiliver in situ donation – anatomical considerations
Gadžijev, E.M.; Ravnik, D.; Hribernik, M.
2006-01-01
Background. An anatomical study was carried out to evaluate the safety of the liver hanging maneuver for the right hemiliver in living donor and in situ splitting transplantation. During this procedure a 4–6 cm blind dissection is performed between the inferior vena cava and the liver. Short subhepatic veins entering the inferior vena cava from segments 1 and 9 could be torn with consequent hemorrhage. Materials and methods. One hundred corrosive casts of livers were evaluated to establish the position and diameter of short subhepatic veins and the inferior right hepatic vein. Results. The average distance from the right border of the inferior vena cava to the opening of segment 1 veins was 16.7±3.4 mm and to the entrance of segment 9 veins was 5.0±0.5 mm. The width of the narrowest point on the route of blind dissection was determined, with the average value being 8.7±2.3 mm (range 2–15 mm). Discussion. The results show that the liver hanging maneuver is a safe procedure. A proposed route of dissection minimizes the risk of disrupting short subhepatic veins (7%). PMID:18333236
Robust and efficient fiducial tracking for augmented reality in HD-laparoscopic video streams
NASA Astrophysics Data System (ADS)
Mueller, M.; Groch, A.; Baumhauer, M.; Maier-Hein, L.; Teber, D.; Rassweiler, J.; Meinzer, H.-P.; Wegner, I.
2012-02-01
Augmented Reality (AR) is a convenient way of porting information from medical images into the surgical field of view and can deliver valuable assistance to the surgeon, especially in laparoscopic procedures. In addition, high definition (HD) laparoscopic video devices are a great improvement over the previously used low resolution equipment. However, in AR applications that rely on real-time detection of fiducials from video streams, the demand for efficient image processing has increased due to the introduction of HD devices. We present an algorithm based on the well-known Conditional Density Propagation (CONDENSATION) algorithm which can satisfy these new demands. By incorporating a prediction around an already existing and robust segmentation algorithm, we can speed up the whole procedure while leaving the robustness of the fiducial segmentation untouched. For evaluation purposes we tested the algorithm on recordings from real interventions, allowing for a meaningful interpretation of the results. Our results show that we can accelerate the segmentation by a factor of 3.5 on average. Moreover, the prediction information can be used to compensate for fiducials that are temporarily occluded or out of scope, providing greater stability.
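The speed-up described here comes from wrapping a prediction step (CONDENSATION, a particle filter) around an existing fiducial segmentation, so the detector only searches near the predicted position and occluded fiducials can be bridged. The sketch below is a minimal, generic particle filter for a single 2D fiducial position, not the authors' implementation; the motion and measurement models and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def condensation_step(particles, weights, measurement, motion_std=3.0, meas_std=5.0):
    """One CONDENSATION iteration for a 2D fiducial position.
    particles: (N, 2) candidate positions; weights: (N,) normalized weights;
    measurement: (2,) fiducial centre from the segmentation, or None if occluded."""
    n = len(particles)
    # 1) Resample proportional to the previous weights (factored sampling).
    particles = particles[rng.choice(n, size=n, p=weights)]
    # 2) Predict: propagate with a simple random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # 3) Measure: reweight by the likelihood of the segmented fiducial position.
    if measurement is None:            # fiducial occluded or out of scope
        weights = np.full(n, 1.0 / n)  # keep the prediction only
    else:
        d2 = np.sum((particles - measurement) ** 2, axis=1)
        weights = np.exp(-0.5 * d2 / meas_std**2)
        weights /= weights.sum()
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate

# Toy usage: track a fiducial moving to the right, with one occluded frame.
particles = rng.normal([50, 50], 5, (200, 2))
weights = np.full(200, 1 / 200)
for t, meas in enumerate([[52, 50], [55, 51], None, [61, 52]]):
    meas = None if meas is None else np.asarray(meas, float)
    particles, weights, est = condensation_step(particles, weights, meas)
    print(f"frame {t}: estimate {est.round(1)}")
```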
Richardson, Sunil; Krishna, Shreya; Bansal, Avi
2017-12-01
The study was designed to evaluate the efficacy of performing a second, repeat anterior maxillary distraction (AMD) to treat residual cleft maxillary hypoplasia. Five patients between the ages of 12 and 15 years with a history of AMD and with residual cleft maxillary hypoplasia were included in the study. Inclusion was irrespective of gender, type of cleft lip and palate, and the amount of advancement needed. Repeat AMD was executed in these patients 4 to 5 years after the primary AMD procedure to correct the cleft maxillary hypoplasia that had developed since the initial procedure. Orthopantomogram (OPG) and lateral cephalograms were taken for evaluation preoperatively, immediately after distraction, after consolidation, and one year postoperatively. The data obtained were tabulated and a Mann-Whitney U-test was used for statistical comparisons. At the time of presentation, a residual maxillary hypoplasia was observed with a well-maintained distraction gap on the OPG, which ruled out the occurrence of a relapse. Favorable movement of the segments without any resistance was seen in all patients. A mean maxillary advancement of 10.56 mm was achieved at repeat AMD. Statistically significant increases in midfacial length, SNA angle, and nasion perpendicular to point A distance were achieved (P = 0.012, P = 0.011, and P = 0.012, respectively). A good profile was achieved for all patients. Minimal transient complications, for example anterior open bite and bleeding episodes, were managed. Addressing the problem of cleft maxillary hypoplasia at an early age (12-15 years) is beneficial for the child. Residual hypoplasia may develop in some patients, which may require additional corrective procedures. The results of our study show that AMD can be repeated when residual deformity develops, with the previous procedure having no negative impact on the results of the repeat procedure.
Development of the segment alignment maintenance system (SAMS) for the Hobby-Eberly Telescope
NASA Astrophysics Data System (ADS)
Booth, John A.; Adams, Mark T.; Ames, Gregory H.; Fowler, James R.; Montgomery, Edward E.; Rakoczy, John M.
2000-07-01
A sensing and control system for maintaining optical alignment of ninety-one 1-meter mirror segments forming the Hobby-Eberly Telescope (HET) primary mirror array is now under development. The Segment Alignment Maintenance System (SAMS) is designed to sense relative shear motion between each segment edge pair and calculate individual segment tip, tilt, and piston position errors. Error information is sent to the HET primary mirror control system, which corrects the physical position of each segment as often as once per minute. Development of SAMS is required to meet optical image quality specifications for the telescope. Segment misalignment over time is thought to be due to thermal inhomogeneity within the steel mirror support truss. Challenging problems of sensor resolution, dynamic range, mechanical mounting, calibration, stability, robust algorithm development, and system integration must be overcome to achieve a successful operational solution.
Turner, Alex; Subramanian, Ramnath; Thomas, David F M; Hinley, Jennifer; Abbas, Syed Khawar; Stahlschmidt, Jens; Southgate, Jennifer
2011-03-01
Enterocystoplasty is associated with serious complications resulting from the chronic interaction between intestinal epithelium and urine. Composite cystoplasty is proposed as a means of overcoming these complications by substituting intestinal epithelium with tissue-engineered autologous urothelium. To develop a robust surgical procedure for composite cystoplasty and to determine if outcome is improved by transplantation of a differentiated urothelium. Bladder augmentation with in vitro-generated autologous tissues was performed in 11 female Large-White hybrid pigs in a well-equipped biomedical centre with operating facilities. Participants were a team comprising scientists, urologists, a veterinary surgeon, and a histopathologist. Urothelium harvested by open biopsy was expanded in culture and used to develop sheets of nondifferentiated or differentiated urothelium. The sheets were transplanted onto a vascularised, de-epithelialised, seromuscular colonic segment at the time of bladder augmentation. After removal of catheters and balloon at two weeks, voiding behaviour was monitored and animals were sacrificed at 3 months for immunohistology. Eleven pigs underwent augmentation, but four were lost to complications. Voiding behaviour was normal in the remainder. At autopsy, reconstructed bladders were healthy, lined by confluent urothelium, and showed no fibrosis, mucus, calculi, or colonic regrowth. Urothelial morphology was transitional with variable columnar attributes consistent between native and augmented segments. Bladders reconstructed with differentiated cell sheets had fewer lymphocytes infiltrating the lamina propria, indicating more effective urinary barrier function. The study endorses the potential for composite cystoplasty by (1) successfully developing reliable techniques for transplanting urothelium onto a prepared, vascularised, smooth muscle segment and (2) creating a functional urothelium-lined augmentation to overcome the complications of conventional enterocystoplasty. Copyright © 2010 European Association of Urology. Published by Elsevier B.V. All rights reserved.
Turner, Alex; Subramanian, Ramnath; Thomas, David F.M.; Hinley, Jennifer; Abbas, Syed Khawar; Stahlschmidt, Jens; Southgate, Jennifer
2011-01-01
Background Enterocystoplasty is associated with serious complications resulting from the chronic interaction between intestinal epithelium and urine. Composite cystoplasty is proposed as a means of overcoming these complications by substituting intestinal epithelium with tissue-engineered autologous urothelium. Objective To develop a robust surgical procedure for composite cystoplasty and to determine if outcome is improved by transplantation of a differentiated urothelium. Design, setting, and participants Bladder augmentation with in vitro–generated autologous tissues was performed in 11 female Large-White hybrid pigs in a well-equipped biomedical centre with operating facilities. Participants were a team comprising scientists, urologists, a veterinary surgeon, and a histopathologist. Measurements Urothelium harvested by open biopsy was expanded in culture and used to develop sheets of nondifferentiated or differentiated urothelium. The sheets were transplanted onto a vascularised, de-epithelialised, seromuscular colonic segment at the time of bladder augmentation. After removal of catheters and balloon at two weeks, voiding behaviour was monitored and animals were sacrificed at 3 months for immunohistology. Results and limitations Eleven pigs underwent augmentation, but four were lost to complications. Voiding behaviour was normal in the remainder. At autopsy, reconstructed bladders were healthy, lined by confluent urothelium, and showed no fibrosis, mucus, calculi, or colonic regrowth. Urothelial morphology was transitional with variable columnar attributes consistent between native and augmented segments. Bladders reconstructed with differentiated cell sheets had fewer lymphocytes infiltrating the lamina propria, indicating more effective urinary barrier function. Conclusions The study endorses the potential for composite cystoplasty by (1) successfully developing reliable techniques for transplanting urothelium onto a prepared, vascularised, smooth muscle segment and (2) creating a functional urothelium-lined augmentation to overcome the complications of conventional enterocystoplasty. PMID:21195539
Constellation's First Flight Test: Ares I-X
NASA Technical Reports Server (NTRS)
Davis, Stephan R.; Askins, Bruce R.
2010-01-01
On October 28, 2009, NASA launched Ares I-X, the first flight test of the Constellation Program that will send human beings to the Moon and beyond. This successful test is the culmination of a three-and-a-half-year, multi-center effort to design, build, and fly the first demonstration vehicle of the Ares I crew launch vehicle, the successor vehicle to the Space Shuttle. The suborbital mission was designed to evaluate the atmospheric flight characteristics of a vehicle dynamically similar to Ares I; perform a first stage separation and evaluate its effects; characterize and control roll torque; stack, fly, and recover a solid-motor first stage testing the Ares I parachutes; characterize ground, flight, and reentry environments; and develop and execute new ground hardware and procedures. Built from existing flight and new simulator hardware, Ares I-X integrated a Shuttle-heritage four-segment solid rocket booster for first stage propulsion, a spacer segment to simulate a five-segment booster, Peacekeeper axial engines for roll control, and Atlas V avionics, as well as simulators for the upper stage, crew module, and launch abort system. The mission leveraged existing logistical and ground support equipment while also developing new ones to accommodate the first in-line rocket for flying astronauts since the Saturn IB last flew from Kennedy Space Center (KSC) in 1975. This paper will describe the development and integration of the various vehicle and ground elements, from conception to stacking in KSC s Vehicle Assembly Building; hardware performance prior to, during, and after the launch; and preliminary lessons and data gathered from the flight. While the Constellation Program is currently under review, Ares I-X has and will continue to provide vital lessons for NASA personnel in taking a vehicle concept from design to flight.
Prenatal development of the normal human vertebral corpora in different segments of the spine.
Nolting, D; Hansen, B F; Keeling, J; Kjaer, I
1998-11-01
Vertebral columns from 13 normal human fetuses (10-24 weeks of gestation) that had aborted spontaneously were investigated as part of the legal autopsy procedure. The investigation included spinal cord analysis. To analyze the formation of the normal human vertebral corpora along the spine, including the early location and disappearance of the notochord. Reference material on the development of the normal human vertebral corpora is needed for interpretation of published observations on prenatal malformations in the spine, which include observations of various types of malformation (anencephaly, spina bifida) and various genotypes (trisomy 18, 21 and 13, as well as triploidy). The vertebral columns were studied by radiography (Faxitron X-ray apparatus, Faxitron Model 43,855, Hewlett Packard) in lateral, frontal, and axial views and by histology (decalcification, followed by toluidine blue and alcian blue staining) in an axial view. Immunohistochemical marking with Keratin Wide Spectrum was also done. Notochordal tissue (positive on marking with Keratin Wide Spectrum [DAKO, Denmark]) was located anterior to the cartilaginous body center in the youngest fetuses. The process of disintegration of the notochord and the morphology of the osseous vertebral corpora in the lumbosacral, thoracic, and cervical segments are described. Marked differences appeared in axial views, which were verified on horizontal histologic sections. Also, the increase in size was different in the different segments, being most pronounced in the thoracic and upper lumbar bodies. The lower thoracic bodies were the first to ossify. The morphologic changes observed by radiography were verified histologically. In this study, normal prenatal standards were established for the early development of the vertebral column. These standards can be used in the future for evaluation of pathologic deviations in the human vertebral column in the second trimester.
NASA Astrophysics Data System (ADS)
Nozka, L.; Hiklova, H.; Horvath, P.; Hrabovsky, M.; Mandat, D.; Palatka, M.; Pech, M.; Ridky, J.; Schovanek, P.
2018-05-01
We present results of the monitoring method we have used to characterize the optical performance deterioration due to dust on our mirror segments, which were produced for fluorescence detectors used in astrophysics experiments. The method is based on the measurement of scatter profiles of reflected light. The scatter profiles and the reflectivity of the mirror segments sufficiently describe the performance of the mirrors from the perspective of reconstruction algorithms. The method is demonstrated on our mirror segments installed within the Pierre Auger Observatory project. Although the segments are installed in air-conditioned buildings, both dust sedimentation and natural aging of the reflective layer deteriorate their optical throughput. In this paper, we summarize data from ten years of operation of the fluorescence detectors. During this time, we periodically measured in situ the scatter characteristics of the segment surface, represented by the specular reflectivity and the reflectivity of the diffuse component at a wavelength of 670 nm (also measured by means of the optical scatter technique). These measurements were extended with full Bidirectional Reflectance Distribution Function (BRDF) profiles of selected segments measured in the laboratory. Cleaning procedures are also discussed in the paper.
Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis
NASA Astrophysics Data System (ADS)
Che, E.; Olsen, M. J.
2017-09-01
Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common procedure of post-processing to group the point cloud into a number of clusters to simplify the data for the sequential modelling and analysis needed for most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing the normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern utilized during acquisition of TLS data from most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimation of the normal at each point, which limits the errors in normal estimation propagating to segmentation. Both an indoor and outdoor scene are used for an experiment to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
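For illustration, the sketch below shows a generic angle-based region growing over a point cloud with precomputed normals; note this is only a simplified stand-in for the grouping idea, since the paper's method instead exploits the TLS scan grid and explicitly avoids estimating a normal at every point. All names, thresholds, and the toy scene are hypothetical.

```python
import numpy as np
from collections import deque
from scipy.spatial import cKDTree

def region_grow(points, normals, radius=0.2, angle_deg=10.0):
    """Generic region growing: group neighbouring points whose (unit) normals agree
    within angle_deg. NOTE: a simplified stand-in, not the paper's grid-based method."""
    tree = cKDTree(points)
    cos_thresh = np.cos(np.radians(angle_deg))
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            for j in tree.query_ball_point(points[i], radius):
                if labels[j] == -1 and abs(np.dot(normals[i], normals[j])) >= cos_thresh:
                    labels[j] = current
                    queue.append(j)
        current += 1
    return labels

# Toy usage: two perpendicular planar patches should come out as two segments.
g = np.linspace(0, 1, 15)
xx, yy = np.meshgrid(g, g)
floor = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(xx.size)])
wall  = np.column_stack([xx.ravel(), np.ones(xx.size), yy.ravel()])
pts = np.vstack([floor, wall])
nrm = np.vstack([np.tile([0, 0, 1.0], (len(floor), 1)), np.tile([0, 1.0, 0], (len(wall), 1))])
print(np.unique(region_grow(pts, nrm, radius=0.15)))  # expected two labels: [0 1]
```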
Patient-specific cardiac phantom for clinical training and preprocedure surgical planning.
Laing, Justin; Moore, John; Vassallo, Reid; Bainbridge, Daniel; Drangova, Maria; Peters, Terry
2018-04-01
Minimally invasive mitral valve repair procedures, including MitraClip®, are becoming increasingly common. For cases of complex or diseased anatomy, clinicians may benefit from using a patient-specific cardiac phantom for training, surgical planning, and the validation of devices or techniques. An imaging-compatible cardiac phantom was developed to simulate a MitraClip® procedure. The phantom contained a patient-specific cardiac model manufactured using tissue-mimicking materials. To evaluate accuracy, the patient-specific model was imaged using computed tomography (CT), segmented, and the resulting point cloud dataset was compared using absolute distance to the original patient data. Comparing the molded model point cloud to the original dataset yielded a maximum Euclidean distance error of 7.7 mm, an average error of 0.98 mm, and a standard deviation of 0.91 mm. The phantom was validated using a MitraClip® device to ensure anatomical features and tools are identifiable under image guidance. Patient-specific cardiac phantoms may allow for surgical complications to be accounted for during preoperative planning. The information gained by clinicians involved in planning and performing the procedure should lead to shorter procedural times and better outcomes for patients.
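The accuracy evaluation reported above is an absolute nearest-neighbour distance comparison between the segmented model point cloud and the original patient data, summarized as maximum, mean, and standard deviation. A minimal sketch of that comparison follows; the array names and toy data are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_error(test_points, reference_points):
    """Nearest-neighbour absolute distance from each test point to the reference cloud,
    summarised as (max, mean, std) - the metrics reported for the phantom evaluation."""
    dists, _ = cKDTree(reference_points).query(test_points)
    return dists.max(), dists.mean(), dists.std()

# Toy usage: a jittered copy of a random reference cloud stands in for the molded model.
rng = np.random.default_rng(0)
reference = rng.uniform(0, 100, (5000, 3))                  # original patient-data points (mm)
molded = reference + rng.normal(0, 1.0, reference.shape)    # segmented model points (mm)
d_max, d_mean, d_std = cloud_to_cloud_error(molded, reference)
print(f"max {d_max:.2f} mm, mean {d_mean:.2f} mm, std {d_std:.2f} mm")
```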
Automated side-chain model building and sequence assignment by template matching.
Terwilliger, Thomas C
2003-01-01
An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
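The procedure builds, for each main-chain segment, a matrix of per-position probabilities over the 20 amino acids (from density-template matching) and then scores every possible alignment of that segment against the protein sequence. The sketch below illustrates only the alignment-scoring step with summed log-probabilities, assuming the probability matrix has already been estimated; it is not the RESOLVE implementation, and all names and toy data are hypothetical.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard amino acids

def alignment_scores(prob_matrix, sequence):
    """prob_matrix: (segment_length, 20) estimated probabilities that each main-chain
    position is each amino acid. Returns the log-probability of every alignment offset."""
    logp = np.log(np.clip(prob_matrix, 1e-12, None))
    idx = np.array([AA.index(a) for a in sequence])
    L, n = prob_matrix.shape[0], len(sequence)
    return np.array([logp[np.arange(L), idx[o:o + L]].sum() for o in range(n - L + 1)])

# Toy usage: a 4-residue segment whose side-chain densities point strongly to "GAVL".
probs = np.full((4, 20), 0.02)
for pos, aa in enumerate("GAVL"):
    probs[pos, AA.index(aa)] = 0.62
probs /= probs.sum(axis=1, keepdims=True)
seq = "MKTGAVLDERG"
scores = alignment_scores(probs, seq)
best = int(np.argmax(scores))
print("best offset:", best, "->", seq[best:best + 4])  # expected GAVL at offset 3
```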
On Estimation Strategies in an Inverse ELF Problem
NASA Astrophysics Data System (ADS)
Mushtak, Vadim; Williams, Earle; Boldi, Robert; Nagy, Tamas
2010-05-01
Since 1965 when Balser and Wagner, the pioneer ELF experimentalists, noticed the reflection of the properties of global lightning activity in their measurements, one of the most important and challenging tasks in the ELF research is the monitoring of the world-wide lightning activity from observations in the Schumann resonance (SR) frequency range (5 to about 40 Hz). Known attempts in this direction have been undertaken using a simplified theory of ELF propagation in a spherically symmetrical Earth-ionosphere cavity. Yet numerical simulations with more realistic ELF techniques show that incorporating into the theory the cavity's major asymmetry, the day/night one, not only improves the accuracy of the monitoring procedure, but also enhances its efficiency. The reason is that the presence of asymmetries provides - via the positions of sources and observer relative not only to each other, but also to the day/night terminator, - additional "dimensions" to the task in comparison with the symmetrical case, which, in its turn, improves the convergence of the inversion procedure. The realization of the theoretically achievable efficiency of such an inversion with real SR data depends critically on the quality of measurements. After collecting and analyzing ELF data from SR stations in various regions of the globe, it was found that even under seemingly most favorable experimental conditions the SR characteristics directly estimated from ELF observations rarely have a quality acceptable for use in the inversion. A three-stage rectifying algorithm has been developed and tested in the inversion procedure. In the first stage, the data - in the form of time series, - instead of being directly Fourier-transformed for estimating the SR characteristics, are divided into shorter segments, and histograms of the segments' energy content (EC) are considered for revealing the possible presence of various interferences and the "non-systematic" (i.e. not incorporated into the source model) components. On the basis of statistical properties of the EC histograms, "credibility diagrams" - the SR characteristics vs. the segments' EC threshold - are being computed and analyzed, the characteristics' stability (respectively, instability) with the threshold being an indicator of low (respectively, high) presence of the interference/non-systematic constituent. If the diagrams are not stable enough, a more detailed analysis is being carried out in the third stage for revealing and eliminating as far as possible the instability's cause. The efficiency of the rectifying procedure is demonstrated via an improved convergence of the inversion procedure with real-world data from a global network of SR stations in Europe, North America, Asia, and Antarctica. The authors are grateful to all the SR investigators who have provided their observations for use in this study.
Martin, Sébastien; Troccaz, Jocelyne; Daanenc, Vincent
2010-04-01
The authors present a fully automatic algorithm for the segmentation of the prostate in three-dimensional magnetic resonance (MR) images. The approach requires the use of an anatomical atlas which is built by computing transformation fields mapping a set of manually segmented images to a common reference. These transformation fields are then applied to the manually segmented structures of the training set in order to get a probabilistic map on the atlas. The segmentation is then realized through a two stage procedure. In the first stage, the processed image is registered to the probabilistic atlas. Subsequently, a probabilistic segmentation is obtained by mapping the probabilistic map of the atlas to the patient's anatomy. In the second stage, a deformable surface evolves toward the prostate boundaries by merging information coming from the probabilistic segmentation, an image feature model and a statistical shape model. During the evolution of the surface, the probabilistic segmentation allows the introduction of a spatial constraint that prevents the deformable surface from leaking in an unlikely configuration. The proposed method is evaluated on 36 exams that were manually segmented by a single expert. A median Dice similarity coefficient of 0.86 and an average surface error of 2.41 mm are achieved. By merging prior knowledge, the presented method achieves a robust and completely automatic segmentation of the prostate in MR images. Results show that the use of a spatial constraint is useful to increase the robustness of the deformable model comparatively to a deformable surface that is only driven by an image appearance model.
Martin, Mario; Béjar, Javier; Esposito, Gennaro; Chávez, Diógenes; Contreras-Hernández, Enrique; Glusman, Silvio; Cortés, Ulises; Rudomín, Pablo
2017-01-01
In a previous study we developed a Machine Learning procedure for the automatic identification and classification of spontaneous cord dorsum potentials (CDPs). This study further supported the proposal that in the anesthetized cat, the spontaneous CDPs recorded from different lumbar spinal segments are generated by a distributed network of dorsal horn neurons with structured (non-random) patterns of functional connectivity and that these configurations can be changed to other non-random and stable configurations after the nociceptive stimulation produced by the intradermic injection of capsaicin. Here we present a study showing that the sequence of identified forms of the spontaneous CDPs follows a Markov chain of at least order one. That is, the system has memory in the sense that the spontaneous activation of dorsal horn neuronal ensembles producing the CDPs is not independent of the most recent activity. We used this Markovian property to build a procedure to identify portions of signals as belonging to a specific functional state of connectivity among the neuronal networks involved in the generation of the CDPs. We have tested this procedure during acute nociceptive stimulation produced by the intradermic injection of capsaicin in intact as well as spinalized preparations. Altogether, our results indicate that CDP sequences cannot be generated by a renewal stochastic process. Moreover, it is possible to describe some functional features of activity in the cord dorsum by modeling the CDP sequences as generated by a first-order Markov stochastic process. Finally, these Markov models make it possible to determine the functional state that produced a CDP sequence. The proposed identification procedures appear to be useful for the analysis of the sequential behavior of the ongoing CDPs recorded from different spinal segments in response to a variety of experimental procedures including the changes produced by acute nociceptive stimulation. They are envisaged as a useful tool to examine alterations of the patterns of functional connectivity between dorsal horn neurons under normal and different pathological conditions, an issue of potential clinical concern.
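The core idea, estimating a first-order Markov transition matrix per functional state and then assigning a new CDP label sequence to the state with the higher likelihood, can be sketched as below. This is a minimal generic illustration, not the authors' procedure; the two transition matrices, state names, and simulation are hypothetical.

```python
import numpy as np

def fit_markov(sequence, n_symbols, alpha=1.0):
    """Estimate a first-order Markov transition matrix from a sequence of CDP form
    labels (integers 0..n_symbols-1), with Laplace smoothing alpha."""
    counts = np.full((n_symbols, n_symbols), alpha)
    for a, b in zip(sequence[:-1], sequence[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def sequence_loglik(sequence, transition):
    """Log-likelihood of a label sequence under a fitted transition matrix."""
    return float(np.sum(np.log([transition[a, b] for a, b in zip(sequence[:-1], sequence[1:])])))

# Toy usage: two hypothetical 'functional states' with different transition structure;
# a test sequence is assigned to the state giving the higher log-likelihood.
rng = np.random.default_rng(0)
def simulate(T, n):
    s = [0]
    for _ in range(n - 1):
        s.append(rng.choice(len(T), p=T[s[-1]]))
    return s

T_control   = np.array([[0.8, 0.2], [0.3, 0.7]])
T_capsaicin = np.array([[0.3, 0.7], [0.8, 0.2]])
fit_ctrl = fit_markov(simulate(T_control, 2000), 2)
fit_caps = fit_markov(simulate(T_capsaicin, 2000), 2)
test = simulate(T_capsaicin, 300)
print("identified state:",
      "capsaicin" if sequence_loglik(test, fit_caps) > sequence_loglik(test, fit_ctrl) else "control")
```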
Methodology for the systematic reviews on an adjacent segment pathology.
Norvell, Daniel C; Dettori, Joseph R; Skelly, Andrea C; Riew, K Daniel; Chapman, Jens R; Anderson, Paul A
2012-10-15
A systematic review. To provide a detailed description of the methods undertaken in the systematic search and analytical summary of adjacent segment pathology (ASP) issues and to describe the process used to develop consensus statements and clinical recommendations regarding factors associated with the prevention and treatment of ASP. We present methods used in conducting the systematic, evidence-based reviews and development of expert panel consensus statements and clinical recommendations on the classification, natural history, risk factors, and treatment of radiographical and clinical ASP. Our intent is that clinicians will combine the information from these reviews with an understanding of their own capacities and experience to better manage patients at risk of ASP and consider future research for the prevention and treatment of ASP. A systematic search and critical review of the English-language literature was undertaken for articles published on the classification, risk, risk factors, and treatment of radiographical and clinical ASP. Articles were screened for relevance using a priori criteria, and relevant articles were critically reviewed. Whether an article was included for review depended on whether the study question was descriptive, one of therapy, or one of prognosis. The strength of evidence for the overall body of literature in each topic area was determined by 2 independent reviewers considering risk of bias, consistency, directness, and precision of results using a modification of the Grades of Recommendation Assessment, Development and Evaluation (GRADE) criteria. Disagreements were resolved by consensus. Findings from articles meeting inclusion criteria were summarized. From these summaries, consensus statements or clinical recommendations were formulated among subject experts through a modified Delphi process using the GRADE approach. A total of 3382 articles were identified and screened on 14 topics relating to the classification, risks, risk factors, and treatment of radiographical and clinical ASP. Of these, 127 met our predetermined inclusion criteria and were used to answer specific clinical questions within each topic. Lack of precision in the terminology related to adjacent segment disease and critical evaluation of definitions used across included articles led to a consensus to use ASP and suggest it as a standard. No validated comprehensive classification system for ASP currently exists. The expert panel developed a consensus definition of radiographical and clinical ASP (RASP and CASP). Some of the highlights from the analyses included the annual, 5- and 10-year risks of developing cervical and lumbar ASP after surgery, several important risk factors associated with the development of cervical and lumbar ASP, and the possibility that some motion sparing procedures may be associated with a lower risk of ASP compared with fusion despite kinematic studies demonstrating similar adjacent segment mobility following these procedures. Other highlights included a high risk of proximal junctional kyphosis (PJK) following long fusions for deformity correction, postsurgical malalignment as a potential risk factor for RASP and the paucity of studies on treatment of cervical and lumbar ASP. Systematic reviews were undertaken to understand the classification, risks, risk factors, and treatment of RASP and CASP and to provide consensus statements and clinical recommendations. This article reports the methods used in the reviews.
Flight evaluation of two segment approaches for jet transport noise abatement
NASA Technical Reports Server (NTRS)
Rogers, R. A.; Wohl, B.; Gale, C. M.
1973-01-01
A 75 flight-hour operational evaluation was conducted with a representative four-engine fan-jet transport in a representative airport environment. The flight instrument systems were modified to automatically provide pilots with smooth and continuous pitch steering command information during two-segment approaches. Considering adverse weather, minimum ceiling and flight crew experience criteria, a transition initiation altitude of approximately 800 feet AFL would have broadest acceptance for initiating two-segment approach procedures in scheduled service. The profile defined by the system gave an upper glidepath of approximately 6 1/2 degrees. This was 1/2 degree greater than inserted into the area navigation system. The glidepath error is apparently due to an erroneous along-track, distance-to-altitude profile.
Pravastatin and endothelium dependent vasomotion after coronary angioplasty: the PREFACE trial
Mulder, H; Schalij, M; Kauer, B; Visser, R; van Dijkman, P R M; Jukema, J; Zwinderman, A; Bruschke, A
2001-01-01
OBJECTIVE—To test the hypothesis that the 3-hydroxy-3-methylglutaryl coenzyme-A reductase inhibitor pravastatin ameliorates endothelium mediated responses of dilated coronary segments: the PREFACE (pravastatin related effects following angioplasty on coronary endothelium) trial. DESIGN—A double blind, randomised, placebo controlled, multicentre study. SETTING—Four hospitals in the Netherlands. PATIENTS—63 non-smoking, non-hypercholesterolaemic patients scheduled for elective balloon angioplasty (pravastatin 34, placebo 29). INTERVENTIONS—The effects of three months of pravastatin treatment (40 mg daily) on endothelium dependent vasomotor function were studied. Balloon angioplasty was undertaken one month after randomisation, and coronary vasomotor function tests using acetylcholine were performed two months after balloon angioplasty. The angiograms were analysed quantitatively. MAIN OUTCOME MEASURES—The efficacy measure was the acetylcholine induced change in mean arterial diameter, determined in the dilated segment and in an angiographically normal segment of an adjacent non-manipulated coronary artery. RESULTS—Increasing acetylcholine doses produced vasoconstriction in the dilated segments (p = 0.004) but not in the normal segments. Pravastatin did not affect the vascular response to acetylcholine in either the dilated segments (p = 0.09) or the non-dilated sites. Endothelium dependent vasomotion in normal segments was correlated with that in dilated segments (r = 0.47, p < 0.001). There were fewer procedure related events in the pravastatin group than in the placebo group (p < 0.05). CONCLUSIONS—Endothelium dependent vasomotion in normal segments is correlated with that in dilated segments. A significant beneficial effect of pravastatin on endothelial function could not be shown, but in the dilated segments there was a trend towards a beneficial treatment effect in the pravastatin group. Keywords: angioplasty; endothelium; acetylcholine; pravastatin PMID:11602546
Karimi, Noureddin; Akbarov, Parvin; Rahnama, Leila
2017-01-01
Low Back Pain (LBP) is considered as one of the most frequent disorders, which about 80% of adults experience in their lives. Lumbar disc herniation (LDH) is a cause for acute LBP. Among conservative treatments, traction is frequently used by clinicians to manage LBP resulting from LDH. However, there is still a lack of consensus about its efficacy. The purpose of this study was to evaluate the effects of segmental traction therapy on lumbar discs herniation, pain, lumbar range of motion (ROM), and back extensor muscles endurance in patients with acute LBP induced by LDH. Fifteen patients with acute LBP diagnosed by LDH participated in the present study. Participants undertook 15 sessions of segmental traction therapy along with conventional physiotherapy, 5 times a week for 3 weeks. Lumbar herniated mass size was measured before and after the treatment protocol using magnetic resonance imaging. Furthermore, pain, lumbar ROM and back muscle endurance were evaluated before and after the procedure using clinical outcome measures. Following the treatment protocol, herniated mass size and patients' pain were reduced significantly. In addition, lumbar flexion ROM showed a significant improvement. However, no significant change was observed for back extensor muscle endurance after the treatment procedure. The result of the present study showed segmental traction therapy might play an important role in the treatment of acute LBP stimulated by LDH.
Piché, Mathieu; Benoît, Pierre; Lambert, Julie; Barrette, Virginie; Grondin, Emmanuelle; Martel, Julie; Paré, Amélie; Cardin, André
2007-01-01
The objective of this study was to develop a measurement method that could be implemented in chiropractic for the evaluation of angular and translational intervertebral motion of the cervical spine. Flexion-extension radiographs were digitized with a scanner at a ratio of 1:1 and imported into a software, allowing segmental motion measurements. The measurements were obtained by selecting the most anteroinferior point and the most posteroinferior point of a vertebral body (anterior and posterior arch, respectively, for C1), with the origin of the reference frame set at the most posteroinferior point of the vertebral body below. The same procedure was performed for both the flexion and extension radiographs, and the coordinates of the 2 points were used to calculate the angular movement and the translation between the 2 vertebrae. This method provides a measure of intervertebral angular and translational movement. It uses a different reference frame for each joint instead of the same reference frame for all joints and thus provides a measure of motion in the plane of each articulation. The calculated values obtained are comparable to other studies on intervertebral motion and support further development to validate the method. The present study proposes a computerized procedure to evaluate intervertebral motion of the cervical spine. This procedure needs to be validated with a reliability study but could provide a valuable tool for doctors of chiropractic and further spinal research.
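A minimal sketch of the described computation follows: the anteroinferior and posteroinferior points of the upper vertebra are expressed in a reference frame whose origin is the posteroinferior point of the vertebra below, on both the flexion and extension films, and the angular and translational motion are taken as the differences. The function names and coordinate values are hypothetical.

```python
import numpy as np

def segment_angle(anteroinferior, posteroinferior, origin):
    """Angle (degrees) of the vertebral body line in a reference frame whose origin is
    the posteroinferior point of the vertebra below."""
    a = np.asarray(anteroinferior, float) - np.asarray(origin, float)
    p = np.asarray(posteroinferior, float) - np.asarray(origin, float)
    v = a - p  # posteroinferior -> anteroinferior vector of the upper vertebra
    return np.degrees(np.arctan2(v[1], v[0]))

def intervertebral_motion(flexion_pts, extension_pts):
    """flexion_pts / extension_pts: dicts with keys 'ai', 'pi' (upper vertebra) and
    'origin' (posteroinferior point of the vertebra below), in mm on the digitised film.
    Returns (angular motion in degrees, translation of the posteroinferior point in mm)."""
    ang = segment_angle(flexion_pts["ai"], flexion_pts["pi"], flexion_pts["origin"]) - \
          segment_angle(extension_pts["ai"], extension_pts["pi"], extension_pts["origin"])
    trans = np.linalg.norm(
        (np.asarray(flexion_pts["pi"], float) - np.asarray(flexion_pts["origin"], float)) -
        (np.asarray(extension_pts["pi"], float) - np.asarray(extension_pts["origin"], float)))
    return ang, trans

# Toy usage with hypothetical digitised coordinates (mm).
flex = {"ai": (32.0, 21.5), "pi": (12.5, 19.0), "origin": (11.0, 2.0)}
ext  = {"ai": (33.0, 17.0), "pi": (13.0, 20.5), "origin": (11.0, 2.0)}
angle, translation = intervertebral_motion(flex, ext)
print(f"angular motion {angle:.1f} deg, translation {translation:.1f} mm")
```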
Experimental investigation of the structural behavior of equine urethra.
Natali, Arturo Nicola; Carniel, Emanuele Luigi; Frigo, Alessandro; Fontanella, Chiara Giulia; Rubini, Alessandro; Avital, Yochai; De Benedictis, Giulia Maria
2017-04-01
An integrated experimental and computational investigation was developed to provide a methodology for characterizing the structural response of the urethral duct. The investigation provides information that is suitable for understanding lower urinary tract mechanical functionality and for the optimal design of prosthetic devices. Experimental activity entailed inflation tests performed on segments of horse penile urethras from both proximal and distal regions. Inflation tests were carried out by imposing different volumes. Each test was performed according to a two-step procedure. The tubular segment was inflated almost instantaneously during the first step, while the volume was held constant for about 300 s to allow the development of relaxation processes during the second step. Tests performed on the same specimen were interspersed by 600 s of rest to allow the specimen to recover its mechanical condition. Results from the experimental activities were statistically analyzed and processed by means of a specific mechanical model. This computational model was developed with the purpose of interpreting the general pressure-volume-time response of biologic tubular structures. The model includes parameters that capture the elastic and viscous behavior of hollow structures and are directly correlated with the results from the experimental activities. Post-processing of the experimental data provided information about the non-linear elastic and time-dependent behavior of the urethral duct. In detail, statistically representative pressure-volume and pressure-relaxation curves were identified and summarized by structural parameters. Considering elastic properties, initial stiffness ranged between 0.677 ± 0.026 kPa and 0.262 ± 0.006 kPa moving from the proximal to the distal region of the penile urethra. Viscous parameters showed values typical of soft biological tissues, such as τ1 = 0.153 ± 0.018 s, τ2 = 17.458 ± 1.644 s and τ1 = 0.201 ± 0.085 s, τ2 = 8.514 ± 1.379 s for the proximal and distal regions, respectively. A general procedure for the mechanical characterization of the urethral duct has been provided. The proposed methodology allows the identification of mechanical parameters that properly express the mechanical behavior of the biological tube. The approach is especially suitable for evaluating the influence of degenerative phenomena on lower urinary tract mechanical functionality. This information is essential for the optimal design of potential surgical procedures and devices. Copyright © 2017 Elsevier B.V. All rights reserved.
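The relaxation behaviour above is summarized by two time constants, τ1 and τ2, per region. The sketch below shows one conventional way of extracting such constants, fitting a two-term exponential (Prony-type) decay to a pressure-relaxation curve with scipy; this is only an assumed stand-in for the authors' specific mechanical model, and all values in the toy example are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, p_inf, a1, tau1, a2, tau2):
    """Two-term exponential (Prony-type) pressure relaxation after a step inflation."""
    return p_inf + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Toy usage: synthetic relaxation data with a fast and a slow time constant.
rng = np.random.default_rng(0)
t = np.linspace(0, 300, 3000)                       # s, matching the 300 s hold phase
true = relaxation(t, 2.0, 0.8, 1.5, 0.5, 20.0)      # kPa; hypothetical parameter values
noisy = true + rng.normal(0, 0.01, t.size)
p0 = (noisy[-1], 0.5, 1.0, 0.5, 30.0)               # rough initial guess
popt, _ = curve_fit(relaxation, t, noisy, p0=p0)
print("tau1 = %.2f s, tau2 = %.2f s" % tuple(sorted((popt[2], popt[4]))))
```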
Time-lapse microscopy and image processing for stem cell research: modeling cell migration
NASA Astrophysics Data System (ADS)
Gustavsson, Tomas; Althoff, Karin; Degerman, Johan; Olsson, Torsten; Thoreson, Ann-Catrin; Thorlin, Thorleif; Eriksson, Peter
2003-05-01
This paper presents hardware and software procedures for automated cell tracking and migration modeling. A time-lapse microscopy system equipped with a computer controllable motorized stage was developed. The performance of this stage was improved by incorporating software algorithms for stage motion displacement compensation and auto focus. The microscope is suitable for in-vitro stem cell studies and allows for multiple cell culture image sequence acquisition. This enables comparative studies concerning rate of cell splits, average cell motion velocity, cell motion as a function of cell sample density and many more. Several cell segmentation procedures are described as well as a cell tracking algorithm. Statistical methods for describing cell migration patterns are presented. In particular, the Hidden Markov Model (HMM) was investigated. Results indicate that if the cell motion can be described as a non-stationary stochastic process, then the HMM can adequately model aspects of its dynamic behavior.
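Since the abstract highlights the Hidden Markov Model as the migration model of interest, a minimal sketch of that idea follows: fit a two-state Gaussian HMM to the per-frame step lengths of a tracked cell and decode a resting versus migrating phase per frame. It uses the third-party hmmlearn package as a generic stand-in, not the authors' implementation; the trajectory and state interpretation are hypothetical.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # third-party package, assumed available

# Toy trajectory: a cell alternating between a "resting" and a "migrating" phase.
rng = np.random.default_rng(0)
steps_rest = rng.normal(0.5, 0.2, 150)      # small frame-to-frame displacements (um)
steps_move = rng.normal(4.0, 1.0, 150)      # large displacements while migrating
steps = np.abs(np.concatenate([steps_rest, steps_move, steps_rest])).reshape(-1, 1)

# Fit a 2-state Gaussian HMM to the step lengths and decode the hidden phase per frame.
hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200, random_state=0)
hmm.fit(steps)
states = hmm.predict(steps)
migrating_state = int(np.argmax(hmm.means_.ravel()))  # state with the larger mean step
print("fraction of frames classified as migrating:",
      round(float(np.mean(states == migrating_state)), 2))   # expected near 0.33
```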
Image Fusion and 3D Roadmapping in Endovascular Surgery.
Jones, Douglas W; Stangenberg, Lars; Swerdlow, Nicholas J; Alef, Matthew; Lo, Ruby; Shuja, Fahad; Schermerhorn, Marc L
2018-05-21
Practitioners of endovascular surgery have historically utilized two-dimensional (2D) intraoperative fluoroscopic imaging, with intra-vascular contrast opacification, to treat complex three-dimensional (3D) pathology. Recently, major technical developments in intraoperative imaging have made image fusion techniques possible: the creation of a 3D patient-specific vascular roadmap based on preoperative imaging which aligns with intraoperative fluoroscopy, with many potential benefits. First, a 3D model is segmented from preoperative imaging, typically a CT scan. The model is then used to plan for the procedure, with placement of specific markers and storing of C-arm angles that will be used for intra-operative guidance. At the time of the procedure, an intraoperative cone-beam CT is performed and the 3D model is registered to the patient's on-table anatomy. Finally, the system is used for live guidance where the 3D model is codisplayed overlying fluoroscopic images. Copyright © 2018. Published by Elsevier Inc.
Vertical muscle transposition with silicone band belting in VI nerve palsy
Freitas, Cristina
2016-01-01
A woman aged 60 years developed a Millard-Gubler syndrome after a diagnosis of a cavernous angioma in the median and paramedian areas of the pons. In this context, she presented a right VI nerve palsy, right conjugate gaze palsy, facial palsy and left hemiparesis. To improve the complete VI nerve palsy, we planned a modified transposition approach, in which procedure we made a partial transposition of vertical rectus with a silicone band that was fixated posteriorly. After the procedure, the patient gained the ability to slightly abduct the right eye. We found no compensatory torticollis in the primary position of gaze. There was also an improvement of elevation and depression movements of the right eye. We obtained satisfactory results with a theoretically reversible technique, which is adjustable intraoperatively with no need of muscle detachment, preventing anterior segment ischaemia and allowing simultaneous recession of the medial rectus muscles, if necessary. PMID:27974341
Traumatic laryngotracheal stenosis--an alternative surgical technique.
Syal, Rajan; Tyagi, Isha; Goyal, Amit
2006-02-01
Reconstruction of combined laryngotracheal stenosis requires complex techniques including resection and incorporation of grafts and stents that can be performed as a single or multistaged procedure. We managed a complicated case of traumatic laryngotracheal stenosis; the surgical technique is discussed. A 16-year-old male presented with Stage-3 laryngotracheal stenosis of grade 3 to 4 (>70% obstruction of the tracheal lumen) involving a 5 cm segment of the larynx and trachea. Restoration of the critical functions of respiration and phonation was achieved in this patient by resection anastomosis of the trachea and with subglottic remodeling. Resection of the 5 cm long segment of trachea with primary anastomosis in this case would have created tension at the site of anastomosis. We therefore resected a 3 cm segment of trachea along with subglottic remodeling instead of removing the 5 cm segment of the stenosed laryngotracheal region and performing a thyrotracheal anastomosis. In complicated long-segment laryngotracheal stenosis, tracheal resection and subglottic remodeling with primary anastomosis can be an alternative approach. Fibrin glue can be used to support free bone/cartilage grafts in laryngotracheal reconstructions.
Laser welding of removable partial denture frameworks.
Brudvik, James S; Lee, Seungbum; Croshaw, Steve N; Reimers, Donald L; Reimers, Dave L
2008-01-01
To identify and measure distortions inherent in the casting process of a Class III mandibular cobalt-chromium (Co-Cr) framework to illustrate the problems faced by the laboratory technician and the clinician and to measure the changes that occur during the correction of the fit discrepancy using laser welding. Five identical castings of a Co-Cr alloy partial denture casting were made and measured between 3 widely separated points using the x, y, and z adjustments of a Nikon Measurescope. The same measurements were made after each of the following clinical and laboratory procedures: sprue removal, sectioning of the casting into 3 parts through the posterior meshwork, fitting the segments to the master cast, picking up the segments using resin, and laser welding of the 3 segments. Measurements of all 5 castings showed a cross-arch decrease after sprue removal, an increase after fitting the segments to the master cast, and a slight decrease after resin pickup and laser welding. Within the limitations of this study, the findings suggest that precise tooth-frame relations can be established by resin pickup and laser welding of segments of Co-Cr removable partial denture frameworks.
NASA Technical Reports Server (NTRS)
Carnes, J. G.; Baird, J. E. (Principal Investigator)
1980-01-01
The classification procedure utilized in making crop proportion estimates for corn and soybeans using remotely sensed data was evaluated. The procedure was derived during the transition year of the Large Area Crop Inventory Experiment. Analysis of variance techniques were applied to classifications performed by 3 groups of analysts who processed 25 segments selected from 4 agrophysical units (APU's). Group and APU effects were assessed to determine factors which affected the quality of the classifications. The classification results were studied to determine the effectiveness of the procedure in producing corn and soybeans proportion estimates.
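Group and APU effects of the kind assessed here are typically examined with a two-way analysis of variance. The sketch below is a generic illustration with statsmodels on a hypothetical table of segment-level proportion-estimate errors, not the original LACIE analysis; the column names, factor levels, and data are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical table: proportion-estimate error per segment, by analyst group and APU.
rng = np.random.default_rng(0)
rows = []
for group in ["A", "B", "C"]:                       # three analyst groups
    for apu in ["APU1", "APU2", "APU3", "APU4"]:    # four agrophysical units
        for _ in range(8):                          # segments per cell (toy)
            err = rng.normal(0, 2) + {"A": 0.0, "B": 1.0, "C": -0.5}[group]
            rows.append({"group": group, "apu": apu, "error": err})
df = pd.DataFrame(rows)

# Two-way ANOVA: do analyst group and APU affect the quality of the classifications?
model = smf.ols("error ~ C(group) + C(apu)", data=df).fit()
print(anova_lm(model, typ=2))
```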
Gebreyesus, Grum; Lund, Mogens S; Buitenhuis, Bart; Bovenhuis, Henk; Poulsen, Nina A; Janss, Luc G
2017-12-05
Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci of large effect. The amount of variation explained may vary between regions leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls. Single-nucleotide polymorphisms (SNPs), from 50K SNP arrays, were grouped into non-overlapping genome segments. A segment was defined as one SNP, or a group of 50, 100, or 200 adjacent SNPs, or one chromosome, or the whole genome. Traditional univariate and bivariate genomic best linear unbiased prediction (GBLUP) models were also run for comparison. Reliabilities were calculated through a resampling strategy and using deterministic formula. BayesAS models improved prediction reliability for most of the traits compared to GBLUP models and this gain depended on segment size and genetic architecture of the traits. The gain in prediction reliability was especially marked for the protein composition traits β-CN, κ-CN and β-LG, for which prediction reliabilities were improved by 49 percentage points on average using the MT-BayesAS model with a 100-SNP segment size compared to the bivariate GBLUP. Prediction reliabilities were highest with the BayesAS model that uses a 100-SNP segment size. The bivariate versions of our BayesAS models resulted in extra gains of up to 6% in prediction reliability compared to the univariate versions. Substantial improvement in prediction reliability was possible for most of the traits related to milk protein composition using our novel BayesAS models. Grouping adjacent SNPs into segments provided enhanced information to estimate parameters and allowing the segments to have different (co)variances helped disentangle heterogeneous (co)variances across the genome.
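A basic building block of the segment-based approach is grouping adjacent SNPs into non-overlapping segments and summarizing each segment separately, for example through per-segment genomic relationship matrices on which segment-specific (co)variances can then be placed. The sketch below shows only that pre-processing step, not the BayesAS sampler itself; the genotype coding, matrix names, and toy dimensions are hypothetical.

```python
import numpy as np

def snp_segments(n_snps, segment_size=100):
    """Split SNP indices into non-overlapping, adjacent segments (last one may be shorter)."""
    return [np.arange(start, min(start + segment_size, n_snps))
            for start in range(0, n_snps, segment_size)]

def segment_grms(genotypes, segment_size=100):
    """Per-segment genomic relationship matrices from a (n_animals, n_snps) genotype
    matrix coded 0/1/2; building blocks for segment-specific (co)variances.
    NOTE: a pre-processing sketch only, not the Bayesian BayesAS model."""
    Z = genotypes - genotypes.mean(axis=0)          # centre each SNP
    grms = []
    for seg in snp_segments(genotypes.shape[1], segment_size):
        Zs = Z[:, seg]
        grms.append(Zs @ Zs.T / len(seg))
    return grms

# Toy usage: 20 animals, 450 SNPs -> five segments of up to 100 SNPs each.
rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=(20, 450))
print([g.shape for g in segment_grms(geno, 100)], "->", len(snp_segments(450, 100)), "segments")
```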
Wang, Y Y; Li, T; Liu, Y W; Liu, B J; Hu, X M; Wang, Y; Gao, W Q; Wu, P; Huang, L; Li, X; Peng, W J; Ning, M
2017-04-24
Objective: To evaluate the effect of ischemic post-conditioning (IPC) on the prevention of cardio-renal damage in patients with acute ST-segment elevation myocardial infarction (STEMI) after primary percutaneous coronary intervention (PPCI). Methods: A total of 251 consecutive STEMI patients who underwent PPCI in the heart center of Tianjin Third Central Hospital from January 2012 to June 2014 were enrolled in this prospective, randomized, controlled, single-blinded clinical registry study. Patients were randomly divided into an IPC group (123 cases) and a control group (128 cases) using a random number table. Patients in the IPC group underwent three cycles of inflation/deflation at low inflation pressure using a balloon catheter within one minute after culprit-vessel blood flow recovery, and were then treated by PPCI. Patients in the control group received the PPCI procedure directly. The basic clinical characteristics, incidence of reperfusion arrhythmia during the procedure, rate of electrocardiographic ST-segment decline, peak values of myocardial necrosis markers, incidence of contrast-induced acute kidney injury (CI-AKI), and one-year major adverse cardiovascular events (MACE), including recurrent myocardial infarction, malignant arrhythmia, rehospitalization for heart failure, repeat revascularization, stroke, and death after the procedure, were compared between the two groups. Results: The ages of the IPC group and control group were comparable ((61.2±12.6) vs. (64.2±12.1) years, P=0.768). The incidence of reperfusion arrhythmia during the procedure was significantly lower in the IPC group than in the control group (42.28% (52/123) vs. 57.03% (73/128), P=0.023). The rate of electrocardiographic ST-segment decline immediately after the procedure was significantly higher in the IPC group than in the control group (77.24% (95/123) vs. 64.84% (83/128), P=0.037). The peak values of myocardial necrosis markers after the procedure were significantly lower in the IPC group than in the control group (creatine kinase: 1 257 (682, 2 202) U/L vs. 1 737 (794, 2 816) U/L, P=0.029; creatine kinase-MB: 123 (75, 218) U/L vs. 165 (95, 288) U/L, P=0.010). The rate of CI-AKI after the procedure was significantly lower in the IPC group than in the control group (5.69% (7/123) vs. 14.06% (18/128), P=0.034). The rate of one-year MACE was significantly lower in the IPC group than in the control group (7.32% (9/123) vs. 15.63% (20/128), P=0.040). Conclusion: An IPC strategy performed immediately before PPCI can reduce myocardial ischemia-reperfusion injury and significantly lower the rates of CI-AKI and one-year MACE in STEMI patients, and thus has a significant protective effect on the heart and kidney in these patients. Clinical Trial Registration: Chinese Clinical Trials Registry, ChiCTR-ICR-15006590.
[The advancement of robotic surgery--successes, failures, challenges].
Haidegger, Tamás
2010-10-10
Computer-integrated robotic surgery systems appeared more than twenty years ago, and since then hundreds of different prototypes have been developed. Only a fraction of them have been commercialized, mostly to support neurosurgical and orthopaedic procedures. Unquestionably, the most successful one is the da Vinci surgical system, primarily deployed in urology and general laparoscopic surgery. It is developed and marketed by Intuitive Surgical Inc. (Sunnyvale, CA, USA), the only profitable company in this segment. The da Vinci has made robotic surgery known and acknowledged throughout the world, and the great results delivered convinced most of the former critics of the technology. Its success derived from a well-chosen business development strategy, the proficiency of the developers, appropriate timing and a huge pot of luck. This article presents the most important features of the da Vinci system and the history of its development, along with its medical, economic and financial aspects, and seeks to answer why this particular system became successful.
NASA Astrophysics Data System (ADS)
Favaro, Alberto; Lad, Akash; Formenti, Davide; Zani, Davide Danilo; De Momi, Elena
2017-03-01
From a translational neuroscience/neurosurgery perspective, sheep are considered good candidates to study because of the similarity between their brain and the human one. Automatic planning systems for safe keyhole neurosurgery maximize the probe/catheter distance from vessels and risky structures. This work consists of the development of a trajectory planner for straight catheter placement, intended to be used for investigating drug diffusivity mechanisms in the sheep brain. Automatic brain segmentation of gray matter, white matter and cerebrospinal fluid is achieved using an online available sheep atlas. Segmentation of the ventricles, midbrain and cerebellum has also been carried out. The veterinary surgeon is asked to select a target point within the white matter to be reached by the probe and to define an entry area on the brain cortex. To mitigate the risk of hemorrhage during the insertion process, which can prevent the success of the insertion procedure, the trajectory planner performs a curvature analysis of the brain cortex and removes the sulci, the parts of the brain cortex where superficial blood vessels are naturally located, from the pool of possible entry points. A limited set of trajectories is then computed and presented to the surgeon, satisfying an optimality criterion based on a cost function that considers the distance from critical brain areas and the whole trajectory length. The planner proved to be effective in defining rectilinear trajectories accounting for the safety constraints determined by the brain morphology. It also demonstrated a short computational time and good capability in segmenting gyri and sulci surfaces.
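A minimal Python sketch of the kind of cost function described above: each candidate entry point is scored for a straight trajectory to the target by combining path length with the clearance from critical structures, sampled along the path from a distance transform of a labelled risk mask. The volumes, labels, candidate points and weights are all assumptions for illustration.

import numpy as np
from scipy.ndimage import distance_transform_edt

risk_mask = np.zeros((64, 64, 64), dtype=bool)      # True where critical structures lie
risk_mask[30:34, 30:34, :] = True                    # hypothetical vessel/ventricle block
dist_to_risk = distance_transform_edt(~risk_mask)    # voxel distance to the nearest risky voxel

target = np.array([48.0, 48.0, 32.0])
entry_candidates = [np.array([5.0, 5.0, 32.0]), np.array([5.0, 60.0, 32.0])]

def trajectory_cost(entry, target, w_len=0.1, n_samples=100):
    """Lower is better: penalise long paths and paths that pass close to risky voxels."""
    points = np.linspace(entry, target, n_samples)
    clearance = dist_to_risk[tuple(np.round(points).astype(int).T)]
    length = np.linalg.norm(target - entry)
    return w_len * length + 1.0 / (clearance.min() + 1e-3)

best = min(entry_candidates, key=lambda e: trajectory_cost(e, target))
print("preferred entry point:", best)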
Bi, Yonghua; Chen, Hongmei; Ding, Penxu; Ren, Jianzhuang; Han, Xinwei
2018-05-30
To compare long-term outcomes of retrievable stents and permanent stents for BCS due to long-segment obstructive IVC. Between July 2000 and August 2016, 42 patients with BCS due to long-segment obstructive IVC were treated with retrievable stents (RS) and 41 patients were treated with permanent stents (PS). The retrievable stents were eventually removed after the thrombus disappeared. Patients were subsequently followed up by color Doppler sonography or CT scanning. All retrievable stent placements were successful, and 37 retrievable stents were retrieved 8 to 29 days later. Forty-two stents were implanted in the PS group. One retrieval failure of a retrievable stent occurred, and two cannulation failures were found in the PS group. Two deaths were possibly procedure-related; both patients died from acute pulmonary thromboembolism perioperatively. One patient developed acute cerebral infarction and recovered after treatment. In the PS group, minor complications were found in 3 patients. The length of the IVC lesion segment and the length and thickness of the IVC thrombus decreased significantly, and the diameters of the retrocaval and diaphragmatic IVC increased significantly in both groups. During follow-up, 3 patients died from liver failure in the RS group and 2 patients died in the PS group. The RS group showed a significantly higher primary patency rate than the PS group. Cumulative 1-, 3-, and 5-year secondary patency rates were 95.2%, 89.6%, and 89.6% in the RS group, and 100%, 96.6%, and 96.6% in the PS group (p=0.7109). Retrievable stenting is effective for BCS due to long-segment obstructive IVC, with a higher primary patency rate.
Quantification of intraventricular blood clot in MR-guided focused ultrasound surgery
NASA Astrophysics Data System (ADS)
Hess, Maggie; Looi, Thomas; Lasso, Andras; Fichtinger, Gabor; Drake, James
2015-03-01
Intraventricular hemorrhage (IVH) affects nearly 15% of preterm infants. It can lead to ventricular dilation and cognitive impairment. To ablate IVH clots, MR-guided focused ultrasound surgery (MRgFUS) is investigated. This procedure requires accurate, fast and consistent quantification of ventricle and clot volumes. We developed a semi-autonomous segmentation (SAS) algorithm for measuring changes in the ventricle and clot volumes. Images are normalized, and then ventricle and clot masks are registered to the images. Voxels of the registered masks and voxels obtained by thresholding the normalized images are used as seed points for competitive region growing, which provides the final segmentation. The user selects the areas of interest for correspondence after thresholding and these selections are the final seeds for region growing. SAS was evaluated on an IVH porcine model. SAS was compared to ground truth manual segmentation (MS) for accuracy, efficiency, and consistency. Accuracy was determined by comparing clot and ventricle volumes produced by SAS and MS, and comparing contours by calculating 95% Hausdorff distances between the two labels. In a two one-sided test (TOST), SAS and MS were found to be statistically equivalent (p < 0.01). SAS on average was found to be 15 times faster than MS (p < 0.01). Consistency was determined by repeated segmentation of the same image by both SAS and manual methods, SAS being significantly more consistent than MS (p < 0.05). SAS is a viable method to quantify the IVH clot and the lateral brain ventricles and it is serving in a large-scale porcine study of MRgFUS treatment of IVH clot lysis.
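A minimal Python sketch in the spirit of the seeded, competitive region growing described above: seeds from thresholding compete to label background, ventricle and clot. Here the competition is implemented with scikit-image's marker-based watershed rather than the authors' algorithm, and the image, intensities and thresholds are synthetic assumptions.

import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

rng = np.random.default_rng(1)
image = rng.normal(0.2, 0.05, size=(64, 64))
image[20:40, 20:40] += 0.5          # bright "ventricle" region
image[25:32, 25:32] += 0.3          # even brighter "clot" inside it

markers = np.zeros(image.shape, dtype=int)
markers[image < 0.3] = 1                        # background seeds (threshold is an assumption)
markers[(image > 0.6) & (image < 0.9)] = 2      # ventricle seeds
markers[image > 0.9] = 3                        # clot seeds

# Seeds grow competitively over the gradient magnitude until the regions meet.
labels = watershed(ndi.gaussian_gradient_magnitude(image, sigma=1), markers)
print("clot voxels:", int((labels == 3).sum()), "ventricle voxels:", int((labels == 2).sum()))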
Lemole, G Michael; Banerjee, P Pat; Luciano, Cristian; Neckrysh, Sergey; Charbel, Fady T
2007-07-01
Mastery of the neurosurgical skill set involves many hours of supervised intraoperative training. Convergence of political, economic, and social forces has limited neurosurgical resident operative exposure. There is need to develop realistic neurosurgical simulations that reproduce the operative experience, unrestricted by time and patient safety constraints. Computer-based, virtual reality platforms offer just such a possibility. The combination of virtual reality with dynamic, three-dimensional stereoscopic visualization, and haptic feedback technologies makes realistic procedural simulation possible. Most neurosurgical procedures can be conceptualized and segmented into critical task components, which can be simulated independently or in conjunction with other modules to recreate the experience of a complex neurosurgical procedure. We use the ImmersiveTouch (ImmersiveTouch, Inc., Chicago, IL) virtual reality platform, developed at the University of Illinois at Chicago, to simulate the task of ventriculostomy catheter placement as a proof-of-concept. Computed tomographic data are used to create a virtual anatomic volume. Haptic feedback offers simulated resistance and relaxation with passage of a virtual three-dimensional ventriculostomy catheter through the brain parenchyma into the ventricle. A dynamic three-dimensional graphical interface renders changing visual perspective as the user's head moves. The simulation platform was found to have realistic visual, tactile, and handling characteristics, as assessed by neurosurgical faculty, residents, and medical students. We have developed a realistic, haptics-based virtual reality simulator for neurosurgical education. Our first module recreates a critical component of the ventriculostomy placement task. This approach to task simulation can be assembled in a modular manner to reproduce entire neurosurgical procedures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burge, S.W.
This report describes the theory and structure of the FORCE2 flow program. The manual describes the governing model equations, the solution procedure, and their implementation in the computer program. FORCE2 is an extension of an existing B&V multidimensional, two-phase flow program. FORCE2 was developed for application to fluid beds by implementing a gas-solids modeling technology derived, in part, during a joint government-industry research program, "Erosion of FBC Heat Transfer Tubes," coordinated by Argonne National Laboratory. The development of FORCE2 was sponsored by ASEA-Babcock, an industry participant in this program. This manual is the principal documentation for the program theory and organization. Program usage and post-processing of code predictions with the FORCE2 post-processor are described in a companion report, FORCE2 -- A Multidimensional Flow Program for Fluid Beds, User's Guide. This manual is segmented into sections to facilitate its usage. In section 2.0, the mass and momentum conservation principles, the basis for the code, are presented. In section 3.0, the constitutive relations used in modeling gas-solids hydrodynamics are given. The finite-difference model equations are derived in section 4.0 and the solution procedures are described in sections 5.0 and 6.0. Finally, the implementation of the model equations and solution procedure in FORCE2 is described in section 7.0.
NASA Technical Reports Server (NTRS)
Baker, T. C. (Principal Investigator)
1982-01-01
A general methodology is presented for estimating a stratum's at-harvest crop acreage proportion for a given crop year (target year) from the crop's estimated acreage proportion for sample segments from within the stratum. Sample segments from crop years other than the target year are (usually) required for use in conjunction with those from the target year. In addition, the stratum's (identifiable) crop acreage proportion may be estimated for times other than at-harvest in some situations. A by-product of the procedure is a methodology for estimating the change in the stratum's at-harvest crop acreage proportion from crop year to crop year. An implementation of the proposed procedure as a statistical analysis system routine using the system's matrix language module, PROC MATRIX, is described and documented. Three examples illustrating use of the methodology and algorithm are provided.
Pressure Oscillations and Structural Vibrations in Space Shuttle RSRM and ETM-3 Motors
NASA Technical Reports Server (NTRS)
Mason, D. R.; Morstadt, R. A.; Cannon, S. M.; Gross, E. G.; Nielsen, D. B.
2004-01-01
The complex interactions between internal motor pressure oscillations resulting from vortex shedding, the motor's internal acoustic modes, and the motor's structural vibration modes were assessed for the Space Shuttle four-segment booster Reusable Solid Rocket Motor and for the five-segment engineering test motor ETM-3. Two approaches were applied: 1) a predictive procedure based on numerically solving modal representations of a solid rocket motor's acoustic equations of motion, and 2) a computational fluid dynamics two-dimensional axi-symmetric large eddy simulation at discrete motor burn times.
Asymmetric bias in user guided segmentations of brain structures
NASA Astrophysics Data System (ADS)
Styner, Martin; Smith, Rachel G.; Graves, Michael M.; Mosconi, Matthew W.; Peterson, Sarah; White, Scott; Blocher, Joe; El-Sayed, Mohammed; Hazlett, Heather C.
2007-03-01
Brain morphometric studies often incorporate comparative asymmetry analyses of left and right hemispheric brain structures. In this work we show evidence that common methods of user guided structural segmentation exhibit strong left-right asymmetric biases and thus fundamentally influence any left-right asymmetry analyses. We studied several structural segmentation methods with varying degrees of user interaction, from pure manual outlining to nearly fully automatic procedures. The methods were applied to MR images and their corresponding left-right mirrored images from an adult and a pediatric study. Several expert raters performed the segmentations of all structures. The asymmetric segmentation bias is assessed by comparing the left-right volumetric asymmetry in the original and mirrored datasets, as well as by testing each side's volumetric differences against a zero mean using standard t-tests. The structural segmentations of caudate, putamen, globus pallidus, amygdala and hippocampus showed a highly significant asymmetric bias using methods with considerable manual outlining or landmark placement. Only the lateral ventricle segmentation revealed no asymmetric bias due to the high degree of automation and a high intensity contrast on its boundary. Our segmentation methods have been adapted in that they are applied to only one of the hemispheres in an image and its left-right mirrored image. Our work suggests that existing studies of hemispheric asymmetry without similar precautions should be interpreted in a new, skeptical light. Evidence of an asymmetric segmentation bias is novel and unknown to the imaging community. This result seems less surprising to the visual perception community and its likely cause is differences in perception of oppositely curved 3D structures.
Lüddemann, Tobias; Egger, Jan
2016-01-01
Among all types of cancer, gynecological malignancies are the fourth most frequent type of cancer among women. In addition to chemotherapy and external beam radiation, brachytherapy is the standard procedure for the treatment of these malignancies. In the process of treatment planning, localization of the tumor as the target volume and of adjacent organs at risk by segmentation is crucial to accomplish an optimal radiation distribution to the tumor while simultaneously preserving healthy tissue. Segmentation is performed manually and represents a time-consuming task in daily clinical routine. This study focuses on the segmentation of the rectum/sigmoid colon as an organ-at-risk in gynecological brachytherapy. The proposed segmentation method uses an interactive, graph-based segmentation scheme with a user-defined template. The scheme creates a directed two-dimensional graph, followed by the minimal cost closed set computation on the graph, resulting in an outlining of the rectum. The graph's outline is dynamically adapted to the last calculated cut. Evaluation was performed by comparing manual segmentations of the rectum/sigmoid colon to results achieved with the proposed method. The comparison of the algorithmic to the manual result yielded a Dice similarity coefficient of 83.85±4.08%, compared with 83.97±8.08% for the comparison of two manual segmentations by the same physician. Utilizing the proposed methodology resulted in a median time of 128 s/dataset, compared to 300 s needed for pure manual segmentation. PMID:27403448
Ruofeng, Yin; Cohen, Jeremiah R; Buser, Zorica; Yoon, S Tim; Meisel, Hans-Joerg; Youssef, Jim A; Park, Jong-Beom; Wang, Jeffrey C; Brodke, Darrel S
2016-08-01
Retrospective study. Symptomatic scoliosis can be a source of severe pain and disability. When nonoperative treatments fail, spine fusion is considered an effective procedure in scoliosis management. The purpose of this study was to evaluate trends among patients with scoliosis undergoing posterior long segment fusion (PLSF) with and without recombinant human bone morphogenetic protein 2 (rhBMP-2). Patients within the orthopedic subset of the Medicare database undergoing PLSF from 2005 to 2011 were identified using the PearlDiver Patient Records Database. Both International Classification of Diseases, ninth edition diagnosis and procedure codes and Current Procedural Terminology codes were used. The year of procedure, age, sex, region, and rhBMP-2 use were recorded. In total, 1,265,591 patients with scoliosis were identified, with 29,787 PLSF surgeries between 2005 and 2011. The incidence of PLSF procedures increased gradually from 2005 to 2009, decreased in 2010 (p < 0.01), and grew again in 2011. Patients over age 84 years had the highest incidence of PLSF. The lowest incidence of the procedures was in the Northeast, 5.96 per 100,000 patients. Sex differences were observed, with a male-to-female ratio of 0.40 (p < 0.01). The use of rhBMP-2 for PLSF increased steadily from 2005 to 2009; the numbers dropped dramatically in 2010 and returned by 2011. In our study, patients with scoliosis demonstrated an average annual increase of 0.6575 in the incidence of PLSF treatments. There were significant differences in the incidence of the PLSF procedure and in patient demographics. Additionally, rhBMP-2 use changed significantly when stratified by sex, age, and region.
Lu, Chao; Chelikani, Sudhakar; Papademetris, Xenophon; Knisely, Jonathan P.; Milosevic, Michael F.; Chen, Zhe; Jaffray, David A.; Staib, Lawrence H.; Duncan, James S.
2011-01-01
External beam radiotherapy (EBRT) has become the preferred option for non-surgical treatment of prostate cancer and cervix cancer. In order to deliver higher doses to cancerous regions within these pelvic structures (i.e. prostate or cervix) while maintaining or lowering the doses to surrounding non-cancerous regions, it is critical to account for setup variation, organ motion, anatomical changes due to treatment and intra-fraction motion. In previous work, manual segmentation of the soft tissues is performed and then images are registered based on the manual segmentation. In this paper, we present an integrated automatic approach to multiple organ segmentation and nonrigid constrained registration, which can achieve these two aims simultaneously. The segmentation and registration steps are both formulated using a Bayesian framework, and they constrain each other using an iterative conditional model strategy. We also propose a new strategy to assess cumulative actual dose for this novel integrated algorithm, in order to both determine whether the intended treatment is being delivered and, potentially, whether or not a plan should be adjusted for future treatment fractions. Quantitative results show that the automatic segmentation produced results that have an accuracy comparable to manual segmentation, while the registration part significantly outperforms both rigid and non-rigid registration. Clinical application and evaluation of dose delivery show the superiority of the proposed method to the procedure currently used in clinical practice, i.e. manual segmentation followed by rigid registration. PMID:21646038
Padma, A; Sukanesh, R
2013-01-01
A computer software system is designed for the segmentation and the classification of benign versus malignant tumour slices in brain computed tomography (CT) images. This paper presents a method to find and select the dominant run-length and co-occurrence texture features of the region of interest (ROI) of the tumour region of each slice, segmented by fuzzy c-means clustering (FCM), and to evaluate the performance of support vector machine (SVM)-based classifiers in classifying benign and malignant tumour slices. Two hundred and six tumour-confirmed CT slices are considered in this study. A total of 17 texture features are extracted by a feature extraction procedure, and six features are selected using Principal Component Analysis (PCA). This study constructed the SVM-based classifier with the selected features and compared the segmentation results with the ground truth (target) labelled by an experienced radiologist. Quantitative analysis between ground truth and segmented tumour is presented in terms of segmentation accuracy, segmentation error and overlap similarity measures such as the Jaccard index. The classification performance of the SVM-based classifier with the same selected features is also evaluated using a 10-fold cross-validation method. The proposed system shows that some newly found texture features make an important contribution to classifying benign and malignant tumour slices efficiently and accurately with less computational time. The experimental results showed that the proposed system is able to achieve the highest segmentation and classification accuracy, as measured by the Jaccard index, sensitivity, and specificity.
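A minimal Python sketch of the feature-selection and classification pipeline outlined above: texture features reduced with PCA and classified with an SVM, evaluated by 10-fold cross-validation with scikit-learn. The feature matrix and labels are random stand-ins, and the 17-to-6 reduction only mirrors the counts quoted in the abstract.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(206, 17))              # 206 slices x 17 texture features (stand-in data)
y = rng.integers(0, 2, size=206)            # 0 = benign, 1 = malignant (stand-in labels)

clf = make_pipeline(StandardScaler(), PCA(n_components=6), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")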
Brain tumor segmentation based on local independent projection-based classification.
Huang, Meiyan; Yang, Wei; Wu, Yao; Jiang, Jun; Chen, Wufan; Feng, Qianjin
2014-10-01
Brain tumor segmentation is an important procedure for early tumor diagnosis and radiotherapy planning. Although numerous brain tumor segmentation methods have been presented, enhancing tumor segmentation methods is still challenging because brain tumor MRI images exhibit complex characteristics, such as high diversity in tumor appearance and ambiguous tumor boundaries. To address this problem, we propose a novel automatic tumor segmentation method for MRI images. This method treats tumor segmentation as a classification problem. Additionally, the local independent projection-based classification (LIPC) method is used to classify each voxel into different classes. A novel classification framework is derived by introducing the local independent projection into the classical classification model. Locality is important in the calculation of local independent projections for LIPC. Locality is also considered in determining whether local anchor embedding is more applicable in solving linear projection weights compared with other coding methods. Moreover, LIPC considers the data distribution of different classes by learning a softmax regression model, which can further improve classification performance. In this study, 80 brain tumor MRI images with ground truth data are used as training data and 40 images without ground truth data are used as testing data. The segmentation results of testing data are evaluated by an online evaluation tool. The average dice similarities of the proposed method for segmenting complete tumor, tumor core, and contrast-enhancing tumor on real patient data are 0.84, 0.685, and 0.585, respectively. These results are comparable to other state-of-the-art methods.
Sato, Masaaki; Murayama, Tomonori; Nakajima, Jun
2018-04-01
Thoracoscopic segmentectomy for the posterior basal segment (S10) and its variants (e.g., S9+10 and S10b+c combined subsegmentectomy) is one of the most challenging anatomical segmentectomies. Stapler-based segmentectomy is attractive because it simplifies the operation and prevents post-operative air leakage. However, this approach makes thoracoscopic S10 segmentectomy even trickier. The challenges arise mostly for the following three reasons: first, similar to other basal segments, "three-dimensional" stapling is needed to fold a cuboidal segment; second, the corresponding pulmonary artery does not directly face the interlobar fissure or the hilum, making identification of the target artery difficult; third, the anatomy of S10 and adjacent segments such as the superior (S6) and medial basal (S7) segments is variable. To overcome these challenges, this article summarizes the "bidirectional approach", which allows for solid confirmation of anatomy while avoiding separation of S6 and the basal segment. To assist this approach under a limited thoracoscopic view, we also show stapling techniques to fold the cuboidal segment with the aid of "standing stitches". Attention should also be paid to the anatomy of adjacent segments, particularly that of S7, which tends to be congested after stapling. The use of virtual-assisted lung mapping (VAL-MAP) is also recommended to demarcate resection lines because it flexibly allows for complex procedures such as combined subsegmentectomy (e.g., S10b+c), extended segmentectomy (e.g., S10+S9b), and non-anatomically extended segmentectomy.
Ohta, Hideki; Matsumoto, Yoshiyuki; Morishita, Yuichirou; Sakai, Tsubasa; Huang, George; Kida, Hirotaka; Takemitsu, Yoshiharu
2011-01-01
Background When spinal fusion is applied to degenerative lumbar spinal disease with instability, adjacent segment disorder will be an issue in the future. However, decompression alone could cause recurrence of spinal canal stenosis because of increased instability of the operated segments and lead to revision surgery. To cover the disadvantages of both procedures, we applied nonfusion stabilization with the Segmental Spinal Correction System (Ulrich Medical, Ulm, Germany) and decompression. Methods The surgical results of 52 patients (35 men and 17 women) with a minimum 2-year follow-up were analyzed: 10 patients with lumbar spinal canal stenosis, 15 with lumbar canal stenosis with disc herniation, 20 with degenerative spondylolisthesis, 6 with disc herniation, and 1 with lumbar discopathy. Results The Japanese Orthopaedic Association score was improved, from 14.4 ± 5.3 to 25.5 ± 2.8. The improvement rate was 76%. Range of motion of the operated segments was significantly decreased, from 9.6° ± 4.2° to 2.0° ± 1.8°. Only 1 patient had adjacent segment disease that required revision surgery. There was only 1 screw breakage, but the patient was asymptomatic. Conclusions Over a minimum 2-year follow-up, the results of nonfusion stabilization with the Segmental Spinal Correction System for unstable degenerative lumbar disease were good. It is necessary to follow up the cases with a focus on adjacent segment disorders in the future. PMID:25802671
Freehand three-dimensional ultrasound imaging of carotid artery using motion tracking technology.
Chung, Shao-Wen; Shih, Cho-Chiang; Huang, Chih-Chung
2017-02-01
Ultrasound imaging has been extensively used for determining the severity of carotid atherosclerotic stenosis. In particular, the morphological characterization of carotid plaques can be performed for risk stratification of patients. However, using 2D ultrasound imaging for detecting morphological changes in plaques has several limitations. Because the scan is performed on a single longitudinal cross-section, the selected 2D image can hardly represent the entire morphology and volume of the plaque and vessel lumen. In addition, because the precise positions of 2D ultrasound images depend heavily on the radiologist's experience, it is difficult in serial long-term exams of anti-atherosclerotic therapies to relocate the same corresponding planes using 2D B-mode images. This has led to the recent development of three-dimensional (3D) ultrasound imaging, which offers improved visualization and quantification of complex morphologies of carotid plaques. In the present study, a freehand 3D ultrasound imaging technique based on optical motion tracking technology is proposed. Unlike other optical tracking systems, the marker is a small rigid body that is attached to the ultrasound probe and is tracked by eight high-performance digital cameras. The probe positions in 3D space coordinates are then calibrated at spatial and temporal resolutions of 10 μm and 0.01 s, respectively. The image segmentation procedure involves Otsu's method and the active contour model and accurately detects the contours of the carotid arteries. The proposed imaging technique was verified using normal artery and atherosclerotic stenosis phantoms. Human experiments involving freehand scanning of the carotid artery of a volunteer were also performed. The results indicated that, compared with manual segmentation, the lowest percentage errors of the proposed segmentation procedure were 7.8% and 9.1% for the external and internal carotid arteries, respectively. Finally, the effect of hand shaking was calibrated using the optical tracking system for reconstructing a 3D image.
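A minimal Python sketch combining the two ingredients named above, Otsu thresholding and an active contour, using scikit-image on a synthetic "lumen" image. It only illustrates the sequence of steps on made-up data; the snake parameters and initialization are assumptions, not the authors' settings.

import numpy as np
from skimage.draw import disk
from skimage.filters import threshold_otsu, gaussian
from skimage.segmentation import active_contour

image = np.zeros((128, 128))
rr, cc = disk((64, 64), 30)
image[rr, cc] = 1.0                                   # bright circular "lumen"
image = gaussian(image, sigma=3) + 0.05 * np.random.default_rng(0).normal(size=image.shape)

mask = image > threshold_otsu(image)                  # coarse lumen estimate from Otsu's threshold

# Initialise a circular snake just outside the Otsu mask and let it relax onto the wall.
theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([64 + 45 * np.sin(theta), 64 + 45 * np.cos(theta)])
snake = active_contour(gaussian(image, sigma=2), init, alpha=0.015, beta=10, gamma=0.001)
print("contour points:", snake.shape, "Otsu lumen pixels:", int(mask.sum()))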
Image segmentation and 3D visualization for MRI mammography
NASA Astrophysics Data System (ADS)
Li, Lihua; Chu, Yong; Salem, Angela F.; Clark, Robert A.
2002-05-01
MRI mammography has a number of advantages, including the tomographic, and therefore three-dimensional (3-D), nature of the images. It allows the application of MRI mammography to breasts with dense tissue, postoperative scarring, and silicone implants. However, due to the vast quantity of images and the subtlety of differences between MR sequences, there is a need for reliable computer diagnosis to reduce the radiologist's workload. The purpose of this work was to develop automatic breast/tissue segmentation and visualization algorithms to aid physicians in detecting and observing abnormalities in the breast. Two segmentation algorithms were developed: one for breast segmentation, the other for glandular tissue segmentation. In breast segmentation, the MRI image is first segmented using an adaptive growing clustering method. Two tracing algorithms were then developed to refine the breast-air and chest-wall boundaries of the breast. The glandular tissue segmentation was performed using an adaptive thresholding method, in which the threshold value was spatially adaptive using a sliding window. The 3D visualization of the segmented 2D slices of MRI mammography was implemented in the IDL environment. The breast and glandular tissue rendering, slicing and animation were displayed.
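A minimal Python sketch of spatially adaptive thresholding with a sliding window, here via scikit-image's local threshold rather than the authors' implementation; the synthetic slice, window size and offset are illustrative assumptions.

import numpy as np
from skimage.filters import threshold_local

rng = np.random.default_rng(0)
slice_2d = rng.normal(0.3, 0.05, size=(256, 256))
slice_2d[80:160, 80:160] += 0.25          # brighter "glandular tissue" patch
slice_2d += np.linspace(0, 0.2, 256)      # slowly varying intensity bias across the image

# The threshold is recomputed inside each 51x51 window, so the slow global bias is ignored.
local_thresh = threshold_local(slice_2d, block_size=51, offset=-0.05)
tissue_mask = slice_2d > local_thresh
print("segmented tissue fraction:", round(float(tissue_mask.mean()), 3))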
Consensus for the Treatment of Varicose Vein with Radiofrequency Ablation
Joh, Jin Hyun; Kim, Woo-Shik; Jung, In Mok; Park, Ki-Hyuk; Lee, Taeseung; Kang, Jin Mo
2014-01-01
The objective of this paper is to introduce the schematic protocol of radiofrequency (RF) ablation for the treatment of varicose veins. Indication: anatomic or pathophysiologic indications include a venous diameter of 2–20 mm, a reflux time ≥0.5 seconds, and a distance from the skin ≥5 mm or a subfascial location. Access: it is recommended to access at or above the knee joint for the great saphenous vein and above the mid-calf for the small saphenous vein. Catheter placement: the catheter tip should be placed 2.0 cm inferior to the saphenofemoral or saphenopopliteal junction. Endovenous heat-induced thrombosis ≥class III should be treated with low-molecular-weight heparin. Tumescent solution: the composition of the solution can vary (e.g., 2% lidocaine 20 mL + 500 mL normal saline + bicarbonate 2.5 mL, with/without epinephrine). Infiltration can be done from each direction. Ablation: two ablation cycles are recommended for the first proximal segment of the saphenous vein and for segments with incompetent perforators. The other segments should be ablated once. During RF energy delivery, it is recommended to apply external compression. Concomitant procedure: it is recommended to perform ambulatory phlebectomy simultaneously. For sclerotherapy, it is recommended to defer it for at least 2 weeks. Post-procedural management: post-procedural ambulation is encouraged to reduce thrombotic complications. Compression stockings should be applied for at least 7 days. Minor daily activity is not limited, but strenuous activities should be avoided for 2 weeks. It is suggested to take showers after 24 hours and tub baths, swimming, or soaking in water after 2 weeks. PMID:26217628
Sodium Heat Pipe Module Processing For the SAFE-100 Reactor Concept
NASA Technical Reports Server (NTRS)
Martin, James; Salvail, Pat
2003-01-01
To support development and hardware-based testing of various space reactor concepts, the Early Flight Fission-Test Facility (EFF-TF) team established a specialized glove box unit with ancillary systems to handle/process alkali metals. Recently, these systems have been commissioned with sodium supporting the fill of stainless steel heat pipe modules for use with a 100 kW thermal heat pipe reactor design. As part of this effort, procedures were developed and refined to govern each segment of the process covering: fill, leak check, vacuum processing, weld closeout, and final "wet in". A series of 316 stainless steel modules, used as precursors to the actual 321 stainless steel modules, were filled with 35 +/- 1 grams of sodium using a known volume canister to control the dispensed mass. Each module was leak checked to less than 10(exp -10) std cc/sec helium and vacuum conditioned at 250 C to assist in the removal of trapped gases. A welding procedure was developed to close out the fill stem preventing external gases from entering the evacuated module. Finally the completed modules were vacuum fired at 750 C allowing the sodium to fully wet the internal surface and wick structure of the heat pipe module.
Sodium Heat Pipe Module Processing For the SAFE-100 Reactor Concept
NASA Astrophysics Data System (ADS)
Martin, James; Salvail, Pat
2004-02-01
To support development and hardware-based testing of various space reactor concepts, the Early Flight Fission-Test Facility (EFF-TF) team established a specialized glove box unit with ancillary systems to handle/process alkali metals. Recently, these systems have been commissioned with sodium supporting the fill of stainless steel heat pipe modules for use with a 100 kW thermal heat pipe reactor design. As part of this effort, procedures were developed and refined to govern each segment of the process covering: fill, leak check, vacuum processing, weld closeout, and final ``wet in''. A series of 316 stainless steel modules, used as precursors to the actual 321 stainless steel modules, were filled with 35 +/-1 grams of sodium using a known volume canister to control the dispensed mass. Each module was leak checked to <10-10 std cc/sec helium and vacuum conditioned at 250 °C to assist in the removal of trapped gases. A welding procedure was developed to close out the fill stem preventing external gases from entering the evacuated module. Finally the completed modules were vacuum fired at 750 °C allowing the sodium to fully wet the internal surface and wick structure of the heat pipe module.
Behavioral and biological interactions with small groups in confined microsocieties
NASA Technical Reports Server (NTRS)
Brady, Joseph V.
1986-01-01
Research on small group performance in confined microsocieties was focused upon the development of principles and procedures relevant to the selection and training of space mission personnel, upon the investigation of behavioral programming, preventive monitoring and corrective procedures to enhance space mission performance effectiveness, and upon the evaluation of behavioral and physiological countermeasures to the potentially disruptive effects of unfamiliar and stressful environments. An experimental microsociety environment was designed and developed for continuous residence of human volunteers over extended time periods. Studies were then undertaken to analyze experimentally: (1) conditions that sustain group cohesion and productivity and that prevent social fragmentation and performance deterioration, (2) the motivational effects of performance requirements, and (3) behavioral and physiological effects resulting from changes in group size and composition. The results show that both individual and group productivity can be enhanced under such conditions by the direct application of contingency management principles to designated high-value tasks. Similarly, group cohesiveness can be promoted and individual social isolation and/or alienation prevented by the application of contingency management principles to social interaction segments of the program.
Design of an x-ray telescope optics for XEUS
NASA Astrophysics Data System (ADS)
Graue, Roland; Kampf, Dirk; Wallace, Kotska; Lumb, David; Bavdaz, Marcos; Freyberg, Michael
2017-11-01
The X-ray telescope concept for XEUS is based on an innovative high-performance, lightweight Silicon Pore Optics technology. The XEUS telescope is segmented into 16 radial, thermostable petals providing the rigid optical bench structure of the stand-alone X-Ray High Precision Tandem Optics. A fully representative Form Fit Function (FFF) Model of one petal is currently under development to demonstrate the outstanding lightweight telescope capabilities with high optically effective area. Starting from the envisaged system performance, the related tolerance budgets were derived. These petals are made from ceramics, i.e. CeSiC. The structural and thermal performance of the petal shall be reported. The stepwise alignment and integration procedure on petal level shall be described. The functional performance and environmental test verification plan of the Form Fit Function Model and the test setups are described in this paper. In parallel with the ongoing development activities, the programmatic and technical issues with respect to the FM telescope MAIT, currently comprising 1488 Tandem Optics, are under investigation. Remote-controlled, robot-supported assembly, simultaneous active alignment and verification testing, and decentralised, time-effective integration procedures shall be illustrated.
NASA Astrophysics Data System (ADS)
Li, Mengmeng; Bijker, Wietske; Stein, Alfred
2015-04-01
Two main challenges are faced when classifying urban land cover from very high resolution satellite images: obtaining an optimal image segmentation and distinguishing buildings from other man-made objects. For optimal segmentation, this work proposes a hierarchical representation of an image by means of a Binary Partition Tree (BPT) and an unsupervised evaluation of image segmentations by energy minimization. For building extraction, we apply fuzzy sets to create a fuzzy landscape of shadows which in turn involves a two-step procedure. The first step is a preliminarily image classification at a fine segmentation level to generate vegetation and shadow information. The second step models the directional relationship between building and shadow objects to extract building information at the optimal segmentation level. We conducted the experiments on two datasets of Pléiades images from Wuhan City, China. To demonstrate its performance, the proposed classification is compared at the optimal segmentation level with Maximum Likelihood Classification and Support Vector Machine classification. The results show that the proposed classification produced the highest overall accuracies and kappa coefficients, and the smallest over-classification and under-classification geometric errors. We conclude first that integrating BPT with energy minimization offers an effective means for image segmentation. Second, we conclude that the directional relationship between building and shadow objects represented by a fuzzy landscape is important for building extraction.
NASA Astrophysics Data System (ADS)
Bassier, M.; Bonduel, M.; Van Genechten, B.; Vergauwen, M.
2017-11-01
Point cloud segmentation is a crucial step in scene understanding and interpretation. The goal is to decompose the initial data into sets of workable clusters with similar properties. Additionally, it is a key aspect in the automated procedure from point cloud data to BIM. Current approaches typically only segment a single type of primitive such as planes or cylinders. Also, current algorithms suffer from oversegmenting the data and are often sensor or scene dependent. In this work, a method is presented to automatically segment large unstructured point clouds of buildings. More specifically, the segmentation is formulated as a graph optimisation problem. First, the data is oversegmented with a greedy octree-based region growing method. The growing is conditioned on the segmentation of planes as well as smooth surfaces. Next, the candidate clusters are represented by a Conditional Random Field after which the most likely configuration of candidate clusters is computed given a set of local and contextual features. The experiments prove that the used method is a fast and reliable framework for unstructured point cloud segmentation. Processing speeds up to 40,000 points per second are recorded for the region growing. Additionally, the recall and precision of the graph clustering is approximately 80%. Overall, nearly 22% of oversegmentation is reduced by clustering the data. These clusters will be classified and used as a basis for the reconstruction of BIM models.
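A minimal Python sketch of greedy, seed-based region growing over a point cloud conditioned on local planarity, as a stand-in for the octree-based oversegmentation described above (the Conditional Random Field clustering stage is not shown). Neighbourhoods come from a k-d tree; the synthetic cloud, neighbourhood sizes and normal-alignment threshold are assumptions.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(0, 5, 2000), rng.uniform(0, 5, 2000), rng.normal(0, 0.01, 2000)])
wall  = np.column_stack([rng.uniform(0, 5, 2000), rng.normal(0, 0.01, 2000), rng.uniform(0, 3, 2000)])
cloud = np.vstack([floor, wall])

tree = cKDTree(cloud)
labels = np.full(len(cloud), -1)

def normal_of(idx):
    """Normal of the best-fit plane through a point's 20 nearest neighbours."""
    _, nn = tree.query(cloud[idx], k=20)
    pts = cloud[nn] - cloud[nn].mean(axis=0)
    return np.linalg.svd(pts, full_matrices=False)[2][-1]

region = 0
for seed in range(len(cloud)):
    if labels[seed] != -1:
        continue
    seed_normal, frontier = normal_of(seed), [seed]
    labels[seed] = region
    while frontier:                      # grow while neighbours share the seed's plane orientation
        _, nn = tree.query(cloud[frontier.pop()], k=10)
        for j in nn:
            if labels[j] == -1 and abs(np.dot(normal_of(j), seed_normal)) > 0.95:
                labels[j] = region
                frontier.append(j)
    region += 1

print("clusters found:", region)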
Hu, Yu-Chi J; Grossberg, Michael D; Mageras, Gikas S
2008-01-01
Planning radiotherapy and surgical procedures usually requires onerous manual segmentation of anatomical structures from medical images. In this paper we present a semi-automatic and accurate segmentation method to dramatically reduce the time and effort required of expert users. This is accomplished by giving a user an intuitive graphical interface to indicate samples of target and non-target tissue by loosely drawing a few brush strokes on the image. We use these brush strokes to provide the statistical input for a Conditional Random Field (CRF) based segmentation. Since we extract purely statistical information from the user input, we eliminate the need for assumptions about boundary contrast previously used by many other methods. A new feature of our method is that the statistics on one image can be reused on related images without registration. To demonstrate this, we show that boundary statistics provided on a few 2D slices of volumetric medical data can be propagated through the entire 3D stack of images without using the geometric correspondence between images. In addition, the image segmentation from the CRF can be formulated as a minimum s-t graph cut problem which has a solution that is both globally optimal and fast. The combination of a fast segmentation and minimal user input that is reusable makes this a powerful technique for the segmentation of medical images.
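A minimal Python sketch of the minimum s-t cut formulation mentioned above: unary costs (e.g., from brush-stroke statistics) become terminal edge capacities, a pairwise smoothness term becomes neighbour edge capacities, and the cut labels the pixels. It uses networkx on a tiny 1-D "image"; the probabilities and smoothness weight are made up for illustration.

import networkx as nx

# Probability that each of six pixels is foreground (e.g., from brush-stroke statistics).
p_fg = [0.9, 0.8, 0.7, 0.2, 0.1, 0.15]
smoothness = 0.5                       # pairwise penalty for cutting between neighbours

G = nx.DiGraph()
for i, p in enumerate(p_fg):
    G.add_edge("src", i, capacity=p)           # cost of labelling pixel i background
    G.add_edge(i, "sink", capacity=1.0 - p)    # cost of labelling pixel i foreground
for i in range(len(p_fg) - 1):                 # neighbouring pixels prefer the same label
    G.add_edge(i, i + 1, capacity=smoothness)
    G.add_edge(i + 1, i, capacity=smoothness)

cut_value, (fg_side, bg_side) = nx.minimum_cut(G, "src", "sink")
foreground = sorted(n for n in fg_side if n != "src")
print("cut cost:", round(cut_value, 2), "foreground pixels:", foreground)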
Antunes, Sofia; Esposito, Antonio; Palmisano, Anna; Colantoni, Caterina; Cerutti, Sergio; Rizzo, Giovanna
2016-05-01
Extraction of the cardiac surfaces of interest from multi-detector computed tomographic (MDCT) data is a pre-requisite step for cardiac analysis, as well as for image guidance procedures. Most of the existing methods need manual corrections, which is time-consuming. We present a fully automatic segmentation technique for the extraction of the right ventricle, left ventricular endocardium and epicardium from MDCT images. The method consists in a 3D level set surface evolution approach coupled to a new stopping function based on a multiscale directional second derivative Gaussian filter, which is able to stop propagation precisely on the real boundary of the structures of interest. We validated the segmentation method on 18 MDCT volumes from healthy and pathologic subjects using manual segmentation performed by a team of expert radiologists as gold standard. Segmentation errors were assessed for each structure resulting in a surface-to-surface mean error below 0.5 mm and a percentage of surface distance with errors less than 1 mm above 80%. Moreover, in comparison to other segmentation approaches, already proposed in previous work, our method presented an improved accuracy (with surface distance errors less than 1 mm increased of 8-20% for all structures). The obtained results suggest that our approach is accurate and effective for the segmentation of ventricular cavities and myocardium from MDCT images.
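A minimal Python sketch of a level-set-style surface evolution with an edge-based stopping function, using scikit-image's morphological geodesic active contour as a stand-in for the authors' 3D level set with a multiscale directional second-derivative filter; the 2D image and all parameters are synthetic assumptions.

import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient, disk_level_set)

rng = np.random.default_rng(0)
image = rng.normal(0.0, 0.05, size=(96, 96))
yy, xx = np.mgrid[:96, :96]
image[(yy - 48) ** 2 + (xx - 48) ** 2 < 25 ** 2] += 1.0     # bright "ventricular cavity"

# The stopping function is small on edges, so the evolving front halts at the boundary.
gimage = inverse_gaussian_gradient(image, alpha=100, sigma=2)
init = disk_level_set(image.shape, center=(48, 48), radius=40)

segmentation = morphological_geodesic_active_contour(gimage, num_iter=150,
                                                     init_level_set=init, balloon=-1)
print("segmented area (pixels):", int(segmentation.sum()))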
Meckelin 3 Is Necessary for Photoreceptor Outer Segment Development in Rat Meckel Syndrome
Tiwari, Sarika; Hudson, Scott; Gattone, Vincent H.; Miller, Caroline; Chernoff, Ellen A. G.; Belecky-Adams, Teri L.
2013-01-01
Ciliopathies lead to multiorgan pathologies that include renal cysts, deafness, obesity and retinal degeneration. Retinal photoreceptors have connecting cilia joining the inner and outer segment that are responsible for transport of molecules to develop and maintain the outer segment process. The present study evaluated meckelin (MKS3) expression during outer segment genesis and determined the consequences of mutant meckelin on photoreceptor development and survival in the Wistar polycystic kidney disease Wpk/Wpk rat using immunohistochemistry, analysis of cell death and electron microscopy. MKS3 was ubiquitously expressed throughout the retina at postnatal day 10 (P10) and P21. However, in the mature retina, MKS3 expression was restricted to photoreceptors and the retinal ganglion cell layer. At P10, both the wild type and homozygous Wpk mutant retina had all retinal cell types. In contrast, by P21, cells expressing rod- and cone-specific markers were fewer in number and expression of opsins appeared to be abnormally localized to the cell body. Cell death analyses were consistent with the disappearance of photoreceptor-specific markers and showed that the cells were undergoing caspase-dependent cell death. By electron microscopy, P10 photoreceptors showed rudimentary outer segments with an axoneme, but did not develop outer segment discs that were clearly present in the wild type counterpart. At P21, the mutant outer segments appeared much the same as the P10 mutant outer segments with only a short axoneme, while the wild-type controls had developed outer segments with many well-organized discs. We conclude that MKS3 is not important for formation of connecting cilium and rudimentary outer segments, but is critical for the maturation of outer segment processes. PMID:23516626
49 CFR 195.230 - Welds: Repair or removal of defects.
Code of Federal Regulations, 2012 CFR
2012-10-01
... adversely affect the quality of the weld repair. After repair, the segment of the weld that was repaired... welding procedure used to make the original weld are met upon completion of the final weld repair. [Amdt...
49 CFR 195.230 - Welds: Repair or removal of defects.
Code of Federal Regulations, 2014 CFR
2014-10-01
... adversely affect the quality of the weld repair. After repair, the segment of the weld that was repaired... welding procedure used to make the original weld are met upon completion of the final weld repair. [Amdt...
49 CFR 195.230 - Welds: Repair or removal of defects.
Code of Federal Regulations, 2011 CFR
2011-10-01
... adversely affect the quality of the weld repair. After repair, the segment of the weld that was repaired... welding procedure used to make the original weld are met upon completion of the final weld repair. [Amdt...
49 CFR 195.230 - Welds: Repair or removal of defects.
Code of Federal Regulations, 2013 CFR
2013-10-01
... adversely affect the quality of the weld repair. After repair, the segment of the weld that was repaired... welding procedure used to make the original weld are met upon completion of the final weld repair. [Amdt...
49 CFR 195.230 - Welds: Repair or removal of defects.
Code of Federal Regulations, 2010 CFR
2010-10-01
... adversely affect the quality of the weld repair. After repair, the segment of the weld that was repaired... welding procedure used to make the original weld are met upon completion of the final weld repair. [Amdt...
Segmentation, modeling and classification of the compact objects in a pile
NASA Technical Reports Server (NTRS)
Gupta, Alok; Funka-Lea, Gareth; Wohn, Kwangyoen
1990-01-01
The problem of interpreting dense range images obtained from the scene of a heap of man-made objects is discussed. A range image interpretation system consisting of segmentation, modeling, verification, and classification procedures is described. First, the range image is segmented into regions and reasoning is done about the physical support of these regions. Second, for each region several possible three-dimensional interpretations are made based on various scenarios of the object's physical support. Finally, each interpretation is tested against the data for its consistency. The superquadric model is selected as the three-dimensional shape descriptor, augmented with tapering deformations along the major axis. Experimental results obtained from some complex range images of mail pieces are reported to demonstrate the soundness and the robustness of our approach.
Classification of microscopy images of Langerhans islets
NASA Astrophysics Data System (ADS)
Švihlík, Jan; Kybic, Jan; Habart, David; Berková, Zuzana; Girman, Peter; Kříž, Jan; Zacharovová, Klára
2014-03-01
Evaluation of images of Langerhans islets is a crucial procedure for planning an islet transplantation, which is a promising diabetes treatment. This paper deals with segmentation of microscopy images of Langerhans islets and evaluation of islet parameters such as area, diameter, or volume (IE). For all the available images, the ground truth and the islet parameters were independently evaluated by four medical experts. We use a pixelwise linear classifier (perceptron algorithm) and SVM (support vector machine) for image segmentation. The volume is estimated based on circle or ellipse fitting to individual islets. The segmentations were compared with the corresponding ground truth. Quantitative islet parameters were also evaluated and compared with parameters given by medical experts. We can conclude that accuracy of the presented fully automatic algorithm is fully comparable with medical experts.
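A minimal Python sketch of the volume step described above: each segmented islet is fitted with an ellipse via region properties, its volume is estimated, and the result is converted to islet equivalents (IE), taking one IE as the volume of a 150-um-diameter sphere. The binary mask, pixel size and the exact IE convention used by the authors are assumptions.

import numpy as np
from skimage.draw import ellipse
from skimage.measure import label, regionprops

mask = np.zeros((300, 300), dtype=bool)
mask[ellipse(80, 90, 25, 40)] = True          # hypothetical islet 1
mask[ellipse(200, 180, 15, 15)] = True        # hypothetical islet 2

um_per_px = 2.0                                # assumed pixel size
ie_volume = (4.0 / 3.0) * np.pi * 75.0 ** 3    # volume of a 150-um-diameter sphere

total_ie = 0.0
for islet in regionprops(label(mask)):
    a = 0.5 * islet.major_axis_length * um_per_px   # semi-axes of the fitted ellipse, in um
    b = 0.5 * islet.minor_axis_length * um_per_px
    volume = (4.0 / 3.0) * np.pi * a * b * b        # prolate spheroid approximation
    total_ie += volume / ie_volume
    print(f"islet: diameter ~{2*b:.0f}-{2*a:.0f} um, volume {volume:,.0f} um^3")

print(f"total: {total_ie:.2f} IE")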
Tiley, J S; Viswanathan, G B; Shiveley, A; Tschopp, M; Srinivasan, R; Banerjee, R; Fraser, H L
2010-08-01
Precipitates of the ordered L1(2) gamma' phase (dispersed in the face-centered cubic or FCC gamma matrix) were imaged in Rene 88 DT, a commercial multicomponent Ni-based superalloy, using energy-filtered transmission electron microscopy (EFTEM). Imaging was performed using the Cr, Co, Ni, Ti and Al elemental L-absorption edges in the energy loss spectrum. Manual and automated segmentation procedures were utilized for identification of precipitate boundaries and measurement of precipitate sizes. The automated region growing technique for precipitate identification in images was determined to measure precipitate diameters accurately. In addition, the region growing technique provided a repeatable method for optimizing segmentation techniques for varying EFTEM conditions.
A software tool for automatic classification and segmentation of 2D/3D medical images
NASA Astrophysics Data System (ADS)
Strzelecki, Michal; Szczypinski, Piotr; Materka, Andrzej; Klepaczko, Artur
2013-02-01
Modern medical diagnosis utilizes techniques for the visualization of human internal organs (CT, MRI) or of their metabolism (PET). However, evaluation of the acquired images by human experts is usually subjective and qualitative only. Quantitative analysis of MR data, including tissue classification and segmentation, is necessary to perform e.g. attenuation compensation, motion detection, and correction of the partial volume effect in PET images acquired with PET/MR scanners. This article briefly presents the MaZda software package, which supports 2D and 3D medical image analysis aimed at quantification of image texture. MaZda implements procedures for evaluation, selection and extraction of highly discriminative texture attributes combined with various classification, visualization and segmentation tools. Examples of MaZda application in medical studies are also provided.
Speech segmentation in aphasia
Peñaloza, Claudia; Benetello, Annalisa; Tuomiranta, Leena; Heikius, Ida-Maria; Järvinen, Sonja; Majos, Maria Carmen; Cardona, Pedro; Juncadella, Montserrat; Laine, Matti; Martin, Nadine; Rodríguez-Fornells, Antoni
2017-01-01
Background Speech segmentation is one of the initial and mandatory phases of language learning. Although some people with aphasia have shown a preserved ability to learn novel words, their speech segmentation abilities have not been explored. Aims We examined the ability of individuals with chronic aphasia to segment words from running speech via statistical learning. We also explored the relationships between speech segmentation and aphasia severity, and short-term memory capacity. We further examined the role of lesion location in speech segmentation and short-term memory performance. Methods & Procedures The experimental task was first validated with a group of young adults (n = 120). Participants with chronic aphasia (n = 14) were exposed to an artificial language and were evaluated in their ability to segment words using a speech segmentation test. Their performance was contrasted against chance level and compared to that of a group of elderly matched controls (n = 14) using group and case-by-case analyses. Outcomes & Results As a group, participants with aphasia were significantly above chance level in their ability to segment words from the novel language and did not significantly differ from the group of elderly controls. Speech segmentation ability in the aphasic participants was not associated with aphasia severity although it significantly correlated with word pointing span, a measure of verbal short-term memory. Case-by-case analyses identified four individuals with aphasia who performed above chance level on the speech segmentation task, all with predominantly posterior lesions and mild fluent aphasia. Their short-term memory capacity was also better preserved than in the rest of the group. Conclusions Our findings indicate that speech segmentation via statistical learning can remain functional in people with chronic aphasia and suggest that this initial language learning mechanism is associated with the functionality of the verbal short-term memory system and the integrity of the left inferior frontal region. PMID:28824218
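A minimal Python sketch of the statistical-learning principle behind the task described above: compute transitional probabilities between adjacent syllables of a continuous stream and place word boundaries where the probability dips. The artificial trisyllabic "words" below are made up for illustration and are not the stimuli used in the study.

from collections import Counter
import random

words = ["tupiro", "golabu", "bidaku", "padoti"]            # hypothetical trisyllabic words
syllabify = lambda w: [w[i:i + 2] for i in range(0, len(w), 2)]
stream = [s for w in random.Random(0).choices(words, k=300) for s in syllabify(w)]

pair_counts, syll_counts = Counter(zip(stream, stream[1:])), Counter(stream[:-1])
tp = {pair: n / syll_counts[pair[0]] for pair, n in pair_counts.items()}

# Boundaries fall where the transitional probability dips (~1.0 within words,
# ~1/4 across word boundaries in this toy language).
segmented, current = [], stream[0]
for a, b in zip(stream, stream[1:]):
    if tp[(a, b)] < 0.5:
        segmented.append(current)
        current = b
    else:
        current += b
print(segmented[:8])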
A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.
Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle
2016-03-08
On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and the target registration error (TRE), measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (< 1 ms) with satisfactory accuracy (Dice = 0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of different methods for optimal segmentation with the on-board MR-IGRT system.
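A minimal Python sketch of the two evaluation measures used above: the Dice coefficient between an automatic and a manual mask, and the target registration error as the distance between the two centroids. The masks here are tiny synthetic arrays, not data from the study.

import numpy as np

manual = np.zeros((50, 50), dtype=bool)
auto   = np.zeros((50, 50), dtype=bool)
manual[10:30, 10:30] = True                       # hypothetical manual ROI
auto[12:32, 11:31]   = True                       # hypothetical automatic ROI

dice = 2.0 * np.logical_and(manual, auto).sum() / (manual.sum() + auto.sum())

def centroid(mask):
    return np.array(np.nonzero(mask)).mean(axis=1)

tre = np.linalg.norm(centroid(manual) - centroid(auto))  # in pixels
print(f"Dice = {dice:.3f}, TRE = {tre:.2f} px")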