Sample records for xmr imaging-versus computed

  1. A system for the registration of arthroscopic images to magnetic resonance images of the knee: for improved virtual knee arthroscopy

    NASA Astrophysics Data System (ADS)

    Hu, Chengliang; Amati, Giancarlo; Gullick, Nicola; Oakley, Stephen; Hurmusiadis, Vassilios; Schaeffter, Tobias; Penney, Graeme; Rhode, Kawal

    2009-02-01

    Knee arthroscopy is a minimally invasive procedure that is routinely carried out for the diagnosis and treatment of pathologies of the knee joint. A high level of expertise is required to carry out this procedure and therefore the clinical training is extensive. There are several reasons for this that include the small field of view seen by the arthroscope and the high degree of distortion in the video images. Several virtual arthroscopy simulators have been proposed to augment the learning process. One of the limitations of these simulators is the generic models that are used. We propose to develop a new virtual arthroscopy simulator that will allow the use of pathology-specific models with an increased level of photo-realism. In order to generate these models we propose to use registered magnetic resonance images (MRI) and arthroscopic video images collected from patients with a variety of knee pathologies. We present a method to perform this registration based on the use of a combined X-ray and MR imaging system (XMR). In order to validate our technique we carried out MR imaging and arthroscopy of a custom-made acrylic phantom in the XMR environment. The registration between the two modalities was computed using a combination of XMR and camera calibration, and optical tracking. Both two-dimensional (2D) and three-dimensional (3D) registration errors were computed and shown to be approximately 0.8 and 3 mm, respectively. Further to this, we qualitatively tested our approach using a more realistic plastic knee model that is used for the arthroscopy training.
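
    As an illustration of how a point-based 3D registration error of the kind reported above (around 3 mm) can be quantified, the sketch below computes a root-mean-square distance between corresponding landmarks after applying a rigid transform. This is a generic Python example; the coordinates, function names and the identity transform are hypothetical and are not taken from the paper.

      import numpy as np

      def rigid_transform(points, R, t):
          # Apply a rigid transform (3x3 rotation R, 3-vector t) to Nx3 points.
          return points @ R.T + t

      def registration_error(fixed_pts, moving_pts, R, t):
          # Root-mean-square 3D distance between fixed landmarks and the
          # registered moving landmarks (a simple target registration error).
          mapped = rigid_transform(moving_pts, R, t)
          return np.sqrt(np.mean(np.sum((fixed_pts - mapped) ** 2, axis=1)))

      # Hypothetical landmark coordinates (mm) in MR space and in the
      # tracked-camera space after registration.
      mr_pts = np.array([[10.0, 5.0, 2.0], [20.0, 7.0, 3.0], [15.0, 12.0, 6.0]])
      cam_pts = np.array([[10.4, 5.1, 2.5], [19.7, 7.3, 2.6], [15.2, 11.6, 6.4]])
      print(registration_error(mr_pts, cam_pts, np.eye(3), np.zeros(3)))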

  2. Magnetic resonance imaging versus computed tomography to plan hemilaminectomies in chondrodystrophic dogs with intervertebral disc extrusion.

    PubMed

    Noyes, Julie A; Thomovsky, Stephanie A; Chen, Annie V; Owen, Tina J; Fransson, Boel A; Carbonneau, Kira J; Matthew, Susan M

    2017-10-01

    To determine the influence of preoperative computed tomography (CT) versus magnetic resonance (MR) imaging on hemilaminectomies planned to treat thoracolumbar (TL) intervertebral disc (IVD) extrusions in chondrodystrophic dogs. Prospective clinical study. Forty chondrodystrophic dogs with TL IVD extrusion and preoperative CT and MR studies. MR and CT images were randomized and reviewed by 4 observers masked to each dog's identity and the corresponding imaging studies. Observers planned the location along the spine, side, and extent (number of articular facets to be removed) based on individual reviews of the CT and MR studies. Intra-observer agreement was determined between the overall surgical plan, location, side, and size of the hemilaminectomy planned on CT versus MR of the same dog. Similar surgical plans were developed based on MR versus CT in 43.5%-66.6% of dogs, depending on the observer. Intra-observer agreement in location, side, and size of the planned hemilaminectomy based on CT versus MR ranged from 48.7% to 66.6%, 87% to 92%, and 51.2% to 71.7% of dogs, respectively. Observers tended to plan larger laminectomy defects based on MR versus CT of the same dog. Findings from this study indicated considerable differences in hemilaminectomies planned on preoperative MR versus CT imaging. Surgical location and size varied the most; the side of planned hemilaminectomies was most consistent between imaging modalities. © 2017 The American College of Veterinary Surgeons.

  3. Imaging of the hip joint. Computed tomography versus magnetic resonance imaging

    NASA Technical Reports Server (NTRS)

    Lang, P.; Genant, H. K.; Jergesen, H. E.; Murray, W. R.

    1992-01-01

    The authors reviewed the applications and limitations of computed tomography (CT) and magnetic resonance (MR) imaging in the assessment of the most common hip disorders. Magnetic resonance imaging is the most sensitive technique in detecting osteonecrosis of the femoral head. Magnetic resonance reflects the histologic changes associated with osteonecrosis very well, which may ultimately help to improve staging. Computed tomography can more accurately identify subchondral fractures than MR imaging and thus remains important for staging. In congenital dysplasia of the hip, the position of the nonossified femoral head in children less than six months of age can only be inferred by indirect signs on CT. Magnetic resonance imaging demonstrates the cartilaginous femoral head directly without ionizing radiation. Computed tomography remains the imaging modality of choice for evaluating fractures of the hip joint. In some patients, MR imaging demonstrates the fracture even when it is not apparent on radiography. In neoplasm, CT provides better assessment of calcification, ossification, and periosteal reaction than MR imaging. Magnetic resonance imaging, however, represents the most accurate imaging modality for evaluating intramedullary and soft-tissue extent of the tumor and identifying involvement of neurovascular bundles. Magnetic resonance imaging can also be used to monitor response to chemotherapy. In osteoarthrosis and rheumatoid arthritis of the hip, both CT and MR provide more detailed assessment of the severity of disease than conventional radiography because of their tomographic nature. Magnetic resonance imaging is unique in evaluating cartilage degeneration and loss, and in demonstrating soft-tissue alterations such as inflammatory synovial proliferation.

  4. Automated, computer-guided PASI measurements by digital image analysis versus conventional physicians' PASI calculations: study protocol for a comparative, single-centre, observational study.

    PubMed

    Fink, Christine; Uhlmann, Lorenz; Klose, Christina; Haenssle, Holger A

    2018-05-17

    Reliable and accurate assessment of severity in psoriasis is very important in order to meet indication criteria for initiation of systemic treatment or to evaluate treatment efficacy. The most acknowledged tool for measuring the extent of psoriatic skin changes is the Psoriasis Area and Severity Index (PASI). However, the calculation of the PASI can be tedious and subjective, and high intraobserver and interobserver variability is an important concern. Therefore, there is a great need for a standardised and objective method that guarantees a reproducible PASI calculation. Within this study we will investigate the precision and reproducibility of automated, computer-guided PASI measurements in comparison with trained physicians' assessments to address these limitations. Non-interventional analyses of PASI calculations by either physicians in a prospective versus retrospective setting or an automated computer-guided algorithm in 120 patients with plaque psoriasis. All retrospective PASI calculations by physicians or by the computer algorithm are based on total body digital images. The primary objective of this study is comparison of automated computer-guided PASI measurements by means of digital image analysis versus conventional, prospective or retrospective physicians' PASI assessments. Secondary endpoints include (1) the assessment of physicians' interobserver variance in PASI calculations, (2) the assessment of physicians' intraobserver variance in PASI assessments of the same patients' images after a time interval of at least 4 weeks, (3) the assessment of the deviation between physicians' prospective versus retrospective PASI calculations, and (4) the reproducibility of automated computer-guided PASI measurements by assessment of two sets of total body digital images of the same patients taken at one time point. Ethical approval was provided by the Ethics Committee of the Medical Faculty of the University of Heidelberg (ethics approval number S-379/2016). DRKS00011818.
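
    For context, the conventional PASI combines, for four body regions, an area score (0-6) with severity scores for erythema, induration and desquamation (each 0-4), weighted by the fraction of the body surface that each region represents. The Python sketch below implements only this standard manual calculation; it is not the automated image-analysis algorithm under study, and the example scores are hypothetical.

      # Body-region weights used in the conventional PASI calculation.
      REGION_WEIGHTS = {"head": 0.1, "upper_limbs": 0.2, "trunk": 0.3, "lower_limbs": 0.4}

      def pasi(scores):
          # scores[region] = (erythema, induration, desquamation, area_score),
          # with severity items graded 0-4 and the area score graded 0-6.
          total = 0.0
          for region, weight in REGION_WEIGHTS.items():
              erythema, induration, desquamation, area = scores[region]
              total += weight * (erythema + induration + desquamation) * area
          return total  # ranges from 0 to 72

      example = {
          "head": (2, 1, 1, 2),
          "upper_limbs": (2, 2, 1, 3),
          "trunk": (3, 2, 2, 4),
          "lower_limbs": (3, 3, 2, 4),
      }
      print(pasi(example))  # 25.0 for this hypothetical patient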

  5. Comparison of 640-Slice Multidetector Computed Tomography Versus 32-Slice MDCT for Imaging of the Osteo-odonto-keratoprosthesis Lamina.

    PubMed

    Norris, Joseph M; Kishikova, Lyudmila; Avadhanam, Venkata S; Koumellis, Panos; Francis, Ian S; Liu, Christopher S C

    2015-08-01

    To investigate the efficacy of 640-slice multidetector computed tomography (MDCT) for detecting osteo-odonto laminar resorption in the osteo-odonto-keratoprosthesis (OOKP) compared with the current standard 32-slice MDCT. Explanted OOKP laminae and bone-dentine fragments were scanned using 640-slice MDCT (Aquilion ONE; Toshiba) and 32-slice MDCT (LightSpeed Pro32; GE Healthcare). Pertinent comparisons including image quality, radiation dose, and scanning parameters were made. Benefits of 640-slice MDCT over 32-slice MDCT were shown. Key comparisons of 640-slice MDCT versus 32-slice MDCT included the following: percentage difference and correlation coefficient between radiological and anatomical measurements, 1.35% versus 3.67% and 0.9961 versus 0.9882, respectively; dose-length product, 63.50 versus 70.26; rotation time, 0.175 seconds versus 1.000 seconds; and detector coverage width, 16 cm versus 2 cm. Resorption of the osteo-odonto lamina after OOKP surgery can result in potentially sight-threatening complications, hence it warrants regular monitoring and timely intervention. MDCT remains the gold standard for radiological assessment of laminar resorption, which facilitates detection of subtle laminar changes earlier than the onset of clinical signs, thus indicating when preemptive measures can be taken. The 640-slice MDCT exhibits several advantages over traditional 32-slice MDCT. However, such benefits may not offset cost implications, except in rare cases, such as in young patients who might undergo years of radiation exposure.

  6. First in vivo head-to-head comparison of high-definition versus standard-definition stent imaging with 64-slice computed tomography.

    PubMed

    Fuchs, Tobias A; Stehli, Julia; Fiechter, Michael; Dougoud, Svetlana; Sah, Bert-Ram; Gebhard, Cathérine; Bull, Sacha; Gaemperli, Oliver; Kaufmann, Philipp A

    2013-08-01

    The aim of this study was to compare image quality characteristics of 64-slice high-definition CT (HDCT) versus 64-slice standard-definition CT (SDCT) for coronary stent imaging. In twenty-five stents of 14 patients undergoing contrast-enhanced coronary CT angiography (CCTA) on both 64-slice SDCT (LightSpeed VCT, GE Healthcare) and HDCT (Discovery HD750, GE Healthcare), radiation dose, contrast, noise and stent characteristics were assessed. Two blinded observers graded stent image quality (score 1 = no, 2 = mild, 3 = moderate, and 4 = severe artefacts). All scans were reconstructed with increasing contributions of adaptive statistical iterative reconstruction (ASIR) blending (0, 20, 40, 60, 80 and 100 %). Image quality was significantly superior in HDCT versus SDCT (score 1.7 ± 0.5 vs. 2.7 ± 0.7; p < 0.05). Image noise was significantly higher in HDCT compared to SDCT irrespective of ASIR contributions (p < 0.05). Addition of 40 % ASIR or more reduced image noise significantly in both HDCT and SDCT. In HDCT, in-stent luminal attenuation was significantly lower and the mean measured in-stent luminal diameter was significantly larger (1.2 ± 0.4 mm vs. 0.8 ± 0.4 mm; p < 0.05) compared to SDCT. Radiation dose from HDCT was comparable to SDCT (1.8 ± 0.7 mSv vs. 1.7 ± 0.7 mSv; p = ns). Use of HDCT for coronary stent imaging reduces partial volume artefacts from stents, yielding improved image quality versus SDCT at a comparable radiation dose.

  7. Induction of Social Behavior in Zebrafish: Live Versus Computer Animated Fish as Stimuli

    PubMed Central

    Qin, Meiying; Wong, Albert; Seguin, Diane

    2014-01-01

    The zebrafish offers an excellent compromise between system complexity and practical simplicity and has been suggested as a translational research tool for the analysis of human brain disorders associated with abnormalities of social behavior. Unlike laboratory rodents, zebrafish are diurnal; thus visual cues may be easily utilized in the analysis of their behavior and brain function. Visual cues, including the sight of conspecifics, have been employed to induce social behavior in zebrafish. However, the method of presentation of these cues and the question of whether computer animated images versus live stimulus fish have differential effects have not been systematically analyzed. Here, we compare the effects of five stimulus presentation types: live conspecifics in the experimental tank or outside the tank, playback of video-recorded live conspecifics, and computer animated images of conspecifics presented by two software applications, the previously employed General Fish Animator and a new application, Zebrafish Presenter. We report that all stimuli were equally effective and induced a robust social response (shoaling) manifesting as reduced distance between stimulus and experimental fish. We conclude that presentation of live stimulus fish, or 3D images, is not required and 2D computer animated images are sufficient to induce robust and consistent social behavioral responses in zebrafish. PMID:24575942

  8. Induction of social behavior in zebrafish: live versus computer animated fish as stimuli.

    PubMed

    Qin, Meiying; Wong, Albert; Seguin, Diane; Gerlai, Robert

    2014-06-01

    The zebrafish offers an excellent compromise between system complexity and practical simplicity and has been suggested as a translational research tool for the analysis of human brain disorders associated with abnormalities of social behavior. Unlike laboratory rodents, zebrafish are diurnal; thus visual cues may be easily utilized in the analysis of their behavior and brain function. Visual cues, including the sight of conspecifics, have been employed to induce social behavior in zebrafish. However, the method of presentation of these cues and the question of whether computer animated images versus live stimulus fish have differential effects have not been systematically analyzed. Here, we compare the effects of five stimulus presentation types: live conspecifics in the experimental tank or outside the tank, playback of video-recorded live conspecifics, and computer animated images of conspecifics presented by two software applications, the previously employed General Fish Animator and a new application, Zebrafish Presenter. We report that all stimuli were equally effective and induced a robust social response (shoaling) manifesting as reduced distance between stimulus and experimental fish. We conclude that presentation of live stimulus fish, or 3D images, is not required and 2D computer animated images are sufficient to induce robust and consistent social behavioral responses in zebrafish.

  9. Shortcomings of low-cost imaging systems for viewing computed radiographs.

    PubMed

    Ricke, J; Hänninen, E L; Zielinski, C; Amthauer, H; Stroszczynski, C; Liebig, T; Wolf, M; Hosten, N

    2000-01-01

    To assess potential advantages of a new PC-based viewing tool featuring image post-processing for viewing computed radiographs on low-cost hardware (a PC with a common display card and color monitor), and to evaluate the effect of using color versus monochrome monitors. Computed radiographs of a statistical phantom were viewed on a PC, with and without post-processing (spatial frequency and contrast processing), employing a monochrome or a color monitor. Findings were compared with viewing on a radiological workstation and evaluated with ROC analysis. Image post-processing improved the perception of low-contrast details significantly irrespective of the monitor used. No significant difference in perception was observed between monochrome and color monitors. Review at the radiological workstation was superior to review on the PC with image processing. Lower-quality hardware (graphics card and monitor) used in low-cost PCs negatively affects perception of low-contrast details in computed radiographs. In this situation, the use of spatial frequency and contrast processing is highly recommended. No significant quality gain was observed for the high-end monochrome monitor compared to the color display. However, the color monitor was more strongly affected by high ambient illumination.

  10. Parallel Computing for the Computed-Tomography Imaging Spectrometer

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon

    2008-01-01

    This software computes the tomographic reconstruction of spatial-spectral data from raw detector images of the Computed-Tomography Imaging Spectrometer (CTIS), which enables transient-level, multi-spectral imaging by capturing spatial and spectral information in a single snapshot.

  11. Medical image computing for computer-supported diagnostics and therapy. Advances and perspectives.

    PubMed

    Handels, H; Ehrhardt, J

    2009-01-01

    Medical image computing has become one of the most challenging fields in medical informatics. In image-based diagnostics of the future, software assistance will become more and more important, and image analysis systems integrating advanced image computing methods are needed to extract quantitative image parameters to characterize the state and changes of image structures of interest (e.g. tumors, organs, vessels, bones etc.) in a reproducible and objective way. Furthermore, in the field of software-assisted and navigated surgery, medical image computing methods play a key role and have opened up new perspectives for patient treatment. However, further developments are needed to increase the degree of automation, accuracy, reproducibility and robustness. Moreover, the systems developed have to be integrated into the clinical workflow. For the development of advanced image computing systems, methods of different scientific fields have to be adapted and used in combination. The principal methodologies in medical image computing are the following: image segmentation, image registration, image analysis for quantification and computer-assisted image interpretation, modeling and simulation, as well as visualization and virtual reality. In particular, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients and will gain importance in future diagnostics and therapy. From a methodological point of view, the authors identify the following future trends and perspectives in medical image computing: development of optimized application-specific systems and integration into the clinical workflow, enhanced computational models for image analysis and virtual reality training systems, integration of different image computing methods, further integration of multimodal image data and biosignals, and advanced methods for 4D medical image computing. The development of image analysis systems for diagnostic support or

  12. Computational imaging of sperm locomotion.

    PubMed

    Daloglu, Mustafa Ugur; Ozcan, Aydogan

    2017-08-01

    Essential not only for scientific research but also for the analysis of male fertility and for animal husbandry, sperm tracking and characterization techniques have benefited greatly from computational imaging. Digital image sensors, in combination with optical microscopy tools and powerful computers, have enabled the use of advanced detection and tracking algorithms that automatically map sperm trajectories and calculate various motility parameters across large data sets. Computational techniques are driving the field even further, facilitating the development of unconventional sperm imaging and tracking methods that do not rely on standard optical microscopes and objective lenses, which limit the field of view and the volume of the semen sample that can be imaged. As an example, a holographic on-chip sperm imaging platform, composed only of a light-emitting diode and an opto-electronic image sensor, has emerged as a high-throughput, low-cost and portable alternative to lens-based traditional sperm imaging and tracking methods. In this approach, the sample is placed very close to the image sensor chip, which captures lensfree holograms generated by the interference of the background illumination with the light scattered from sperm cells. These holographic patterns are then digitally processed to extract both the amplitude and phase information of the spermatozoa, effectively replacing the microscope objective lens with computation. This platform has further enabled high-throughput 3D imaging of spermatozoa with submicron 3D positioning accuracy in large sample volumes, revealing various rare locomotion patterns. We believe that computational chip-scale sperm imaging and 3D tracking techniques will find numerous opportunities in both sperm-related research and commercial applications. © The Authors 2017. Published by Oxford University Press on behalf of Society for the Study of Reproduction. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  13. Clinical Evaluation of Spatial Accuracy of a Fusion Imaging Technique Combining Previously Acquired Computed Tomography and Real-Time Ultrasound for Imaging of Liver Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hakime, Antoine, E-mail: thakime@yahoo.com; Deschamps, Frederic; Garcia Marques de Carvalho, Enio

    2011-04-15

    Purpose: This study was designed to evaluate the spatial accuracy of matching volumetric computed tomography (CT) data of hepatic metastases with real-time ultrasound (US) using a fusion imaging system (VNav) under different clinical settings. Methods: Twenty-four patients with one hepatic tumor identified on enhanced CT and US were prospectively enrolled. A set of three landmark markers was chosen on CT and US for image registration. US and CT images were then superimposed using the fusion imaging display mode. The difference in spatial location between the tumor visible on CT and on US in the overlay images was measured along the lateral, anterior-posterior, and vertical axes. The maximum difference (Dmax) was evaluated for different predictive factors: CT performed 1-30 days before registration versus immediately before; use of general anesthesia for CT and US versus no anesthesia; and anatomic landmarks versus landmarks that include at least one nonanatomic structure, such as a cyst or a calcification. Results: Overall, Dmax was 11.53 ± 8.38 mm. Dmax was 6.55 ± 7.31 mm with CT performed immediately before VNav versus 17.4 ± 5.18 mm with CT performed 1-30 days before (p < 0.0001). Dmax was 7.05 ± 6.95 mm under general anesthesia and 16.81 ± 6.77 mm without anesthesia (p < 0.0015). Landmarks including at least one nonanatomic structure increased Dmax by 5.2 mm (p < 0.0001). The lowest Dmax (1.9 ± 1.4 mm) was obtained when CT and VNav were performed under general anesthesia, one immediately after the other. Conclusions: VNav is accurate when an adequate clinical setup is carefully selected. Only under these conditions can liver tumors not identified on US be accurately targeted for biopsy or radiofrequency ablation using fusion imaging.
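
    A minimal sketch of the mismatch measurement described above: given the position of the same tumor on CT and on the fused US overlay, the per-axis absolute differences and their maximum (Dmax) follow directly. The coordinates below are hypothetical.

      import numpy as np

      def dmax(ct_tumor_center, us_tumor_center):
          # Per-axis mismatch (lateral, anterior-posterior, vertical, in mm)
          # between the tumor position on CT and on the fused US overlay,
          # together with the maximum difference Dmax.
          diff = np.abs(np.asarray(ct_tumor_center) - np.asarray(us_tumor_center))
          return diff, diff.max()

      per_axis, d_max = dmax([102.0, 54.0, -310.0], [109.5, 51.0, -305.0])
      print(per_axis, d_max)  # [7.5 3.  5. ] 7.5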

  14. Image quality in low-dose coronary computed tomography angiography with a new high-definition CT scanner.

    PubMed

    Kazakauskaite, Egle; Husmann, Lars; Stehli, Julia; Fuchs, Tobias; Fiechter, Michael; Klaeser, Bernd; Ghadri, Jelena R; Gebhard, Catherine; Gaemperli, Oliver; Kaufmann, Philipp A

    2013-02-01

    A new generation of high-definition computed tomography (HDCT) 64-slice devices, complemented by a new iterative image reconstruction algorithm (adaptive statistical iterative reconstruction), offers substantially higher resolution compared to standard-definition CT (SDCT) scanners. Because higher resolution confers higher noise, we compared image quality and radiation dose of coronary computed tomography angiography (CCTA) from HDCT versus SDCT. Consecutive patients (n = 93) underwent HDCT and were compared to 93 patients who had previously undergone CCTA with SDCT matched for heart rate (HR), HR variability and body mass index (BMI). Tube voltage and current were adapted to the patient's BMI, using identical protocols in both groups. The image quality of all CCTA scans was evaluated by two independent readers in all coronary segments using a 4-point scale (1, excellent image quality; 2, blurring of the vessel wall; 3, image with artefacts but evaluative; 4, non-evaluative). Effective radiation dose was calculated from the DLP multiplied by a conversion factor (0.014 mSv/mGy × cm). The mean image quality score from HDCT versus SDCT was comparable (2.02 ± 0.68 vs. 2.00 ± 0.76). Mean effective radiation dose did not significantly differ between HDCT (1.7 ± 0.6 mSv, range 1.0-3.7 mSv) and SDCT (1.9 ± 0.8 mSv, range 0.8-5.5 mSv; P = n.s.). HDCT scanners allow low-dose 64-slice CCTA scanning with higher resolution than SDCT while maintaining image quality at an equally low radiation dose. Whether this will translate into higher accuracy of HDCT for CAD detection remains to be evaluated.
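
    The dose estimate quoted above is a single multiplication; the sketch below applies the stated conversion factor to a hypothetical dose-length product.

      def effective_dose_msv(dlp_mgy_cm, k=0.014):
          # Effective dose estimate: dose-length product (mGy*cm) times the
          # chest conversion factor k = 0.014 mSv/(mGy*cm) quoted in the abstract.
          return dlp_mgy_cm * k

      print(effective_dose_msv(121.0))  # about 1.7 mSv, in the range reported for HDCT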

  15. Architectures for single-chip image computing

    NASA Astrophysics Data System (ADS)

    Gove, Robert J.

    1992-04-01

    This paper will focus on the architectures of VLSI programmable processing components for image computing applications. TI, the maker of industry-leading RISC, DSP, and graphics components, has developed an architecture for a new-generation of image processors capable of implementing a plurality of image, graphics, video, and audio computing functions. We will show that the use of a single-chip heterogeneous MIMD parallel architecture best suits this class of processors--those which will dominate the desktop multimedia, document imaging, computer graphics, and visualization systems of this decade.

  16. Human Expertise Helps Computer Classify Images

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.

    1991-01-01

    Two-domain method of computational classification of images requires less computation than other methods for computational recognition, matching, or classification of images or patterns. Does not require explicit computational matching of features, and incorporates human expertise without requiring translation of mental processes of classification into language comprehensible to computer. Conceived to "train" computer to analyze photomicrographs of microscope-slide specimens of leucocytes from human peripheral blood to distinguish between specimens from healthy and specimens from traumatized patients.

  17. A Cognitive Semiotic Study of Students' Reading a Textless Image versus a Verbal Image

    ERIC Educational Resources Information Center

    Ali, Roaa Hasan; Aslaadi, Shatha

    2016-01-01

    This study explores fourth year college students' content retrieval from reading textless versus verbal images. Furthermore, it examines the extent to which the respondents comprehend and understand them. The procedures include selecting an image from the internet, designing a written test with its rubrics and exposing it to jury members to…

  18. Effect of object location on the density measurement in cone-beam computed tomography versus multislice computed tomography

    PubMed Central

    Eskandarloo, Amir; Abdinian, Mehrdad; Salemi, Fatemeh; Hashemzadeh, Zahra; Safaei, Mehran

    2012-01-01

    Background: Bone density measurement on a radiographic view is a valuable method for evaluating bone quality before performing some dental procedures, such as dental implant placement. It seems that cone-beam computed tomography (CBCT) can be used as a diagnostic tool for evaluating bone density prior to any treatment, as the reported radiation dose of this method is minimal. The aim of this study is to investigate the effect of object location on density measurement in CBCT versus multislice computed tomography (CT). Materials and Methods: In an experimental study, three samples with similar dimensions but different compositions and densities (polyethylene, polyamide, polyvinyl chloride) and three bone pieces from different parts of the mandibular bone were imaged in three different positions by CBCT and multislice CT units. The average density value was computed for each sample in each position. The data obtained from each CBCT scan were then converted to Hounsfield units and evaluated using a single-variable t-test. A P value <0.05 was considered significant. Results: Density in multislice CT is stable in the form of a Hounsfield number, but density is variable in images acquired with CBCT, and a change in position results in significant changes in the measured density. In this study, a statistically significant difference (P < 0.001) was observed for the effect of sample position on its measured density in CBCT in comparison to multislice CT. Conclusions: Density values in CBCT do not represent true attenuation values because they are affected by the position of the object in the machine. PMID:23814567
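
    The abstract states that CBCT gray values were converted to Hounsfield units but does not specify the conversion. One common approach is a linear two-point rescaling anchored on materials of known attenuation (air at -1000 HU, water at 0 HU); the sketch below assumes that approach, and the calibration values are hypothetical.

      def gray_to_hu(gray, gray_air, gray_water):
          # Linear two-point rescaling of CBCT gray values to a Hounsfield-like
          # scale, anchored on air (-1000 HU) and water (0 HU). This is one common
          # calibration; the abstract does not state which conversion was used.
          slope = 1000.0 / (gray_water - gray_air)
          return slope * (gray - gray_water)

      # Hypothetical calibration values measured from phantom scans.
      print(gray_to_hu(620.0, gray_air=-950.0, gray_water=30.0))  # about 602 HU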

  19. Computational Ghost Imaging for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Erkmen, Baris I.

    2012-01-01

    This work relates to the generic problem of remote active imaging; that is, a source illuminates a target of interest and a receiver collects the scattered light off the target to obtain an image. Conventional imaging systems consist of an imaging lens and a high-resolution detector array [e.g., a CCD (charge coupled device) array] to register the image. However, conventional imaging systems for remote sensing require high-quality optics and need to support large detector arrays and associated electronics. This results in suboptimal size, weight, and power consumption. Computational ghost imaging (CGI) is a computational alternative to this traditional imaging concept that has a very simple receiver structure. In CGI, the transmitter illuminates the target with a modulated light source. A single-pixel (bucket) detector collects the scattered light. Then, via computation (i.e., postprocessing), the receiver can reconstruct the image using the knowledge of the modulation that was projected onto the target by the transmitter. This way, one can construct a very simple receiver that, in principle, requires no lens to image a target. Ghost imaging is a transverse imaging modality that has been receiving much attention owing to a rich interconnection of novel physical characteristics and novel signal processing algorithms suitable for active computational imaging. The original ghost imaging experiments consisted of two correlated optical beams traversing distinct paths and impinging on two spatially-separated photodetectors: one beam interacts with the target and then illuminates on a single-pixel (bucket) detector that provides no spatial resolution, whereas the other beam traverses an independent path and impinges on a high-resolution camera without any interaction with the target. The term ghost imaging was coined soon after the initial experiments were reported, to emphasize the fact that by cross-correlating two photocurrents, one generates an image of the target. In
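
    The reconstruction step described above, correlating bucket-detector measurements with the known projected patterns, can be summarized in a few lines. The sketch below is a generic second-order correlation estimate on a simulated scene, not the receiver design discussed in the report; the target, pattern count and random patterns are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical 32x32 target reflectivity (a bright rectangle on a dark background).
      target = np.zeros((32, 32))
      target[10:22, 12:20] = 1.0

      # The transmitter projects M known random intensity patterns; a single-pixel
      # (bucket) detector records the total scattered light for each pattern.
      M = 4000
      patterns = rng.random((M, 32, 32))
      bucket = np.tensordot(patterns, target, axes=([1, 2], [0, 1]))

      # Second-order correlation reconstruction: correlate the bucket values with
      # the known patterns and subtract the product of their means.
      recon = (np.tensordot(bucket, patterns, axes=(0, 0)) / M
               - bucket.mean() * patterns.mean(axis=0))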

  20. Imaging discordance between hepatic angiography versus Tc-99m-MAA SPECT/CT: a case series, technical discussion and clinical implications.

    PubMed

    Kao, Yung Hsiang; Tan, Eik Hock; Teo, Terence Kiat Beng; Ng, Chee Eng; Goh, Soon Whatt

    2011-11-01

    During pre-therapy evaluation for yttrium-90 (Y-90) radioembolization, it is uncommon to find severe imaging discordance between hepatic angiography versus technetium-99m-macroaggregated albumin (Tc-99m-MAA) single photon emission computed tomography with integrated low-dose CT (SPECT/CT). The reasons for severe imaging discordance are unclear, and literature is scarce. We describe 3 patients with severe imaging discordance, whereby tumor angiographic contrast hypervascularity was markedly mismatched to the corresponding Tc-99m-MAA SPECT/CT, and its clinical impact. The incidence of severe imaging discordance at our institution was 4% (3 of 74 cases). We postulate that imaging discordance could be due to a combination of 3 factors: (1) different injection rates between soluble contrast molecules versus Tc-99m-MAA; (2) different arterial flow hemodynamics between soluble contrast molecules versus Tc-99m-MAA; (3) eccentric release position of Tc-99m-MAA due to microcatheter tip location, inadvertently selecting non-target microparticle trajectories. Tc-99m-MAA SPECT/CT more accurately represents hepatic microparticle biodistribution than soluble contrast hepatic angiography and should be a key criterion in patient selection for Y-90 radioembolization. Tc-99m-MAA SPECT/CT provides more information than planar scintigraphy to guide radiation planning and clinical decision making. Severe imaging discordance at pre-therapy evaluation is ominous and should be followed up by changes to the final vascular approach during Y-90 radioembolization.

  1. Computing Intrinsic Images.

    DTIC Science & Technology

    1986-08-01

    ...large subset of real images), and so most of the algorithms fail when applied to real images. (2) Usually the constraints from the geometry and the physics of the problem are not enough to guarantee uniqueness of the computed parameters. In this case, strong...

  2. Investigation of four-dimensional computed tomography-based pulmonary ventilation imaging in patients with emphysematous lung regions

    NASA Astrophysics Data System (ADS)

    Yamamoto, Tokihiro; Kabus, Sven; Klinder, Tobias; Lorenz, Cristian; von Berg, Jens; Blaffert, Thomas; Loo, Billy W., Jr.; Keall, Paul J.

    2011-04-01

    A pulmonary ventilation imaging technique based on four-dimensional (4D) computed tomography (CT) has advantages over existing techniques. However, physiologically accurate 4D-CT ventilation imaging has not been achieved in patients. The purpose of this study was to evaluate 4D-CT ventilation imaging by correlating ventilation with emphysema. Emphysematous lung regions are less ventilated and can be used as surrogates for low ventilation. We tested the hypothesis: 4D-CT ventilation in emphysematous lung regions is significantly lower than in non-emphysematous regions. Four-dimensional CT ventilation images were created for 12 patients with emphysematous lung regions as observed on CT, using a total of four combinations of two deformable image registration (DIR) algorithms: surface-based (DIRsur) and volumetric (DIRvol), and two metrics: Hounsfield unit (HU) change (VHU) and Jacobian determinant of deformation (VJac), yielding four ventilation image sets per patient. Emphysematous lung regions were detected by density masking. We tested our hypothesis using the one-tailed t-test. Visually, different DIR algorithms and metrics yielded spatially variant 4D-CT ventilation images. The mean ventilation values in emphysematous lung regions were consistently lower than in non-emphysematous regions for all the combinations of DIR algorithms and metrics. VHU resulted in statistically significant differences for both DIRsur (0.14 ± 0.14 versus 0.29 ± 0.16, p = 0.01) and DIRvol (0.13 ± 0.13 versus 0.27 ± 0.15, p < 0.01). However, VJac resulted in non-significant differences for both DIRsur (0.15 ± 0.07 versus 0.17 ± 0.08, p = 0.20) and DIRvol (0.17 ± 0.08 versus 0.19 ± 0.09, p = 0.30). This study demonstrated the strong correlation between the HU-based 4D-CT ventilation and emphysema, which indicates the potential for HU-based 4D-CT ventilation imaging to achieve high physiologic accuracy. A further study is needed to confirm these results.
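
    The abstract names the two metrics but not their formulas. Commonly used forms in the 4D-CT ventilation literature, given here as a plausible reading of VHU and VJac (the study's exact definitions may differ), are the HU-based specific air-volume change between spatially corresponding exhale (ex) and inhale (in) voxels and the local volume change given by the Jacobian determinant of the DIR deformation, written in LaTeX as:

      V_{\mathrm{HU}} = \frac{1000\,(\mathrm{HU}_{\mathrm{in}} - \mathrm{HU}_{\mathrm{ex}})}{\mathrm{HU}_{\mathrm{ex}}\,(1000 + \mathrm{HU}_{\mathrm{in}})},
      \qquad
      V_{\mathrm{Jac}} = \det\bigl(\nabla\varphi(\mathbf{x})\bigr) - 1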

  3. Advances in medical image computing.

    PubMed

    Tolxdorff, T; Deserno, T M; Handels, H; Meinzer, H-P

    2009-01-01

    Medical image computing has become a key technology in high-tech applications in medicine and a ubiquitous part of modern imaging systems and the related processes of clinical diagnosis and intervention. Over the past years, significant progress has been made in the field, at both the methodological and the application level. Despite this progress, there are still major challenges to meet in order to establish image processing routinely in health care. In this issue, selected contributions of the German Conference on Medical Image Processing (BVM) are assembled to present the latest advances in the field of medical image computing. The winners of scientific awards of the German Conference on Medical Image Processing (BVM) 2008 were invited to submit a manuscript on their latest developments and results for possible publication in Methods of Information in Medicine. Finally, seven excellent papers were selected to describe important aspects of recent advances in the field of medical image processing. The selected papers give an impression of the breadth and heterogeneity of new developments. New methods for improved image segmentation, non-linear image registration and modeling of organs are presented together with applications of image analysis methods in different medical disciplines. Furthermore, state-of-the-art tools and techniques to support the development and evaluation of medical image processing systems in practice are described. The selected articles describe different aspects of the intense development in medical image computing. The image processing methods presented enable new insights into the patient's image data and have the future potential to improve medical diagnostics and patient treatment.

  4. Image analysis and modeling in medical image computing. Recent developments and advances.

    PubMed

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the grade of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body.

  5. Validation of the Rotorcraft Flight Simulation Program (C81) Using Operational Loads Survey Flight Test Data.

    DTIC Science & Technology

    1980-07-01

    implementing a first-order differential equation for the horsepower-available equation, patterned after the model in the hybrid-computer version of C81...aircraft, so XMR(36) through XMR(40) were set to 0.0. The main rotor differential nacelle flat plate drag area was estimated to be due to two components...

  6. Optical image hiding based on computational ghost imaging

    NASA Astrophysics Data System (ADS)

    Wang, Le; Zhao, Shengmei; Cheng, Weiwen; Gong, Longyan; Chen, Hanwu

    2016-05-01

    Image hiding schemes play an important role in the era of big data: they provide copyright protection for digital images. In this paper, we propose a novel image hiding scheme based on computational ghost imaging that offers strong robustness and high security. The watermark is encrypted with the configuration of a computational ghost imaging system, and the random speckle patterns compose the secret key. A least-significant-bit algorithm is adopted to embed the watermark, and both a second-order correlation algorithm and a compressed sensing (CS) algorithm are used to extract it. The experimental and simulation results show that authorized users can recover the watermark with the secret key. The watermark image could not be retrieved when the eavesdropping ratio was less than 45% with the second-order correlation algorithm, or less than 20% with the TVAL3 CS reconstruction algorithm. In addition, the proposed scheme is robust against 'salt and pepper' noise and image cropping degradations.
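
    Of the pipeline described above, the least-significant-bit embedding and extraction steps are the simplest to illustrate. The sketch below shows only that part on random data; it omits the ghost-imaging encryption of the watermark and the correlation or compressed-sensing recovery, and the array sizes and seeds are hypothetical.

      import numpy as np

      def embed_lsb(cover, watermark_bits):
          # Embed a flat array of 0/1 bits into the least significant bits of an
          # 8-bit cover image (only the embedding step named in the abstract).
          stego = cover.copy().ravel()
          n = watermark_bits.size
          stego[:n] = (stego[:n] & 0xFE) | watermark_bits
          return stego.reshape(cover.shape)

      def extract_lsb(stego, n_bits):
          # Recover the first n_bits embedded bits.
          return stego.ravel()[:n_bits] & 1

      cover = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
      bits = np.random.default_rng(2).integers(0, 2, 256, dtype=np.uint8)
      assert np.array_equal(extract_lsb(embed_lsb(cover, bits), 256), bits)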

  7. Emerging Computer Media: On Image Interaction

    NASA Astrophysics Data System (ADS)

    Lippman, Andrew B.

    1982-01-01

    Emerging technologies such as inexpensive, powerful local computing, optical digital videodiscs, and the technologies of human-machine interaction are initiating a revolution in both image storage systems and image interaction systems. This paper will present a review of new approaches to computer media predicated upon three-dimensional position sensing, speech recognition, and high-density image storage. Examples will be shown such as the Spatial Data Management Systems, wherein the free use of place results in intuitively clear retrieval systems and potentials for image association; the Movie-Map, wherein inherently static media generate dynamic views of data; and conferencing work-in-progress wherein joint processing is stressed. Application to medical imaging will be suggested, but the primary emphasis is on the general direction of imaging and reference systems. We are passing the age of mere possibility in computer graphics and image processing and entering the age of ready usability.

  8. Computer simulation of reconstructed image for computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Yasuda, Tomoki; Kitamura, Mitsuru; Watanabe, Masachika; Tsumuta, Masato; Yamaguchi, Takeshi; Yoshikawa, Hiroshi

    2009-02-01

    This report presents the results of computer simulation of images for image-type computer-generated holograms (CGHs), observable under white light, fabricated with an electron beam lithography system. The simulated image is obtained by calculating the wavelength and intensity of diffracted light traveling toward the viewing point from the CGH. Wavelength and intensity of the diffracted light are calculated using an FFT image generated from the interference fringe data. A parallax image of the CGH corresponding to the viewing point can be easily obtained using this simulation method. The simulated image from interference fringe data was compared with the reconstructed image of a real CGH fabricated with an electron beam (EB) lithography system. According to the results, the simulated image closely resembled the reconstructed image of the CGH in shape, parallax, coloring and shade. In addition, in accordance with the shape of the light sources, the simulated images changed in chroma saturation and blur under two kinds of simulation: the several-light-sources method and the smoothing method. Furthermore, as applications of the CGH, a full-color CGH and a CGH with multiple images were simulated. The simulated images of those CGHs closely resembled the reconstructed images of the real CGHs.

  9. Computer Human Interaction for Image Information Systems.

    ERIC Educational Resources Information Center

    Beard, David Volk

    1991-01-01

    Presents an approach to developing viable image computer-human interactions (CHI) involving user metaphors for comprehending image data and methods for locating, accessing, and displaying computer images. A medical-image radiology workstation application is used as an example, and feedback and evaluation methods are discussed. (41 references) (LRW)

  10. Tuning the electronic and the crystalline structure of LaBi by pressure: From extreme magnetoresistance to superconductivity

    DOE PAGES

    Tafti, F. F.; Torikachvili, M. S.; Stillwell, R. L.; ...

    2017-01-10

    Here, extreme magnetoresistance (XMR) in topological semimetals is a recent discovery which attracts attention due to its robust appearance in a growing number of materials. To search for a relation between XMR and superconductivity, we study the effect of pressure on LaBi. By increasing pressure, we observe the disappearance of XMR followed by the appearance of superconductivity at P ≈ 3.5 GPa. We find a region of coexistence between superconductivity and XMR in LaBi, in contrast to other superconducting XMR materials. The suppression of XMR is correlated with increasing zero-field resistance instead of decreasing in-field resistance. At higher pressures, P ≈ 11 GPa, we find a structural transition from the face-centered cubic lattice to a primitive tetragonal lattice, in agreement with theoretical predictions. The relationship between extreme magnetoresistance, superconductivity, and structural transition in LaBi is discussed.

  11. Diagnostic Imaging and Newer Modalities for Thoracic Diseases: PET/Computed Tomographic Imaging and Endobronchial Ultrasound for Staging and Its Implication for Lung Cancer.

    PubMed

    Counts, Sarah J; Kim, Anthony W

    2017-08-01

    Modalities to detect and characterize lung cancer are generally divided into those that are invasive [endobronchial ultrasound (EBUS), esophageal ultrasound (EUS), and electromagnetic navigational bronchoscopy (ENMB)] versus noninvasive [chest radiography (CXR), computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI)]. This chapter describes these modalities, the literature supporting their use, and delineates what tests to use to best evaluate the patient with lung cancer. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. High-field open versus short-bore magnetic resonance imaging of the spine: a randomized controlled comparison of image quality.

    PubMed

    Enders, Judith; Rief, Matthias; Zimmermann, Elke; Asbach, Patrick; Diederichs, Gerd; Wetz, Christoph; Siebert, Eberhard; Wagner, Moritz; Hamm, Bernd; Dewey, Marc

    2013-01-01

    The purpose of the present study was to compare the image quality of spinal magnetic resonance (MR) imaging performed on a high-field horizontal open versus a short-bore MR scanner in a randomized controlled study setup. Altogether, 93 (80% women, mean age 53) consecutive patients underwent spine imaging after random assignment to a 1-T horizontal open MR scanner with a vertical magnetic field or a 1.5-T short-bore MR scanner. This patient subset was part of a larger cohort. Image quality was assessed by determining qualitative parameters, signal-to-noise (SNR) and contrast-to-noise ratios (CNR), and quantitative contour sharpness. The image quality parameters were higher for short-bore MR imaging. Regarding all sequences, the relative differences were 39% for the mean overall qualitative image quality, 53% for the mean SNR values, and 34-37% for the quantitative contour sharpness (P<0.0001). The CNR values were also higher for images obtained with the short-bore MR scanner. No sequence was of very poor (nondiagnostic) image quality. Scanning times were significantly longer for examinations performed on the open MR scanner (mean: 32±22 min versus 20±9 min; P<0.0001). In this randomized controlled comparison of spinal MR imaging with an open versus a short-bore scanner, short-bore MR imaging revealed considerably higher image quality with shorter scanning times. ClinicalTrials.gov NCT00715806.

  13. High-Field Open versus Short-Bore Magnetic Resonance Imaging of the Spine: A Randomized Controlled Comparison of Image Quality

    PubMed Central

    Zimmermann, Elke; Asbach, Patrick; Diederichs, Gerd; Wetz, Christoph; Siebert, Eberhard; Wagner, Moritz; Hamm, Bernd; Dewey, Marc

    2013-01-01

    Background The purpose of the present study was to compare the image quality of spinal magnetic resonance (MR) imaging performed on a high-field horizontal open versus a short-bore MR scanner in a randomized controlled study setup. Methods Altogether, 93 (80% women, mean age 53) consecutive patients underwent spine imaging after random assignment to a 1-T horizontal open MR scanner with a vertical magnetic field or a 1.5-T short-bore MR scanner. This patient subset was part of a larger cohort. Image quality was assessed by determining qualitative parameters, signal-to-noise (SNR) and contrast-to-noise ratios (CNR), and quantitative contour sharpness. Results The image quality parameters were higher for short-bore MR imaging. Regarding all sequences, the relative differences were 39% for the mean overall qualitative image quality, 53% for the mean SNR values, and 34–37% for the quantitative contour sharpness (P<0.0001). The CNR values were also higher for images obtained with the short-bore MR scanner. No sequence was of very poor (nondiagnostic) image quality. Scanning times were significantly longer for examinations performed on the open MR scanner (mean: 32±22 min versus 20±9 min; P<0.0001). Conclusions In this randomized controlled comparison of spinal MR imaging with an open versus a short-bore scanner, short-bore MR imaging revealed considerably higher image quality with shorter scanning times. Trial Registration ClinicalTrials.gov NCT00715806 PMID:24391767

  14. Computational efficiency improvements for image colorization

    NASA Astrophysics Data System (ADS)

    Yu, Chao; Sharma, Gaurav; Aly, Hussein

    2013-03-01

    We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image and the color image is obtained by propagating color information from the scribbles to surrounding regions, while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower resolution subsampled image is first colorized and this low resolution color image is upsampled to initialize the colorization process for the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
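
    A matrix-free sketch of the kind of iteration described above: chrominance is propagated from scribbled pixels by repeatedly averaging each free pixel over its neighbours, weighted by greyscale similarity. This is a simplified illustration of the general scribble-propagation formulation, not the authors' optimized implementation; it uses wrap-around borders and a fixed iteration count for brevity.

      import numpy as np

      def colorize_channel(gray, chroma, scribble_mask, sigma=0.05, n_iter=400):
          # gray: greyscale image in [0, 1]; chroma: one chrominance channel with
          # values defined at scribbled pixels; scribble_mask: True where scribbled.
          u = chroma.copy()
          for _ in range(n_iter):
              weighted_sum = np.zeros_like(u)
              weight_total = np.zeros_like(u)
              for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  shifted_u = np.roll(u, (dy, dx), axis=(0, 1))
                  shifted_g = np.roll(gray, (dy, dx), axis=(0, 1))
                  w = np.exp(-((gray - shifted_g) ** 2) / (2 * sigma ** 2))
                  weighted_sum += w * shifted_u
                  weight_total += w
              # Free pixels take the affinity-weighted neighbour average;
              # scribbled pixels keep their user-specified values.
              u = np.where(scribble_mask, chroma, weighted_sum / weight_total)
          return u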

  15. Advances in computer imaging/applications in facial plastic surgery.

    PubMed

    Papel, I D; Jiannetto, D F

    1999-01-01

    Rapidly progressing computer technology, ever-increasing expectations of patients, and a confusing medicolegal environment require a clarification of the role of computer imaging/applications. Advances in computer technology and its applications are reviewed. A brief historical discussion is included for perspective. Improvements in both hardware and software with the advent of digital imaging have allowed great increases in speed and accuracy in patient imaging. This facilitates doctor-patient communication and possibly realistic patient expectations. Patients seeking cosmetic surgery now often expect preoperative imaging. Although society in general has become more litigious, a literature search up to 1998 reveals no lawsuits directly involving computer imaging. It appears that conservative utilization of computer imaging by the facial plastic surgeon may actually reduce liability and promote communication. Recent advances have significantly enhanced the value of computer imaging in the practice of facial plastic surgery. These technological advances in computer imaging appear to provide a useful technique for the practice of facial plastic surgery. Inclusion of computer imaging should be given serious consideration as an adjunct to clinical practice.

  16. Metasurface optics for full-color computational imaging.

    PubMed

    Colburn, Shane; Zhan, Alan; Majumdar, Arka

    2018-02-01

    Conventional imaging systems comprise large and expensive optical components that successively mitigate aberrations. Metasurface optics offers a route to miniaturize imaging systems by replacing bulky components with flat and compact implementations. The diffractive nature of these devices, however, induces severe chromatic aberrations, and current multiwavelength and narrowband achromatic metasurfaces cannot support full visible spectrum imaging (400 to 700 nm). We combine principles of both computational imaging and metasurface optics to build a system with a single metalens of numerical aperture ~0.45, which generates in-focus images under white light illumination. Our metalens exhibits a spectrally invariant point spread function that enables computational reconstruction of captured images with a single digital filter. This work connects computational imaging and metasurface optics and demonstrates the capabilities of combining these disciplines by simultaneously reducing aberrations and downsizing imaging systems using simpler optics.
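
    The "single digital filter" reconstruction mentioned above is, in spirit, a deconvolution with one spectrally invariant kernel. A generic Wiener-filter sketch of that idea follows; the paper's actual filter design is not given in the abstract, and the noise-to-signal parameter is hypothetical.

      import numpy as np

      def wiener_deconvolve(captured, psf, nsr=1e-2):
          # Deconvolve a captured image with a single Wiener filter built from the
          # (spectrally invariant) point spread function. psf is assumed to be the
          # same shape as the image and centered; nsr is a constant noise-to-signal
          # ratio used as regularization.
          H = np.fft.fft2(np.fft.ifftshift(psf))
          G = np.fft.fft2(captured)
          W = np.conj(H) / (np.abs(H) ** 2 + nsr)
          return np.real(np.fft.ifft2(W * G))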

  17. Image-Processing Software For A Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    Concurrent Image Processing Executive (CIPE) is software system intended to develop and use image-processing application programs on concurrent computing environment. Designed to shield programmer from complexities of concurrent-system architecture, it provides interactive image-processing environment for end user. CIPE utilizes architectural characteristics of particular concurrent system to maximize efficiency while preserving architectural independence from user and programmer. CIPE runs on Mark-IIIfp 8-node hypercube computer and associated SUN-4 host computer.

  18. Anticipated Ongoing Interaction versus Channel Effects of Relational Communication in Computer-Mediated Interaction.

    ERIC Educational Resources Information Center

    Walther, Joseph B.

    1994-01-01

    Assesses the related effects of anticipated future interaction and different communication media (computer-mediated versus face-to-face communication) on the communication of relational intimacy and composure. Shows that the assignment of long-term versus short-term partnerships has a larger impact on anticipated future interaction reported by…

  19. Computational scalability of large size image dissemination

    NASA Astrophysics Data System (ADS)

    Kooper, Rob; Bajcsy, Peter

    2011-01-01

    We have investigated the computational scalability of the image pyramid building needed for dissemination of very large image data. The sources of large images include high-resolution microscopes and telescopes, remote sensing and airborne imaging, and high-resolution scanners. The term 'large' is understood from a user perspective: it means either larger than the display size or larger than the memory/disk available to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150 MB, or about 5000x8000 pixels, with the total number of scans around 200,000) and the UIUC library scanning project for historical maps from the 17th and 18th centuries (a smaller number of larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks are obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
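
    As a minimal illustration of the pyramid building benchmarked above, the sketch below halves an image repeatedly with a 2x2 box filter until it reaches a small base size. It is a generic example (tiled viewers such as Seadragon additionally cut each level into tiles); the input size mirrors the scan dimensions quoted in the abstract, and the function is not the authors' benchmark code.

      import numpy as np

      def build_pyramid(image, min_size=256):
          # Build an image pyramid by repeated 2x2 box-filter downsampling.
          levels = [image]
          while min(levels[-1].shape[:2]) > min_size:
              im = levels[-1]
              h, w = (im.shape[0] // 2) * 2, (im.shape[1] // 2) * 2
              im = im[:h, :w]
              down = (im[0::2, 0::2].astype(np.float64) + im[1::2, 0::2]
                      + im[0::2, 1::2] + im[1::2, 1::2]) / 4.0
              levels.append(down.astype(image.dtype))
          return levels

      pyramid = build_pyramid(np.zeros((5000, 8000), dtype=np.uint8))
      print(len(pyramid))  # 6 levels for a 5000x8000 scan like those in the abstract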

  20. Origin of the extremely large magnetoresistance in the semimetal YSb

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J.; Ghimire, N. J.; Jiang, J. S.

    Extremely large magnetoresistance (XMR) was recently discovered in YSb but its origin, along with that of many other XMR materials, is an active subject of debate. Here we demonstrate that YSb, with a cubic crystalline lattice and anisotropic bulk electron Fermi pockets, can be an excellent candidate for revealing the origin of XMR. We carried out angle-dependent Shubnikov-de Haas quantum oscillation measurements to determine the volume and shape of the Fermi pockets. In addition, by investigating both Hall and longitudinal magnetoresistivities, we reveal that the origin of XMR in YSb lies in its high carrier mobility with a diminishing Hall factor that is obtained from the ratio of the Hall and longitudinal magnetoresistivities. The high mobility leads to a strong magnetic field dependence of the longitudinal magnetoconductivity while a diminishing Hall factor reveals the latent XMR hidden in the longitudinal magnetoconductivity whose inverse has a nearly quadratic magnetic-field dependence. The Hall factor highlights the deviation of the measured magnetoresistivity from its full potential value and provides a general formulation to reveal the origin of XMR behavior in high mobility materials and of nonsaturating MR behavior as a whole. Our approach can be readily applied to other XMR materials.

  1. Origin of the extremely large magnetoresistance in the semimetal YSb

    DOE PAGES

    Xu, J.; Ghimire, N. J.; Jiang, J. S.; ...

    2017-08-29

    Extremely large magnetoresistance (XMR) was recently discovered in YSb but its origin, along with that of many other XMR materials, is an active subject of debate. Here we demonstrate that YSb, with a cubic crystalline lattice and anisotropic bulk electron Fermi pockets, can be an excellent candidate for revealing the origin of XMR. We carried out angle-dependent Shubnikov-de Haas quantum oscillation measurements to determine the volume and shape of the Fermi pockets. In addition, by investigating both Hall and longitudinal magnetoresistivities, we reveal that the origin of XMR in YSb lies in its high carrier mobility with a diminishing Hall factor that is obtained from the ratio of the Hall and longitudinal magnetoresistivities. The high mobility leads to a strong magnetic field dependence of the longitudinal magnetoconductivity while a diminishing Hall factor reveals the latent XMR hidden in the longitudinal magnetoconductivity whose inverse has a nearly quadratic magnetic-field dependence. The Hall factor highlights the deviation of the measured magnetoresistivity from its full potential value and provides a general formulation to reveal the origin of XMR behavior in high mobility materials and of nonsaturating MR behavior as a whole. Our approach can be readily applied to other XMR materials.
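
    The argument links the measured resistivities, the Hall factor, and the field dependence of the magnetoconductivity. The relations below are a schematic summary in standard semiclassical notation, consistent with the abstract's reasoning but not equations quoted from the paper:

    ```latex
    % Longitudinal resistivity from inverting the 2x2 conductivity tensor:
    \rho_{xx}(B) \;=\; \frac{\sigma_{xx}}{\sigma_{xx}^{2}+\sigma_{xy}^{2}}
                 \;=\; \frac{1}{\sigma_{xx}}\,\frac{1}{1+\alpha^{2}},
    \qquad
    \alpha(B) \;\equiv\; \frac{\rho_{xy}(B)}{\rho_{xx}(B)} \;=\; \frac{\sigma_{xy}}{\sigma_{xx}}
    \quad (\text{the Hall factor}).

    % High-mobility carriers give an inverse magnetoconductivity that grows nearly
    % quadratically with field, so a diminishing \alpha exposes the latent XMR:
    \frac{1}{\sigma_{xx}(B)} \;\approx\; \frac{1+\mu^{2}B^{2}}{\sigma_{0}}
    \quad\Longrightarrow\quad
    \frac{\rho_{xx}(B)}{\rho_{xx}(0)} \;\approx\; \frac{1+\mu^{2}B^{2}}{1+\alpha(B)^{2}}.
    ```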

  2. A computational image analysis glossary for biologists.

    PubMed

    Roeder, Adrienne H K; Cunha, Alexandre; Burl, Michael C; Meyerowitz, Elliot M

    2012-09-01

    Recent advances in biological imaging have resulted in an explosion in the quality and quantity of images obtained in a digital format. Developmental biologists are increasingly acquiring beautiful and complex images, thus creating vast image datasets. In the past, patterns in image data have been detected by the human eye. Larger datasets, however, necessitate high-throughput objective analysis tools to computationally extract quantitative information from the images. These tools have been developed in collaborations between biologists, computer scientists, mathematicians and physicists. In this Primer we present a glossary of image analysis terms to aid biologists and briefly discuss the importance of robust image analysis in developmental studies.

  3. High-performance computing in image registration

    NASA Astrophysics Data System (ADS)

    Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro

    2012-10-01

    Thanks to recent technological advances, a large variety of image data is at our disposal with variable geometric, radiometric and temporal resolution. In many applications the processing of such images requires high performance computing techniques in order to deliver timely responses, e.g., for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging task of processing large amounts of geo-data. The article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and monitoring of the environment. For the image alignment procedure, sets of corresponding feature points need to be automatically extracted in order to subsequently compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES), which exploit parallel architectures and the GPU, are thus presented. The innovative aspects of the implementation are (i) its effectiveness on a large variety of unorganized and complex datasets, (ii) the capability to work with high-resolution images, and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and commented.
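
    As a concrete illustration of the extraction/matching step described above, the sketch below uses OpenCV's ORB detector and a brute-force matcher on the CPU; it is a generic baseline, not the parallel/GPU LARES implementation, and the feature count is an arbitrary choice.

    ```python
    import cv2

    def match_features(path_a, path_b, max_matches=500):
        """Extract and match local features between two overlapping images.

        A plain CPU baseline of the feature extraction/matching step;
        the GPU/parallel implementation described above is not reproduced here.
        """
        img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=5000)           # detector + descriptor
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
        return kp_a, kp_b, matches[:max_matches]        # correspondences for alignment
    ```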

  4. Computational polarization difference underwater imaging based on image fusion

    NASA Astrophysics Data System (ADS)

    Han, Hongwei; Zhang, Xiaohui; Guan, Feng

    2016-01-01

    Polarization difference imaging (PDI) can improve the quality of images acquired underwater, whether the background and veiling light are unpolarized or partially polarized. Computational polarization difference imaging, which replaces the mechanical rotation of the polarization analyzer and shortens the time spent selecting the optimum orthogonal ∥ and ⊥ axes, is an improvement on conventional PDI. However, it originally obtains the output image by manually setting the weight coefficient to a single constant for all pixels. In this paper, an algorithm is proposed to combine the Q and U parameters of the Stokes vector through pixel-level image fusion based on the non-subsampled contourlet transform. An experimental system, consisting of a green LED array with a polarizer to illuminate a flat target immersed in water and a CCD with a polarization analyzer to capture target images at different analyzer angles, is used to verify the effect of the proposed algorithm. The results show that the output of our algorithm reveals more details of the flat target and has higher contrast compared with the original computational polarization difference imaging.
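
    The Stokes parameters that the fusion step combines can be formed from intensity images taken at four analyzer angles. The sketch below is a minimal illustration that uses a constant-weight combination in place of the paper's NSCT-based pixel-level fusion; the 0/45/90/135-degree angle convention is assumed.

    ```python
    import numpy as np

    def stokes_from_polarizer_images(i0, i45, i90, i135):
        """Compute linear Stokes parameters from images at 0/45/90/135 degrees."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
        q = i0 - i90                         # Stokes Q
        u = i45 - i135                       # Stokes U
        return s0, q, u

    def polarization_difference(q, u, w=0.5):
        """Naive pixel-wise combination of Q and U with a constant weight w;
        the paper replaces this constant with a per-pixel fusion weight."""
        return w * q + (1.0 - w) * u
    ```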

  5. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services.

    PubMed

    Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha

    2016-02-27

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
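
    The local-versus-cloud trade-off mentioned above can be illustrated with a toy break-even model. The sketch below is a hypothetical illustration only, with made-up parameter names and rates; it is not the cost/benefit formulae derived in the paper.

    ```python
    def cloud_vs_local(n_jobs, t_job_hr, local_cores, n_nodes, cores_per_node, node_rate_usd):
        """Illustrative break-even comparison (hypothetical, NOT the paper's formulae).

        Returns (local wall-clock hours, cloud wall-clock hours, cloud cost in USD)
        for n_jobs independent tasks of t_job_hr hours each.
        """
        local_hours = n_jobs * t_job_hr / local_cores
        cloud_hours = n_jobs * t_job_hr / (n_nodes * cores_per_node)
        cloud_cost = cloud_hours * n_nodes * node_rate_usd
        return local_hours, cloud_hours, cloud_cost

    # Example: 500 one-hour jobs on a 4-core workstation vs 20 hypothetical 8-core nodes.
    print(cloud_vs_local(500, 1.0, 4, 20, 8, 0.40))
    ```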

  6. Performance management of high performance computing for medical image processing in Amazon Web Services

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-03-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  7. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services

    PubMed Central

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-01-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline. PMID:27127335

  8. Real-time computer treatment of THz passive device images with the high image quality

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed only for passive THz devices: it can be applied to any such device as well as to active THz imaging systems. We applied our code to process images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that processing images produced by different companies usually requires different spatial filters. The performance of the current version of the code is greater than one image per second for a THz image with more than 5000 pixels and 24-bit number representation. Processing of a single THz image produces about 20 images simultaneously, corresponding to various spatial filters. The code allows the number of pixels in processed images to be increased without noticeable reduction of image quality, and its performance can be increased many times by using parallel algorithms for processing the image. We developed original spatial filters that make it possible to see objects smaller than 2 cm. The imagery is produced by passive THz imaging devices that captured images of objects hidden under opaque clothes. For images with high noise we developed an approach that suppresses the noise after computer processing and yields a good quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of a liquid explosive, an ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and represent a very promising solution to the security problem.

  9. Computers are stepping stones to improved imaging.

    PubMed

    Freiherr, G

    1991-02-01

    Never before has the radiology industry embraced the computer with such enthusiasm. Graphics supercomputers as well as UNIX- and RISC-based computing platforms are turning up in every digital imaging modality and especially in systems designed to enhance and transmit images, says author Greg Freiherr on assignment for Computers in Healthcare at the Radiological Society of North America conference in Chicago.

  10. Image improvement and three-dimensional reconstruction using holographic image processing

    NASA Technical Reports Server (NTRS)

    Stroke, G. W.; Halioua, M.; Thon, F.; Willasch, D. H.

    1977-01-01

    Holographic computing principles make possible image improvement and synthesis in many cases of current scientific and engineering interest. Examples are given for the improvement of resolution in electron microscopy and 3-D reconstruction in electron microscopy and X-ray crystallography, following an analysis of optical versus digital computing in such applications.

  11. Radiography versus computed tomography for displacement assessment in calcaneal fractures.

    PubMed

    Ogawa, Brent K; Charlton, Timothy P; Thordarson, David B

    2009-10-01

    Coronal computed tomography (CT) scans are commonly used in fracture classification systems for calcaneus fractures. However, they may not accurately reflect the amount of fracture displacement. The purpose of this paper was to determine whether lateral radiographs provide superior assessment of the displacement of the posterior facet compared to coronal CT scans. Lateral radiographs of calcaneus fractures were compared with CT coronal images of the posterior facet in 30 displaced intra-articular calcaneus fractures. The average patient age was 39 years old. Using a Picture Archiving and Communication System (PACS), measurements were obtained to quantify the amount of displacement on the lateral radiograph and compared with the amount of depression on corresponding coronal CT scans. On lateral radiographs, the angle of the depressed portion of the posterior facet relative to the undersurface of the calcaneus averaged 28.2 degrees; Bohler's angle averaged 12.7 degrees. These numbers were poorly correlated (r = 0.25). In corresponding CT images from posterior to anterior, the difference in the amount of displacement of the lateral portion of the displaced articular facet versus the nondisplaced medial, constant fragment, was minimal and consistently underestimated the amount of displacement. Underestimation of the amount of depression and rotation of the posterior facet fragment was seen on the coronal CT scan. We attribute this finding to the combined rotation and depression of the posterior facet which may not be measured accurately with the typical semicoronal CT orientation. While sagittal reconstructed images would show this depression better, if they are unavailable we recommend using lateral radiographs to better gauge the amount of fracture displacement.

  12. MoTe2: An uncompensated semimetal with extremely large magnetoresistance

    NASA Astrophysics Data System (ADS)

    Thirupathaiah, S.; Jha, Rajveer; Pal, Banabir; Matias, J. S.; Das, P. Kumar; Sivakumar, P. K.; Vobornik, I.; Plumb, N. C.; Shi, M.; Ribeiro, R. A.; Sarma, D. D.

    2017-06-01

    Transition-metal dichalcogenides (WTe2 and MoTe2) have recently drawn much attention because of the nonsaturating extremely large magnetoresistance (XMR) observed in these compounds, in addition to the predictions of likely type-II Weyl semimetals. Contrary to the topological insulators or Dirac semimetals where XMR is linearly dependent on the field, in WTe2 and MoTe2 the XMR is nonlinearly dependent on the field, suggesting an entirely different mechanism. Electron-hole compensation has been proposed as a mechanism of this nonsaturating XMR in WTe2, while it is yet to be clarified in the case of MoTe2, which has a crystal structure identical to that of WTe2 at low temperatures. In this Rapid Communication, we report the low-energy electronic structure and Fermi surface topology of MoTe2 using the angle-resolved photoemission spectroscopy (ARPES) technique and first-principles calculations, and compare them with those of WTe2 to understand the mechanism of XMR. Our measurements demonstrate that MoTe2 is an uncompensated semimetal, contrary to WTe2 in which compensated electron-hole pockets have been identified, ruling out the applicability of charge compensation theory for the nonsaturating XMR in MoTe2. In this context, we also discuss the applicability of other existing conjectures on the XMR of these compounds.

  13. Ultrasound-guided versus computed tomography-scan guided biopsy of pleural-based lung lesions

    PubMed Central

    Khosla, Rahul; McLean, Anna W; Smith, Jessica A

    2016-01-01

    Background: Computed tomography (CT) guided biopsies have long been the standard technique to obtain tissue from the thoracic cavity and are traditionally performed by interventional radiologists. Ultrasound (US) guided biopsy of pleural-based lesions, performed by pulmonologists, is gaining popularity and has the advantages of multi-planar imaging, real-time guidance, and the absence of radiation exposure to patients. In this study, we aim to determine the diagnostic accuracy, the time to diagnosis after the initial consult placement, and the complication rates of the two modalities. Methods: A retrospective study of electronic medical records was done of patients who underwent CT-guided biopsies and US-guided biopsies for pleural-based lesions between 2005 and 2014, and the data collected were analyzed to compare the two groups. Results: A total of 158 patients underwent 162 procedures during the study period. 86 patients underwent 89 procedures in the US group, and 72 patients underwent 73 procedures in the CT group. The overall yield in the US group was 82/89 (92.1%) versus 67/73 (91.8%) in the CT group (P = 1.0). The average number of days to the procedure was 7.2 versus 17.5 (P = 0.00001) in the US and CT groups, respectively. The complication rate was higher in the CT group, 17/73 (23.3%), versus 1/89 (1.1%) in the US group (P < 0.0001). Conclusions: For pleural-based lesions, the diagnostic accuracy of US-guided biopsy is similar to that of CT-guided biopsy, with a lower complication rate and a significantly reduced time to the procedure. PMID:27625440

  14. Adaptive Statistical Iterative Reconstruction-V Versus Adaptive Statistical Iterative Reconstruction: Impact on Dose Reduction and Image Quality in Body Computed Tomography.

    PubMed

    Gatti, Marco; Marchisio, Filippo; Fronda, Marco; Rampado, Osvaldo; Faletti, Riccardo; Bergamasco, Laura; Ropolo, Roberto; Fonio, Paolo

    The aim of this study was to evaluate the impact on dose reduction and image quality of the new iterative reconstruction technique: adaptive statistical iterative reconstruction-V (ASIR-V). Fifty consecutive oncologic patients acted as their own controls, undergoing during their follow-up computed tomography scans reconstructed with both ASIR and ASIR-V. Each study was analyzed in a double-blinded fashion by 2 radiologists. Both quantitative and qualitative analyses of image quality were conducted. Computed tomography scanner radiation output was 38% (29%-45%) lower (P < 0.0001) for the ASIR-V examinations than for the ASIR ones. The quantitative image noise was significantly lower (P < 0.0001) for ASIR-V. Adaptive statistical iterative reconstruction-V had a higher performance for the subjective image noise (P = 0.01 for 5 mm and P = 0.009 for 1.25 mm), the other parameters (image sharpness, diagnostic acceptability, and overall image quality) being similar (P > 0.05). Adaptive statistical iterative reconstruction-V is a new iterative reconstruction technique that has the potential to provide image quality equal to or greater than ASIR, with a dose reduction of around 40%.

  15. Computational imaging of light in flight

    NASA Astrophysics Data System (ADS)

    Hullin, Matthias B.

    2014-10-01

    Many computer vision tasks are hindered by image formation itself, a process that is governed by the so-called plenoptic integral. By averaging light falling into the lens over space, angle, wavelength and time, a great deal of information is irreversibly lost. The emerging idea of transient imaging operates on a time resolution fast enough to resolve non-stationary light distributions in real-world scenes. It enables the discrimination of light contributions by the optical path length from light source to receiver, a dimension unavailable in mainstream imaging to date. Until recently, such measurements used to require high-end optical equipment and could only be acquired under extremely restricted lab conditions. To address this challenge, we introduced a family of computational imaging techniques operating on standard time-of-flight image sensors, for the first time allowing the user to "film" light in flight in an affordable, practical and portable way. Just as impulse responses have proven a valuable tool in almost every branch of science and engineering, we expect light-in-flight analysis to impact a wide variety of applications in computer vision and beyond.

  16. Legal issues of computer imaging in plastic surgery: a primer.

    PubMed

    Chávez, A E; Dagum, P; Koch, R J; Newman, J P

    1997-11-01

    Although plastic surgeons are increasingly incorporating computer imaging techniques into their practices, many fear the possibility of legally binding themselves to achieve surgical results identical to those reflected in computer images. Computer imaging allows surgeons to manipulate digital photographs of patients to project possible surgical outcomes. Some of the many benefits imaging techniques pose include improving doctor-patient communication, facilitating the education and training of residents, and reducing administrative and storage costs. Despite the many advantages computer imaging systems offer, however, surgeons understandably worry that imaging systems expose them to immense legal liability. The possible exploitation of computer imaging by novice surgeons as a marketing tool, coupled with the lack of consensus regarding the treatment of computer images, adds to the concern of surgeons. A careful analysis of the law, however, reveals that surgeons who use computer imaging carefully and conservatively, and adopt a few simple precautions, substantially reduce their vulnerability to legal claims. In particular, surgeons face possible claims of implied contract, failure to instruct, and malpractice from their use or failure to use computer imaging. Nevertheless, legal and practical obstacles frustrate each of those causes of actions. Moreover, surgeons who incorporate a few simple safeguards into their practice may further reduce their legal susceptibility.

  17. Three-dimensional dynamic thermal imaging of structural flaws by dual-band infrared computed tomography

    NASA Astrophysics Data System (ADS)

    DelGrande, Nancy; Dolan, Kenneth W.; Durbin, Philip F.; Gorvad, Michael R.; Kornblum, B. T.; Perkins, Dwight E.; Schneberk, Daniel J.; Shapiro, Arthur B.

    1993-11-01

    We discuss three-dimensional dynamic thermal imaging of structural flaws using dual-band infrared (DBIR) computed tomography. Conventional (single-band) thermal imaging is difficult to interpret. It yields imprecise or qualitative information (e.g., when subsurface flaws produce weak heat flow anomalies masked by surface clutter). We use the DBIR imaging technique to clarify interpretation. We capture the time history of surface temperature difference patterns at the epoxy-glue disbond site of a flash-heated lap joint. This type of flawed structure played a significant role in causing damage to the fuselage of the aged Aloha Airlines Boeing 737 jetliner. The magnitude of surface-temperature differences versus time for a 0.1 mm air layer compared to a 0.1 mm glue layer varies from 0.2 to 1.6 degrees C for simultaneously scanned front and back surfaces. The scans are taken every 42 ms from 0 to 8 s after the heat flash. By ratioing 3-5 micrometer and 8-12 micrometer DBIR images, we located surface temperature patterns from weak heat flow anomalies at the disbond site and removed the emissivity mask from surface paint or roughness variations. Measurements compare well with calculations based on TOPAX3D, a three-dimensional, finite element computer model. We combine infrared, ultrasound and x-ray imaging methods to study heat transfer, bond quality and material differences associated with the lap joint disbond site.

  18. Observation of open-orbit Fermi surface topology in the extremely large magnetoresistance semimetal MoAs2

    NASA Astrophysics Data System (ADS)

    Lou, R.; Xu, Y. F.; Zhao, L.-X.; Han, Z.-Q.; Guo, P.-J.; Li, M.; Wang, J.-C.; Fu, B.-B.; Liu, Z.-H.; Huang, Y.-B.; Richard, P.; Qian, T.; Liu, K.; Chen, G.-F.; Weng, H. M.; Ding, H.; Wang, S.-C.

    2017-12-01

    While recent advances in band theory and sample growth have expanded the series of extremely large magnetoresistance (XMR) semimetals in transition-metal dipnictides TmPn2 (Tm = Ta, Nb; Pn = P, As, Sb), experimental study of their electronic structure and the origin of XMR is still absent. Here, using angle-resolved photoemission spectroscopy combined with first-principles calculations and magnetotransport measurements, we performed a comprehensive investigation of MoAs2, which is isostructural to the TmPn2 family and also exhibits quadratic XMR. We resolve a clear band structure agreeing well with the predictions. Intriguingly, the unambiguously observed Fermi surfaces (FSs) are dominated by an open-orbit topology extending along both the [100] and [001] directions in the three-dimensional Brillouin zone. We further reveal the trivial topological nature of MoAs2 by bulk parity analysis. Based on these results, we examine the XMR mechanisms proposed for other semimetals, and conclusively ascribe the origin of quadratic XMR in MoAs2 to the motion of carriers on FSs with dominant open-orbit topology, advancing the understanding of quadratic XMR in semimetals.

  19. Computer-assisted versus non-computer-assisted preoperative planning of corrective osteotomy for extra-articular distal radius malunions: a randomized controlled trial.

    PubMed

    Leong, Natalie L; Buijze, Geert A; Fu, Eric C; Stockmans, Filip; Jupiter, Jesse B

    2010-12-14

    Malunion is the most common complication of distal radius fracture. It has previously been demonstrated that there is a correlation between the quality of anatomical correction and overall wrist function. However, surgical correction can be difficult because of the often complex anatomy associated with this condition. Computer assisted surgical planning, combined with patient-specific surgical guides, has the potential to improve pre-operative understanding of patient anatomy as well as intra-operative accuracy. For patients with malunion of the distal radius fracture, this technology could significantly improve clinical outcomes that largely depend on the quality of restoration of normal anatomy. Therefore, the objective of this study is to compare patient outcomes after corrective osteotomy for distal radius malunion with and without preoperative computer-assisted planning and peri-operative patient-specific surgical guides. This study is a multi-center randomized controlled trial of conventional planning versus computer-assisted planning for surgical correction of distal radius malunion. Adult patients with extra-articular malunion of the distal radius will be invited to enroll in our study. After providing informed consent, subjects will be randomized to two groups: one group will receive corrective surgery with conventional preoperative planning, while the other will receive corrective surgery with computer-assisted pre-operative planning and peri-operative patient specific surgical guides. In the computer-assisted planning group, a CT scan of the affected forearm as well as the normal, contralateral forearm will be obtained. The images will be used to construct a 3D anatomical model of the defect and patient-specific surgical guides will be manufactured. Outcome will be measured by DASH and PRWE scores, grip strength, radiographic measurements, and patient satisfaction at 3, 6, and 12 months postoperatively. Computer-assisted surgical planning, combined with

  20. Computation of mass-density images from x-ray refraction-angle images.

    PubMed

    Wernick, Miles N; Yang, Yongyi; Mondal, Indrasis; Chapman, Dean; Hasnah, Moumen; Parham, Christopher; Pisano, Etta; Zhong, Zhong

    2006-04-07

    In this paper, we investigate the possibility of computing quantitatively accurate images of mass density variations in soft tissue. This is a challenging task, because density variations in soft tissue, such as the breast, can be very subtle. Beginning from an image of refraction angle created by either diffraction-enhanced imaging (DEI) or multiple-image radiography (MIR), we estimate the mass-density image using a constrained least squares (CLS) method. The CLS algorithm yields accurate density estimates while effectively suppressing noise. Our method improves on an analytical method proposed by Hasnah et al (2005 Med. Phys. 32 549-52), which can produce significant artefacts when even a modest level of noise is present. We present a quantitative evaluation study to determine the accuracy with which mass density can be determined in the presence of noise. Based on computer simulations, we find that the mass-density estimation error can be as low as a few per cent for typical density variations found in the breast. Example images computed from less-noisy real data are also shown to illustrate the feasibility of the technique. We anticipate that density imaging may have application in assessment of water content of cartilage resulting from osteoarthritis, in evaluation of bone density, and in mammographic interpretation.
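
    A constrained least-squares estimate of this kind can be written as a regularized normal-equation solve. The sketch below is a generic Tikhonov-style illustration with an assumed forward operator `A` and roughness penalty `C`; it is not the exact CLS formulation used by the authors.

    ```python
    import numpy as np

    def cls_estimate(A, b, C, lam=1e-2):
        """Constrained least squares: minimize ||A x - b||^2 + lam * ||C x||^2.

        A : assumed forward model mapping mass density to refraction-angle data.
        b : measured refraction-angle image, flattened into a vector.
        C : penalty operator (e.g., a finite-difference roughness matrix).
        """
        lhs = A.T @ A + lam * (C.T @ C)   # regularized normal equations
        rhs = A.T @ b
        return np.linalg.solve(lhs, rhs)
    ```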

  1. Dual-Energy Computed Tomography in Cardiothoracic Vascular Imaging.

    PubMed

    De Santis, Domenico; Eid, Marwen; De Cecco, Carlo N; Jacobs, Brian E; Albrecht, Moritz H; Varga-Szemes, Akos; Tesche, Christian; Caruso, Damiano; Laghi, Andrea; Schoepf, Uwe Joseph

    2018-07-01

    Dual energy computed tomography is becoming increasingly widespread in clinical practice. It can expand on the traditional density-based data achievable with single energy computed tomography by adding novel applications to help reach a more accurate diagnosis. The implementation of this technology in cardiothoracic vascular imaging allows for improved image contrast, metal artifact reduction, generation of virtual unenhanced images, virtual calcium subtraction techniques, cardiac and pulmonary perfusion evaluation, and plaque characterization. The improved diagnostic performance afforded by dual energy computed tomography is not associated with an increased radiation dose. This review provides an overview of dual energy computed tomography cardiothoracic vascular applications. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Optical image encryption via high-quality computational ghost imaging using iterative phase retrieval

    NASA Astrophysics Data System (ADS)

    Liansheng, Sui; Yin, Cheng; Bing, Li; Ailing, Tian; Krishna Asundi, Anand

    2018-07-01

    A novel computational ghost imaging scheme based on specially designed phase-only masks, which can be efficiently applied to encrypt an original image into a series of measured intensities, is proposed in this paper. First, a Hadamard matrix with a certain order is generated, where the number of elements in each row is equal to the size of the original image to be encrypted. Each row of the matrix is rearranged into the corresponding 2D pattern. Then, each pattern is encoded into the phase-only masks by making use of an iterative phase retrieval algorithm. These specially designed masks can be wholly or partially used in the process of computational ghost imaging to reconstruct the original information with high quality. When a significantly small number of phase-only masks are used to record the measured intensities in a single-pixel bucket detector, the information can be authenticated without clear visualization by calculating the nonlinear correlation map between the original image and its reconstruction. The results illustrate the feasibility and effectiveness of the proposed computational ghost imaging mechanism, which will provide an effective alternative for enriching the related research on the computational ghost imaging technique.
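
    The Hadamard pattern generation and single-pixel reconstruction underlying the scheme can be sketched as follows; the phase-retrieval step that encodes each pattern into a phase-only mask is omitted, and the 32x32 object size is an arbitrary assumption.

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    n = 32                                   # object is n x n pixels (assumed)
    H = hadamard(n * n)                      # each row becomes one illumination pattern
    patterns = H.reshape(n * n, n, n)

    def ghost_image(obj, patterns):
        """Simulate bucket-detector measurements and correlation reconstruction."""
        obj = obj.astype(float)
        # Single-pixel (bucket) intensity for each illumination pattern.
        bucket = np.array([(p * obj).sum() for p in patterns])
        # Differential correlation between bucket values and patterns.
        recon = np.tensordot(bucket - bucket.mean(), patterns, axes=1) / len(patterns)
        return recon
    ```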

  3. Computer measurement of particle sizes in electron microscope images

    NASA Technical Reports Server (NTRS)

    Hall, E. L.; Thompson, W. B.; Varsi, G.; Gauldin, R.

    1976-01-01

    Computer image processing techniques have been applied to particle counting and sizing in electron microscope images. Distributions of particle sizes were computed for several images and compared to manually computed distributions. The results of these experiments indicate that automatic particle counting within a reasonable error and computer processing time is feasible. The significance of the results is that the tedious task of manually counting a large number of particles can be eliminated while still providing the scientist with accurate results.
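
    The counting-and-sizing step can be reproduced today in a few lines with connected-component labeling; the sketch below is a modern, generic illustration (not the 1976 implementation), and it assumes particles appear darker than the background.

    ```python
    import numpy as np
    from scipy import ndimage

    def particle_sizes(image, threshold):
        """Count particles and measure their areas in an electron-microscope image.

        image     : 2-D array of gray levels (particles assumed darker than background).
        threshold : gray level separating particles from background.
        Returns the number of particles and an array of their areas in pixels.
        """
        mask = image < threshold                     # binarize
        labels, n_particles = ndimage.label(mask)    # connected components
        areas = ndimage.sum(mask, labels, index=np.arange(1, n_particles + 1))
        return n_particles, areas
    ```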

  4. Scanning computed confocal imager

    DOEpatents

    George, John S.

    2000-03-14

    There is provided a confocal imager comprising a light source emitting light, with a light modulator in optical communication with the light source for varying the spatial and temporal pattern of the light. A beam splitter receives the scanned light, directs it onto a target, and passes light reflected from the target to a video capturing device, which receives the reflected light and transfers a digital image of it to a computer for creating a virtual aperture and outputting the digital image. In a transmissive mode of operation the invention omits the beam splitter and captures light passed through the target.

  5. An integrated compact airborne multispectral imaging system using embedded computer

    NASA Astrophysics Data System (ADS)

    Zhang, Yuedong; Wang, Li; Zhang, Xuguo

    2015-08-01

    An integrated compact airborne multispectral imaging system using an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The multispectral imaging system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system) and an embedded computer. The embedded computer has excellent universality and expansibility, and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls the camera parameter settings, the filter wheel and the stabilized platform, acquires the image and POS data, and stores the images and data. The system can connect peripheral devices through the ports of the embedded computer, so system operation and stored-image data management are easy. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expansibility. The imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.

  6. A programmable computational image sensor for high-speed vision

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian

    2013-08-01

    In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array and a RISC core. The pixel-parallel PE array is responsible for transferring, storing and processing raw image data in a SIMD fashion with its own programming language. The RPs are a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can finish a great amount of computation in few instruction cycles and therefore satisfy low- and middle-level high-speed image processing requirements. The RISC core controls the whole system operation and executes some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect our major components. A programming language and corresponding tool chain for this computational image sensor are also developed.

  7. A comparison of animated versus static images in an instructional multimedia presentation.

    PubMed

    Daly, C J; Bulloch, J M; Ma, M; Aidulis, D

    2016-06-01

    Sophisticated three-dimensional animation and video compositing software enables the creation of complex multimedia instructional movies. However, if the design of such presentations does not take account of cognitive load and multimedia theories, then their effectiveness as learning aids will be compromised. We investigated the use of animated images versus still images by creating two versions of a 4-min multimedia presentation on vascular neuroeffector transmission. One version comprised narration and animations, whereas the other comprised narration and still images. Fifty-four undergraduate students from level-3 pharmacology and physiology degrees participated. Half of the students watched the full animation, and the other half watched the stills only. Students watched the presentation once and then answered a short essay question. Answers were coded and marked blind. The "animation" group scored 3.7 (SE: 0.4; out of 11), whereas the "stills" group scored 3.2 (SE: 0.5). The difference was not statistically significant. Further analysis of bonus marks, awarded for appropriate terminology use, detected a significant difference in one class (pharmacology), which scored 0.6 (SE: 0.2) versus 0.1 (SE: 0.1) for the animation versus stills groups, respectively (P = 0.04). However, when combined with the physiology group, the significance disappeared. Feedback from students was extremely positive and identified four main themes of interest. In conclusion, while increasing student satisfaction, we do not find strong evidence in favor of animated images over still images in this particular format. We also discuss the study design and offer suggestions for further investigations of this type. Copyright © 2016 The American Physiological Society.

  8. A survey of GPU-based medical image computing techniques

    PubMed Central

    Shi, Lin; Liu, Wen; Zhang, Heye; Xie, Yongming

    2012-01-01

    Medical imaging currently plays a crucial role throughout the entire range of clinical applications, from medical scientific research to diagnostics and treatment planning. However, medical imaging procedures are often computationally demanding due to the large three-dimensional (3D) medical datasets that must be processed in practical clinical applications. With the rapidly improving performance of graphics processors, improved programming support, and an excellent price-to-performance ratio, the graphics processing unit (GPU) has emerged as a competitive parallel computing platform for computationally expensive and demanding tasks in a wide range of medical image applications. The major purpose of this survey is to provide a comprehensive reference source for starters or researchers involved in GPU-based medical image processing. Within this survey, the continuous advancement of GPU computing is reviewed and the existing traditional applications in three areas of medical image processing, namely, segmentation, registration and visualization, are surveyed. The potential advantages and associated challenges of current GPU-based medical imaging are also discussed to inspire future applications in medicine. PMID:23256080

  9. Multispectral computational ghost imaging with multiplexed illumination

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Shi, Dongfeng

    2017-07-01

    Computational ghost imaging has attracted wide attention from researchers in many fields over the last two decades. Multispectral imaging as one application of computational ghost imaging possesses spatial and spectral resolving abilities, and is very useful for surveying scenes and extracting detailed information. Existing multispectral imagers mostly utilize narrow band filters or dispersive optical devices to separate light of different wavelengths, and then use multiple bucket detectors or an array detector to record them separately. Here, we propose a novel multispectral ghost imaging method that uses one single bucket detector with multiplexed illumination to produce a colored image. The multiplexed illumination patterns are produced by three binary encoded matrices (corresponding to the red, green and blue colored information, respectively) and random patterns. The results of the simulation and experiment have verified that our method can be effective in recovering the colored object. Multispectral images are produced simultaneously by one single-pixel detector, which significantly reduces the amount of data acquisition.

  10. System design and implementation of digital-image processing using computational grids

    NASA Astrophysics Data System (ADS)

    Shen, Zhanfeng; Luo, Jiancheng; Zhou, Chenghu; Huang, Guangyu; Ma, Weifeng; Ming, Dongping

    2005-06-01

    As a special type of digital image, remotely sensed images are playing increasingly important roles in our daily lives. Because of the enormous amounts of data involved, and the difficulties of data processing and transfer, an important issue for current computer and geo-science experts is developing internet technology to implement rapid remotely sensed image processing. Computational grids are able to solve this problem effectively. These networks of computer workstations enable the sharing of data and resources, and are used by computer experts to solve imbalances of network resources and lopsided usage. In China, computational grids combined with spatial-information-processing technology have formed a new technology: namely, spatial-information grids. In the field of remotely sensed images, spatial-information grids work more effectively for network computing, data processing, resource sharing, task cooperation and so on. This paper focuses mainly on the application of computational grids to digital-image processing. Firstly, we describe the architecture of digital-image processing on the basis of computational grids; its implementation is then discussed in detail with respect to middleware technology. The whole network-based intelligent image-processing system is evaluated on the basis of the experimental analysis of remotely sensed image-processing tasks; the results confirm the feasibility of the application of computational grids to digital-image processing.

  11. Advantages and disadvantages of computer imaging in cosmetic surgery.

    PubMed

    Koch, R J; Chavez, A; Dagum, P; Newman, J P

    1998-02-01

    Despite the growing popularity of computer imaging systems, it is not clear whether the medical and legal advantages of using such a system outweigh the disadvantages. The purpose of this report is to evaluate these aspects, and provide some protective guidelines in the use of computer imaging in cosmetic surgery. The positive and negative aspects of computer imaging from a medical and legal perspective are reviewed. Also, specific issues are examined by a legal panel. The greatest advantages are potential problem patient exclusion, and enhanced physician-patient communication. Disadvantages include cost, user learning curve, and potential liability. Careful use of computer imaging should actually reduce one's liability when all aspects are considered. Recommendations for such use and specific legal issues are discussed.

  12. Impacts of Digital Imaging versus Drawing on Student Learning in Undergraduate Biodiversity Labs

    ERIC Educational Resources Information Center

    Basey, John M.; Maines, Anastasia P.; Francis, Clinton D.; Melbourne, Brett

    2014-01-01

    We examined the effects of documenting observations with digital imaging versus hand drawing in inquiry-based college biodiversity labs. Plant biodiversity labs were divided into two treatments, digital imaging (N = 221) and hand drawing (N = 238). Graduate-student teaching assistants (N = 24) taught one class in each treatment. Assessments…

  13. Computational ghost imaging using deep learning

    NASA Astrophysics Data System (ADS)

    Shimobaba, Tomoyoshi; Endo, Yutaka; Nishitsuji, Takashi; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Shiraki, Atsushi; Ito, Tomoyoshi

    2018-04-01

    Computational ghost imaging (CGI) is a single-pixel imaging technique that exploits the correlation between known random patterns and the measured intensity of light transmitted (or reflected) by an object. Although CGI can obtain two- or three-dimensional images with a single or a few bucket detectors, the quality of the reconstructed images is reduced by noise due to the reconstruction of images from random patterns. In this study, we improve the quality of CGI images using deep learning. A deep neural network is used to automatically learn the features of noise-contaminated CGI images. After training, the network is able to predict low-noise images from new noise-contaminated CGI images.
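
    A minimal sketch of such a learned denoiser is shown below using PyTorch; the layer sizes and architecture are assumptions for illustration, not the network described in the paper.

    ```python
    import torch
    import torch.nn as nn

    class CGIDenoiser(nn.Module):
        """A small convolutional denoiser for noisy CGI reconstructions.
        (Generic illustration; layer sizes are assumptions, not the paper's network.)"""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )

        def forward(self, x):
            return self.net(x)

    # Training would pair noisy CGI reconstructions with clean ground-truth images
    # and minimize, e.g., nn.MSELoss() between the model output and the clean image.
    ```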

  14. Comparison of computer systems and ranking criteria for automatic melanoma detection in dermoscopic images.

    PubMed

    Møllersen, Kajsa; Zortea, Maciel; Schopf, Thomas R; Kirchesch, Herbert; Godtliebsen, Fred

    2017-01-01

    Melanoma is the deadliest form of skin cancer, and early detection is crucial for patient survival. Computer systems can assist in melanoma detection, but are not widespread in clinical practice. In 2016, an open challenge in classification of dermoscopic images of skin lesions was announced. A training set of 900 images with corresponding class labels and semi-automatic/manual segmentation masks was released for the challenge. An independent test set of 379 images, of which 75 were of melanomas, was used to rank the participants. This article demonstrates the impact of ranking criteria, segmentation method and classifier, and highlights the clinical perspective. We compare five different measures for diagnostic accuracy by analysing the resulting ranking of the computer systems in the challenge. Choice of performance measure had great impact on the ranking. Systems that were ranked among the top three for one measure dropped to the bottom half when the performance measure was changed. Nevus Doctor, a computer system previously developed by the authors, was used to participate in the challenge and to investigate the impact of segmentation and classifier. The diagnostic accuracy when using an automatic versus the semi-automatic/manual segmentation is investigated. The unexpectedly small impact of segmentation method suggests that improvements of the automatic segmentation method w.r.t. resemblance to semi-automatic/manual segmentation will not improve diagnostic accuracy substantially. A small set of similar classification algorithms is used to investigate the impact of classifier on the diagnostic accuracy. The variability in diagnostic accuracy for different classifier algorithms was larger than the variability for segmentation methods, and suggests a focus for future investigations. From a clinical perspective, the misclassification of a melanoma as benign has far greater cost than the misclassification of a benign lesion. For computer systems to have clinical impact

  15. Computational Imaging in Demanding Conditions

    DTIC Science & Technology

    2015-11-18

    spatiotemporal domain where such blur is not present.  Detailed Accomplishments:  ● Removing  Atmospheric   Turbulence  via Space-Invariant  Deconvolution:  ○ To...given image sequence distorted by  atmospheric   turbulence . This approach  reduces the space and time-varying deblurring problem to a shift invariant...SUBJECT TERMS Image processing, Computational imaging, turbulence , blur, enhancement 16. SECURITY CLASSIFICATION OF: 17. LIMITATION OF ABSTRACT 18

  16. System Optimization and Iterative Image Reconstruction in Photoacoustic Computed Tomography for Breast Imaging

    NASA Astrophysics Data System (ADS)

    Lou, Yang

    Photoacoustic computed tomography (PACT), also known as optoacoustic tomography (OAT), is an emerging imaging technique that has developed rapidly in recent years. The combination of the high optical contrast and the high acoustic resolution of this hybrid imaging technique makes it a promising candidate for human breast imaging, where conventional imaging techniques including X-ray mammography, B-mode ultrasound, and MRI suffer from low contrast, low specificity for certain breast types, and additional risks related to ionizing radiation. Though significant work has been done to push the frontier of PACT breast imaging, it is still challenging to successfully build a PACT breast imaging system and apply it to wide clinical use for various practical reasons. First, computer simulation studies are often conducted to guide imaging system designs, but the numerical phantoms employed in most previous works consist of simple geometries and do not reflect the true anatomical structures within the breast. Therefore the effectiveness of such simulation-guided PACT systems in clinical experiments will be compromised. Second, it is challenging to design a system to simultaneously illuminate the entire breast with limited laser power. Some heuristic designs have been proposed in which the illumination is non-stationary during the imaging procedure, but the impact of employing such a design has not been carefully studied. Third, current PACT imaging systems are often optimized with respect to physical measures such as resolution or signal-to-noise ratio (SNR). It would be desirable to establish an assessment framework in which the detectability of breast tumors can be directly quantified, so that the images produced by such optimized imaging systems are not only visually appealing but also most informative for the tumor detection task. Fourth, when imaging a large three-dimensional (3D) object such as the breast, iterative reconstruction algorithms are often utilized to

  17. Computers in Public Schools: Changing the Image with Image Processing.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  18. Multimedia Image Technology and Computer Aided Manufacturing Engineering Analysis

    NASA Astrophysics Data System (ADS)

    Nan, Song

    2018-03-01

    Since the reform and opening up, science and technology in China have developed continuously, and more and more advanced technologies have emerged under the trend of diversification. Multimedia image technology, for example, has had a significant and positive impact on computer-aided manufacturing engineering in China, both in the functions it provides and in how those functions are applied. Therefore, this paper starts from the concept of multimedia image technology and analyzes the application of multimedia image technology in computer-aided manufacturing engineering.

  19. Identifying Student Use of Ball-and-Stick Images versus Electrostatic Potential Map Images via Eye Tracking

    ERIC Educational Resources Information Center

    Williamson, Vickie M.; Hegarty, Mary; Deslongchamps, Ghislain; Williamson, Kenneth C., III

    2013-01-01

    This pilot study examined students' use of ball-and-stick images versus electrostatic potential maps when asked questions about electron density, positive charge, proton attack, and hydroxide attack with six different molecules (two alcohols, two carboxylic acids, and two hydroxycarboxylic acids). Students' viewing of these dual images…

  20. Computer-assisted versus conventional free fibula flap technique for craniofacial reconstruction: an outcomes comparison.

    PubMed

    Seruya, Mitchel; Fisher, Mark; Rodriguez, Eduardo D

    2013-11-01

    There has been rising interest in computer-aided design/computer-aided manufacturing for preoperative planning and execution of osseous free flap reconstruction. The purpose of this study was to compare outcomes between computer-assisted and conventional fibula free flap techniques for craniofacial reconstruction. A two-center, retrospective review was carried out on patients who underwent fibula free flap surgery for craniofacial reconstruction from 2003 to 2012. Patients were categorized by the type of reconstructive technique: conventional (between 2003 and 2009) or computer-aided design/computer-aided manufacturing (from 2010 to 2012). Demographics, surgical factors, and perioperative and long-term outcomes were compared. A total of 68 patients underwent microsurgical craniofacial reconstruction: 58 conventional and 10 computer-aided design and manufacturing fibula free flaps. By demographics, patients undergoing the computer-aided design/computer-aided manufacturing method were significantly older and had a higher rate of radiotherapy exposure compared with conventional patients. Intraoperatively, the median number of osteotomies was significantly higher (2.0 versus 1.0, p=0.002) and the median ischemia time was significantly shorter (120 minutes versus 170 minutes, p=0.004) for the computer-aided design/computer-aided manufacturing technique compared with conventional techniques; operative times were shorter for patients undergoing the computer-aided design/computer-aided manufacturing technique, although this did not reach statistical significance. Perioperative and long-term outcomes were equivalent for the two groups, notably, hospital length of stay, recipient-site infection, partial and total flap loss, and rate of soft-tissue and bony tissue revisions. Microsurgical craniofacial reconstruction using a computer-assisted fibula flap technique yielded significantly shorter ischemia times amidst a higher number of osteotomies compared with conventional

  1. [A computer-aided image diagnosis and study system].

    PubMed

    Li, Zhangyong; Xie, Zhengxiang

    2004-08-01

    The revolution in information processing, particularly the digitization of medicine, has changed medical study, work and management. This paper reports a method to design a system for computer-aided image diagnosis and study. Combining ideas from graph-text systems and the picture archiving and communication system (PACS), the system was realized and used for "prescription through computer", "managing images" and "reading images under computer and helping the diagnosis". Typical examples were also constructed in a database and used to teach beginners. The system was developed with visual development tools based on object-oriented programming (OOP) and was put into operation on the Windows 9X platform. The system possesses a friendly man-machine interface.

  2. Comparison of two computer codes for crack growth analysis: NASCRAC Versus NASA/FLAGRO

    NASA Technical Reports Server (NTRS)

    Stallworth, R.; Meyers, C. A.; Stinson, H. C.

    1989-01-01

    Results are presented from the comparison study of two computer codes for crack growth analysis - NASCRAC and NASA/FLAGRO. The two computer codes gave compatible conservative results when the part through crack analysis solutions were analyzed versus experimental test data. Results showed good correlation between the codes for the through crack at a lug solution. For the through crack at a lug solution, NASA/FLAGRO gave the most conservative results.

  3. Image matrix processor for fast multi-dimensional computations

    DOEpatents

    Roberson, G.P.; Skeate, M.F.

    1996-10-15

    An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.

  4. Image matrix processor for fast multi-dimensional computations

    DOEpatents

    Roberson, George P.; Skeate, Michael F.

    1996-01-01

    An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.

  5. DWI-ASPECTS (Diffusion-Weighted Imaging-Alberta Stroke Program Early Computed Tomography Scores) and DWI-FLAIR (Diffusion-Weighted Imaging-Fluid Attenuated Inversion Recovery) Mismatch in Thrombectomy Candidates: An Intrarater and Interrater Agreement Study.

    PubMed

    Fahed, Robert; Lecler, Augustin; Sabben, Candice; Khoury, Naim; Ducroux, Célina; Chalumeau, Vanessa; Botta, Daniele; Kalsoum, Erwah; Boisseau, William; Duron, Loïc; Cabral, Dominique; Koskas, Patricia; Benaïssa, Azzedine; Koulakian, Hasmik; Obadia, Michael; Maïer, Benjamin; Weisenburger-Lile, David; Lapergue, Bertrand; Wang, Adrien; Redjem, Hocine; Ciccio, Gabriele; Smajda, Stanislas; Desilles, Jean-Philippe; Mazighi, Mikaël; Ben Maacha, Malek; Akkari, Inès; Zuber, Kevin; Blanc, Raphaël; Raymond, Jean; Piotin, Michel

    2018-01-01

    We aimed to study the intrarater and interrater agreement of clinicians attributing DWI-ASPECTS (Diffusion-Weighted Imaging-Alberta Stroke Program Early Computed Tomography Scores) and DWI-FLAIR (Diffusion-Weighted Imaging-Fluid Attenuated Inversion Recovery) mismatch in patients with acute ischemic stroke referred for mechanical thrombectomy. Eighteen raters independently scored anonymized magnetic resonance imaging scans of 30 participants from a multicentre thrombectomy trial, in 2 different reading sessions. Agreement was measured using Fleiss κ and Cohen κ statistics. Interrater agreement for DWI-ASPECTS was slight (κ=0.17 [0.14-0.21]). Four raters (22.2%) had a substantial (or higher) intrarater agreement. Dichotomization of the DWI-ASPECTS (0-5 versus 6-10 or 0-6 versus 7-10) increased the interrater agreement to a substantial level (κ=0.62 [0.48-0.75] and 0.68 [0.55-0.79], respectively) and more raters reached a substantial (or higher) intrarater agreement (17/18 raters [94.4%]). Interrater agreement for DWI-FLAIR mismatch was moderate (κ=0.43 [0.33-0.57]); 11 raters (61.1%) reached a substantial (or higher) intrarater agreement. Agreement between clinicians assessing DWI-ASPECTS and DWI-FLAIR mismatch may not be sufficient to make repeatable clinical decisions in mechanical thrombectomy. The dichotomization of the DWI-ASPECTS (0-5 versus 6-10 or 0-6 versus 7-10) improved interrater and intrarater agreement; however, its relevance for patient selection for mechanical thrombectomy needs to be validated in a randomized trial. © 2017 American Heart Association, Inc.
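    The agreement statistics above can be reproduced in outline with standard tooling. The following sketch, which uses synthetic scores rather than the study's data, shows how Cohen's kappa for one rater's two reading sessions changes when DWI-ASPECTS is dichotomized at 0-5 versus 6-10.

```python
# Minimal sketch (not the study's code): Cohen's kappa for one rater's two reading
# sessions, before and after dichotomizing DWI-ASPECTS at 0-5 versus 6-10.
# The scores are synthetic stand-ins, not trial data.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
session_1 = rng.integers(0, 11, size=30)            # first reading of 30 scans (0-10)
drift = rng.integers(-1, 2, size=30)                # small intra-rater variation
session_2 = np.clip(session_1 + drift, 0, 10)       # second reading

kappa_raw = cohen_kappa_score(session_1, session_2)                  # full 0-10 scale
dichotomize = lambda s: (s >= 6).astype(int)                         # 0-5 versus 6-10
kappa_dich = cohen_kappa_score(dichotomize(session_1), dichotomize(session_2))

print(f"kappa, 0-10 scale:   {kappa_raw:.2f}")
print(f"kappa, dichotomized: {kappa_dich:.2f}")
```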

  6. Efficacy of Stent-Retriever Thrombectomy in Magnetic Resonance Imaging Versus Computed Tomographic Perfusion-Selected Patients in SWIFT PRIME Trial (Solitaire FR With the Intention for Thrombectomy as Primary Endovascular Treatment for Acute Ischemic Stroke).

    PubMed

    Menjot de Champfleur, Nicolas; Saver, Jeffrey L; Goyal, Mayank; Jahan, Reza; Diener, Hans-Christoph; Bonafe, Alain; Levy, Elad I; Pereira, Vitor M; Cognard, Christophe; Yavagal, Dileep R; Albers, Gregory W

    2017-06-01

    The majority of patients enrolled in SWIFT PRIME trial (Solitaire FR With the Intention for Thrombectomy as Primary Endovascular Treatment for Acute Ischemic Stroke) had computed tomographic perfusion (CTP) imaging before randomization; 34 patients were randomized after magnetic resonance imaging (MRI). Patients with middle cerebral artery and distal carotid occlusions were randomized to treatment with tPA (tissue-type plasminogen activator) alone or tPA+stentriever thrombectomy. The primary outcome was the distribution of the modified Rankin Scale score at 90 days. Patients with the target mismatch profile for enrollment were identified on MRI and CTP. MRI selection was performed in 34 patients; CTP in 139 patients. Baseline National Institutes of Health Stroke Scale score was 17 in both groups. Target mismatch profile was present in 95% (MRI) versus 83% (CTP). A higher percentage of the MRI group was transferred from an outside hospital (P=0.02), and therefore, the time from stroke onset to randomization was longer in the MRI group (P=0.003). Time from emergency room arrival to randomization did not differ in CTP versus MRI-selected patients. Baseline ischemic core volumes were similar in both groups. Reperfusion rates (>90%/TICI [Thrombolysis in Cerebral Infarction] score 3) did not differ in the stentriever-treated patients in the MRI versus CTP groups. The primary efficacy analysis (90-day mRS score) demonstrated a statistically significant benefit in both subgroups (MRI, P=0.02; CTP, P=0.01). Infarct growth was reduced in the stentriever-treated group in both MRI and CTP groups. Time to randomization was significantly longer in MRI-selected patients; however, site arrival to randomization times were not prolonged, and the benefits of endovascular therapy were similar. URL: http://www.clinicaltrials.gov. Unique identifier: NCT01657461. © 2017 American Heart Association, Inc.

  7. Standard and reduced radiation dose liver CT images: adaptive statistical iterative reconstruction versus model-based iterative reconstruction-comparison of findings and image quality.

    PubMed

    Shuman, William P; Chan, Keith T; Busey, Janet M; Mitsumori, Lee M; Choi, Eunice; Koprowicz, Kent M; Kanal, Kalpana M

    2014-12-01

    To investigate whether reduced radiation dose liver computed tomography (CT) images reconstructed with model-based iterative reconstruction (MBIR) might compromise depiction of clinically relevant findings or might have decreased image quality when compared with clinical standard radiation dose CT images reconstructed with adaptive statistical iterative reconstruction (ASIR). With institutional review board approval, informed consent, and HIPAA compliance, 50 patients (39 men, 11 women) were prospectively included who underwent liver CT. After a portal venous pass with ASIR images, a 60% reduced radiation dose pass was added with MBIR images. One reviewer scored ASIR image quality and marked findings. Two additional independent reviewers noted whether marked findings were present on MBIR images and assigned scores for relative conspicuity, spatial resolution, image noise, and image quality. Liver and aorta Hounsfield units and image noise were measured. Volume CT dose index and size-specific dose estimate (SSDE) were recorded. Qualitative reviewer scores were summarized. Formal statistical inference for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), volume CT dose index, and SSDE was made (paired t tests), with Bonferroni adjustment. Two independent reviewers identified all 136 ASIR image findings (n = 272) on MBIR images, scoring them as equal or better for conspicuity, spatial resolution, and image noise in 94.1% (256 of 272), 96.7% (263 of 272), and 99.3% (270 of 272), respectively. In 50 image sets, two reviewers

  8. Fast computational scheme of image compression for 32-bit microprocessors

    NASA Technical Reports Server (NTRS)

    Kasperovich, Leonid

    1994-01-01

    This paper presents a new computational scheme of image compression based on the discrete cosine transform (DCT), which underlies the JPEG and MPEG International Standards. The algorithm for the 2-d DCT computation uses integer operations (register shifts and additions/subtractions only); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as a part of the Mars-96 International Space Project. It is shown that a fast software solution for 32-bit microprocessors may compete with DCT-based image compression hardware.
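    For context, the following sketch illustrates the blockwise 2-D DCT and coefficient truncation that underlie DCT-based compression. It is a floating-point illustration in Python and does not reproduce the paper's integer-only (shift-and-add) algorithm or its complexity figure.

```python
# Illustrative only: blockwise 2-D DCT with coarse coefficient truncation.
import numpy as np
from scipy.fft import dctn, idctn

def dct_compress(image, block=8, keep=4):
    """Zero all but the keep x keep lowest-frequency DCT coefficients of each block."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            coeffs = dctn(image[y:y + block, x:x + block].astype(float), norm="ortho")
            coeffs[keep:, :] = 0.0
            coeffs[:, keep:] = 0.0
            out[y:y + block, x:x + block] = idctn(coeffs, norm="ortho")
    return out

img = np.tile(np.arange(64, dtype=float), (64, 1))   # synthetic 64x64 ramp image
reconstructed = dct_compress(img)
print("mean absolute error:", np.abs(img - reconstructed).mean())
```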

  9. Computational multispectral video imaging [Invited].

    PubMed

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
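    The spectral recovery step described above amounts to a regularized linear inversion of a calibrated sensing matrix. The sketch below shows that idea in a toy setting with an assumed random calibration matrix and Tikhonov (ridge) regularization; it is not the authors' reconstruction pipeline.

```python
# Assumed illustration of regularization-based spectral recovery: y = A @ x + noise,
# where A is a calibration matrix mapping spectral bands to coded sensor readings.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_bands = 64, 30                              # hypothetical patch size / band count
A = rng.normal(size=(n_pixels, n_bands))                # stands in for a measured calibration
x_true = np.exp(-0.5 * ((np.arange(n_bands) - 12) / 3.0) ** 2)   # synthetic spectrum
y = A @ x_true + 0.01 * rng.normal(size=n_pixels)       # noisy coded measurement

lam = 0.1                                               # regularization strength (tuned in practice)
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_bands), A.T @ y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```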

  10. Geometric Computation of Human Gyrification Indexes from Magnetic Resonance Images

    DTIC Science & Technology

    2009-04-01

    Geometric Computation of Human Gyrification Indexes from Magnetic Resonance Images, by Shu Su, Tonya White, Marcus Schmidt, Chiu-Yen Kao, and Guillermo… [Only report documentation page fields were captured for this record; no abstract is available.]

  11. Prospective Computer Teachers' Mental Images about the Concepts of "School" and "Computer Teacher"

    ERIC Educational Resources Information Center

    Saban, Aslihan

    2011-01-01

    In this phenomenological study, prospective computer teachers' mental images related to the concepts of "school" and "computer teacher" were examined through metaphors. Participants were all the 45 seniors majoring in the Department of Computer and Instructional Technologies at Selcuk University, Ahmet Kelesoglu Faculty of…

  12. Unconventional methods of imaging: computational microscopy and compact implementations

    NASA Astrophysics Data System (ADS)

    McLeod, Euan; Ozcan, Aydogan

    2016-07-01

    In the past two decades or so, there has been a renaissance of optical microscopy research and development. Much work has been done in an effort to improve the resolution and sensitivity of microscopes, while at the same time to introduce new imaging modalities, and make existing imaging systems more efficient and more accessible. In this review, we look at two particular aspects of this renaissance: computational imaging techniques and compact imaging platforms. In many cases, these aspects go hand-in-hand because the use of computational techniques can simplify the demands placed on optical hardware in obtaining a desired imaging performance. In the first main section, we cover lens-based computational imaging, in particular, light-field microscopy, structured illumination, synthetic aperture, Fourier ptychography, and compressive imaging. In the second main section, we review lensfree holographic on-chip imaging, including how images are reconstructed, phase recovery techniques, and integration with smart substrates for more advanced imaging tasks. In the third main section we describe how these and other microscopy modalities have been implemented in compact and field-portable devices, often based around smartphones. Finally, we conclude with some comments about opportunities and demand for better results, and where we believe the field is heading.

  13. Separation of electron and hole dynamics in the semimetal LaSb

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, F.; Xu, J.; Botana, A. S.

    We report investigations on the magnetotransport in LaSb, which exhibits extremely large magnetoresistance (XMR). Foremost, we demonstrate that the resistivity plateau can be explained without invoking topological protection. We then determine the Fermi surface from Shubnikov–de Haas (SdH) quantum oscillation measurements and find good agreement with the bulk Fermi pockets derived from first-principles calculations. Using a semiclassical theory and the experimentally determined Fermi pocket anisotropies, we quantitatively describe the orbital magnetoresistance, including its angle dependence. We show that the origin of XMR in LaSb lies in its high mobility with diminishing Hall effect, where the high mobility leads to a strong magnetic-field dependence of the longitudinal magnetoconductance. Unlike a one-band material, when a system has two or more bands (Fermi pockets) with electron and hole carriers, the added conductance arising from the Hall effect is reduced, hence revealing the latent XMR enabled by the longitudinal magnetoconductance. With diminishing Hall effect, the magnetoresistivity is simply the inverse of the longitudinal magnetoconductivity, enabling the differentiation of the electron and hole contributions to the XMR, which varies with the strength and orientation of the magnetic field. This work demonstrates a convenient way to separate the dynamics of the charge carriers and to uncover the origin of XMR in multiband materials with anisotropic Fermi surfaces. Our approach can be readily applied to other XMR materials.
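    For reference, the textbook semiclassical two-band expressions behind this kind of analysis are reproduced below; the notation is the standard isotropic form and does not reproduce the paper's anisotropic formulation.

```latex
\sigma_{xx}(B) = e\left[\frac{n_e\mu_e}{1+\mu_e^2B^2} + \frac{n_h\mu_h}{1+\mu_h^2B^2}\right],
\qquad
\sigma_{xy}(B) = eB\left[\frac{n_h\mu_h^2}{1+\mu_h^2B^2} - \frac{n_e\mu_e^2}{1+\mu_e^2B^2}\right]

\rho_{xx} = \frac{\sigma_{xx}}{\sigma_{xx}^2 + \sigma_{xy}^2}
\;\longrightarrow\; \frac{1}{\sigma_{xx}} \quad (\sigma_{xy}\to 0),
\qquad
\mathrm{MR} = \frac{\rho_{xx}(B)-\rho_{xx}(0)}{\rho_{xx}(0)} \approx \mu_e\mu_h B^2 \quad (n_e\approx n_h)
```

    In the compensated limit the Hall contribution is suppressed, so the resistivity is essentially the inverse of the longitudinal magnetoconductivity, which is the regime the abstract invokes for LaSb.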

  14. GRAFT-VERSUS-HOST DISEASE PANUVEITIS AND BILATERAL SEROUS DETACHMENTS: MULTIMODAL IMAGING ANALYSIS.

    PubMed

    Jung, Jesse J; Chen, Michael H; Rofagha, Soraya; Lee, Scott S

    2017-01-01

    To report the multimodal imaging findings and follow-up of a case of graft-versus-host disease-induced bilateral panuveitis and serous retinal detachments after allogenic bone marrow transplant for acute myeloid leukemia. A 75-year-old black man presented with acute decreased vision in both eyes for 1 week. Clinical examination and multimodal imaging, including spectral domain optical coherence tomography, fundus autofluorescence, fluorescein angiography, and swept-source optical coherence tomography angiography (Investigational Device; Carl Zeiss Meditec Inc) were performed. Clinical examination of the patient revealed anterior and posterior inflammation and bilateral serous retinal detachments. Ultra-widefield fundus autofluorescence demonstrated hyperautofluorescence secondary to subretinal fluid; and fluorescein angiography revealed multiple areas of punctate hyperfluorescence, leakage, and staining of the optic discs. Spectral domain and enhanced depth imaging optical coherence tomography demonstrated subretinal fluid, a thickened, undulating retinal pigment epithelium layer, and a thickened choroid in both eyes. En-face swept-source optical coherence tomography angiography did not show any retinal vascular abnormalities but did demonstrate patchy areas of decreased choriocapillaris flow. An extensive systemic infectious and malignancy workup was negative and the patient was treated with high-dose oral prednisone immunosuppression. Subsequent 6-month follow-up demonstrated complete resolution of the inflammation and bilateral serous detachments after completion of the prednisone taper over a 3-month period. Graft-versus-host disease panuveitis and bilateral serous retinal detachments are rare complications of allogenic bone marrow transplant for acute myeloid leukemia and can be diagnosed with clinical and multimodal imaging analysis. This form of autoimmune inflammation may occur after the recovery of T-cell activity within the donor graft targeting the host

  15. Spatial image modulation to improve performance of computed tomography imaging spectrometer

    NASA Technical Reports Server (NTRS)

    Bearman, Gregory H. (Inventor); Wilson, Daniel W. (Inventor); Johnson, William R. (Inventor)

    2010-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having patterns for imposing spatial structure are provided. The pattern may be imposed either directly on the object scene being imaged or at the field stop aperture. The use of the pattern improves the accuracy of the captured spatial and spectral information.

  16. Review methods for image segmentation from computed tomography images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik

    Image segmentation is a challenging process when seeking accuracy, automation and robustness, especially in medical images. There exist many segmentation methods that can be applied to medical images, but not all methods are suitable. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of methods for segmentation using Computed Tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of the methods, their strengths and the problems incurred will be defined and explained. It is necessary to know the suitable segmentation method in order to obtain accurate segmentation. This paper can be a guide for researchers in choosing a suitable segmentation method, especially for segmenting images from CT scans.

  17. Comparison and assessment of semi-automatic image segmentation in computed tomography scans for image-guided kidney surgery.

    PubMed

    Glisson, Courtenay L; Altamar, Hernan O; Herrell, S Duke; Clark, Peter; Galloway, Robert L

    2011-11-01

    Image segmentation is integral to implementing intraoperative guidance for kidney tumor resection. Results seen in computed tomography (CT) data are affected by target organ physiology as well as by the segmentation algorithm used. This work studies variables involved in using level set methods found in the Insight Toolkit to segment kidneys from CT scans and applies the results to an image guidance setting. A composite algorithm drawing on the strengths of multiple level set approaches was built using the Insight Toolkit. This algorithm requires image contrast state and seed points to be identified as input, and functions independently thereafter, selecting and altering method and variable choice as needed. Semi-automatic results were compared to expert hand segmentation results directly and by the use of the resultant surfaces for registration of intraoperative data. Direct comparison using the Dice metric showed average agreement of 0.93 between semi-automatic and hand segmentation results. Use of the segmented surfaces in closest point registration of intraoperative laser range scan data yielded average closest point distances of approximately 1 mm. Application of both inverse registration transforms from the previous step to all hand segmented image space points revealed that the distance variability introduced by registering to the semi-automatically segmented surface versus the hand segmented surface was typically less than 3 mm both near the tumor target and at distal points, including subsurface points. Use of the algorithm shortened user interaction time and provided results which were comparable to the gold standard of hand segmentation. Further, the use of the algorithm's resultant surfaces in image registration provided comparable transformations to surfaces produced by hand segmentation. These data support the applicability and utility of such an algorithm as part of an image guidance workflow.
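    The Dice metric used above has a direct implementation; the sketch below, with toy masks rather than the study's kidney segmentations, shows how the overlap score is computed.

```python
# Minimal Dice similarity sketch (illustrative; not the study's implementation).
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total else 1.0

auto = np.zeros((100, 100), dtype=bool); auto[20:80, 25:75] = True   # toy "automatic" mask
hand = np.zeros((100, 100), dtype=bool); hand[22:82, 27:77] = True   # toy "hand" mask
print(f"Dice = {dice(auto, hand):.3f}")
```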

  18. Comprehensive Computational Pathological Image Analysis Predicts Lung Cancer Prognosis.

    PubMed

    Luo, Xin; Zang, Xiao; Yang, Lin; Huang, Junzhou; Liang, Faming; Rodriguez-Canales, Jaime; Wistuba, Ignacio I; Gazdar, Adi; Xie, Yang; Xiao, Guanghua

    2017-03-01

    Pathological examination of histopathological slides is a routine clinical procedure for lung cancer diagnosis and prognosis. Although the classification of lung cancer has been updated to become more specific, only a small subset of the total morphological features are taken into consideration. The vast majority of the detailed morphological features of tumor tissues, particularly tumor cells' surrounding microenvironment, are not fully analyzed. The heterogeneity of tumor cells and close interactions between tumor cells and their microenvironments are closely related to tumor development and progression. The goal of this study is to develop morphological feature-based prediction models for the prognosis of patients with lung cancer. We developed objective and quantitative computational approaches to analyze the morphological features of pathological images for patients with NSCLC. Tissue pathological images were analyzed for 523 patients with adenocarcinoma (ADC) and 511 patients with squamous cell carcinoma (SCC) from The Cancer Genome Atlas lung cancer cohorts. The features extracted from the pathological images were used to develop statistical models that predict patients' survival outcomes in ADC and SCC, respectively. We extracted 943 morphological features from pathological images of hematoxylin and eosin-stained tissue and identified morphological features that are significantly associated with prognosis in ADC and SCC, respectively. Statistical models based on these extracted features stratified NSCLC patients into high-risk and low-risk groups. The models were developed from training sets and validated in independent testing sets: a predicted high-risk group versus a predicted low-risk group (for patients with ADC: hazard ratio = 2.34, 95% confidence interval: 1.12-4.91, p = 0.024; for patients with SCC: hazard ratio = 2.22, 95% confidence interval: 1.15-4.27, p = 0.017) after adjustment for age, sex, smoking status, and pathologic tumor stage. The

  19. Texture classification of lung computed tomography images

    NASA Astrophysics Data System (ADS)

    Pheng, Hang See; Shamsuddin, Siti M.

    2013-03-01

    The development of algorithms in computer-aided diagnosis (CAD) schemes is growing rapidly to assist the radiologist in medical image interpretation. Texture analysis of computed tomography (CT) scans is an important preliminary stage in computerized detection and classification systems for lung cancer. Among the different types of image feature analysis, Haralick texture with a variety of statistical measures has been widely used for image texture description. The extraction of texture feature values is essential for a CAD system, especially for classifying normal and abnormal tissue on cross-sectional CT images. This paper compares experimental results using texture extraction and different machine learning methods for the classification of normal and abnormal tissues in lung CT images. The machine learning methods involved in this assessment are the Artificial Immune Recognition System (AIRS), Naive Bayes, Decision Tree (J48) and a Backpropagation Neural Network. AIRS is found to provide high accuracy (99.2%) and sensitivity (98.0%) in the assessment. For experiments and testing purposes, publicly available datasets in the Reference Image Database to Evaluate Therapy Response (RIDER) are used as study cases.
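    As a rough illustration of the texture-classification pipeline described above, the sketch below extracts Haralick-style GLCM features and trains a simple classifier on synthetic patches. AIRS has no scikit-learn implementation, so a decision tree stands in for the J48 comparison, and none of this touches the RIDER data.

```python
# Assumed illustration of GLCM texture features feeding a classifier (synthetic data).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.tree import DecisionTreeClassifier

def texture_features(patch):
    """Contrast, homogeneity, energy and correlation from a 4-direction GLCM."""
    glcm = graycomatrix(patch, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")]

rng = np.random.default_rng(1)
smooth = [rng.integers(100, 130, (32, 32), dtype=np.uint8) for _ in range(20)]  # stand-in "normal"
noisy = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(20)]     # stand-in "abnormal"
X = [texture_features(p) for p in smooth + noisy]
y = [0] * 20 + [1] * 20

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```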

  20. Computer-aided classification of lung nodules on computed tomography images via deep learning technique

    PubMed Central

    Hua, Kai-Lung; Hsu, Che-Hao; Hidayati, Shintami Chusnul; Cheng, Wen-Huang; Chen, Yu-Jen

    2015-01-01

    Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scan is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning of classification performance in a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of an automatic exploitation feature and tuning of performance in a seamless fashion. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain. PMID:26346558

  1. Computer-aided classification of lung nodules on computed tomography images via deep learning technique.

    PubMed

    Hua, Kai-Lung; Hsu, Che-Hao; Hidayati, Shintami Chusnul; Cheng, Wen-Huang; Chen, Yu-Jen

    2015-01-01

    Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scan is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning of classification performance in a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of an automatic exploitation feature and tuning of performance in a seamless fashion. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain.

  2. Synchrotron Imaging Computations on the Grid without the Computing Element

    NASA Astrophysics Data System (ADS)

    Curri, A.; Pugliese, R.; Borghes, R.; Kourousias, G.

    2011-12-01

    Besides the heavy use of the Grid in the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through a novel solution that we present in this work. In the traditional Grid Computing paradigm the computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on is the Instrument Element. In general it is used to Grid-enable instrumentation, and it can be seen as a neighbouring concept to that of the traditional Control Systems. As a further extension we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces a Control System that manages a series of computationally demanding Scientific Imaging tasks in an online manner. The instrument control in Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications that we present are for a beamline working in medical imaging. The solution resulted in a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not have been easily satisfied by our Grid's middleware (gLite) due to the various latencies that often occurred during the job submission and queuing phases. Moreover the required deployment of a set of TANGO devices could not have been done in a standard gLite WN. Besides the avoidance of certain core Grid components, the Grid Security infrastructure has been utilised in the final solution.

  3. Images as drivers of progress in cardiac computational modelling

    PubMed Central

    Lamata, Pablo; Casero, RamĂłn; Carapella, Valentina; Niederer, Steve A.; Bishop, Martin J.; Schneider, JĂĽrgen E.; Kohl, Peter; Grau, Vicente

    2014-01-01

    Computational models have become a fundamental tool in cardiac research. Models are evolving to cover multiple scales and physical mechanisms. They are moving towards mechanistic descriptions of personalised structure and function, including effects of natural variability. These developments are underpinned to a large extent by advances in imaging technologies. This article reviews how novel imaging technologies, or the innovative use and extension of established ones, integrate with computational models and drive novel insights into cardiac biophysics. In terms of structural characterization, we discuss how imaging is allowing a wide range of scales to be considered, from cellular levels to whole organs. We analyse how the evolution from structural to functional imaging is opening new avenues for computational models, and in this respect we review methods for measurement of electrical activity, mechanics and flow. Finally, we consider ways in which combined imaging and modelling research is likely to continue advancing cardiac research, and identify some of the main challenges that remain to be solved. PMID:25117497

  4. Texture functions in image analysis: A computationally efficient solution

    NASA Technical Reports Server (NTRS)

    Cox, S. C.; Rose, J. F.

    1983-01-01

    A computationally efficient means for calculating texture measurements from digital images by use of the co-occurrence technique is presented. The calculation of the statistical descriptors of image texture and a solution that circumvents the need for calculating and storing a co-occurrence matrix are discussed. The results show that existing efficient algorithms for calculating sums, sums of squares, and cross products can be used to compute complex co-occurrence relationships directly from the digital image input.
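    The central idea, computing co-occurrence descriptors from running sums rather than from a stored matrix, can be sketched as follows. The correlation descriptor is used as one example; this is an assumed illustration, not the paper's implementation.

```python
# Assumed sketch: the co-occurrence correlation descriptor from sums, sums of squares
# and cross products of offset pixel pairs, with no co-occurrence matrix stored.
import numpy as np

def cooccurrence_correlation(image, dy=0, dx=1):
    h, w = image.shape
    a = image[0:h - dy, 0:w - dx].astype(float).ravel()   # reference pixels
    b = image[dy:h, dx:w].astype(float).ravel()           # offset neighbours
    n = a.size
    sa, sb = a.sum(), b.sum()
    saa, sbb, sab = (a * a).sum(), (b * b).sum(), (a * b).sum()
    cov = sab / n - (sa / n) * (sb / n)
    var_a = saa / n - (sa / n) ** 2
    var_b = sbb / n - (sb / n) ** 2
    return cov / np.sqrt(var_a * var_b)

img = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
print("co-occurrence correlation, offset (0,1):", round(cooccurrence_correlation(img), 4))
```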

  5. Image-guided system versus manual marking for toric intraocular lens alignment in cataract surgery.

    PubMed

    Webers, Valentijn S C; Bauer, Noel J C; Visser, Nienke; Berendschot, Tos T J M; van den Biggelaar, Frank J H M; Nuijts, Rudy M M A

    2017-06-01

    To compare the accuracy of toric intraocular lens (IOL) alignment using the Verion Image-Guided System versus a conventional manual ink-marking procedure. University Eye Clinic Maastricht, Maastricht, the Netherlands. Prospective randomized clinical trial. Eyes with regular corneal astigmatism of at least 1.25 diopters (D) that required cataract surgery and toric IOL implantation (Acrysof SN6AT3-T9) were randomly assigned to the image-guided group or the manual-marking group. The primary outcome was the alignment of the toric IOL based on preoperative images and images taken immediately after surgery. Secondary outcome measures were residual astigmatism, uncorrected distance visual acuity (UDVA), and complications. The study enrolled 36 eyes (24 patients). The mean toric IOL misalignment was significantly less in the image-guided group than in the manual group 1 hour (1.3 degrees ± 1.6 [SD] versus 2.8 ± 1.8 degrees; P = .02) and 3 months (1.7 ± 1.5 degrees versus 3.1 ± 2.1 degrees; P < .05) postoperatively. The mean residual refractive cylinder was -0.36 ± 0.32 D and -0.47 ± 0.28 D in the image-guided group and manual group, respectively (P > .05). The mean UDVA was 0.03 ± 0.10 logarithm of minimum angle of resolution (logMAR) and 0.04 ± 0.09 logMAR, respectively (both P > .05). No intraoperative complications occurred during any surgery. The IOL misalignment was significantly less with digital marking than with manual marking; this did not result in a better UDVA or lower residual refractive astigmatism. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  6. Social computing for image matching

    PubMed Central

    Rivas, Alberto; Sánchez-Torres, Ramiro; Rodríguez, Sara

    2018-01-01

    One of the main technological trends in the last five years is mass data analysis. This trend is due in part to the emergence of concepts such as social networks, which generate a large volume of data that can provide added value through their analysis. This article is focused on a business and employment-oriented social network. More specifically, it focuses on the analysis of information provided by different users in image form. The images are analyzed to detect whether other existing users have posted or talked about the same image, even if the image has undergone some type of modification such as watermarks or color filters. This makes it possible to establish new connections among unknown users by detecting what they are posting or whether they are talking about the same images. The proposed solution consists of an image matching algorithm, which is based on the rapid calculation and comparison of hashes. However, there is a computationally expensive component responsible for undoing possible image transformations. As a result, the image matching process is supported by a distributed forecasting system that enables or disables nodes to serve all the possible requests. The proposed system has shown promising results for matching modified images, especially when compared with other existing systems. PMID:29813082
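    The hash-matching idea can be sketched with a simple average hash compared by Hamming distance. The article's actual hash function and its transformation-handling stage are not specified here, so the code below is only an assumed illustration.

```python
# Assumed illustration: average hash + Hamming distance for near-duplicate detection.
import numpy as np
from PIL import Image

def average_hash(img: Image.Image, size: int = 8) -> np.ndarray:
    """Downscale to a size x size grayscale patch and threshold at its mean."""
    small = np.asarray(img.convert("L").resize((size, size), Image.BILINEAR), dtype=float)
    return (small > small.mean()).ravel()

def hamming(h1: np.ndarray, h2: np.ndarray) -> int:
    return int(np.count_nonzero(h1 != h2))

# Hypothetical usage: two posts count as "the same image" below some distance threshold.
# img_a, img_b = Image.open("post_a.jpg"), Image.open("post_b_filtered.jpg")
# print(hamming(average_hash(img_a), average_hash(img_b)) <= 10)
```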

  7. Female Students' Experiences of Computer Technology in Single- versus Mixed-Gender School Settings

    ERIC Educational Resources Information Center

    Burke, Lee-Ann; Murphy, Elizabeth

    2006-01-01

    This study explores how female students compare learning computer technology in a single- versus a mixed- gender school setting. Twelve females participated, all of whom were enrolled in a grade 12 course in Communications' Technology. Data collection included a questionnaire, a semi-structured interview and focus groups. Participants described…

  8. FAST: framework for heterogeneous medical image computing and visualization.

    PubMed

    Smistad, Erik; Bozorgi, Mohammadmehdi; Lindseth, Frank

    2015-11-01

    Computer systems are becoming increasingly heterogeneous in the sense that they consist of different processors, such as multi-core CPUs and graphic processing units. As the amount of medical image data increases, it is crucial to exploit the computational power of these processors. However, this is currently difficult due to several factors, such as driver errors, processor differences, and the need for low-level memory handling. This paper presents a novel FrAmework for heterogeneouS medical image compuTing and visualization (FAST). The framework aims to make it easier to simultaneously process and visualize medical images efficiently on heterogeneous systems. FAST uses common image processing programming paradigms and hides the details of memory handling from the user, while enabling the use of all processors and cores on a system. The framework is open-source, cross-platform and available online. Code examples and performance measurements are presented to show the simplicity and efficiency of FAST. The results are compared to the insight toolkit (ITK) and the visualization toolkit (VTK) and show that the presented framework is faster with up to 20 times speedup on several common medical imaging algorithms. FAST enables efficient medical image computing and visualization on heterogeneous systems. Code examples and performance evaluations have demonstrated that the toolkit is both easy to use and performs better than existing frameworks, such as ITK and VTK.

  9. Rib Radiography versus Chest Computed Tomography in the Diagnosis of Rib Fractures.

    PubMed

    Sano, Atsushi

    2018-05-01

    The accurate diagnosis of rib fractures is important in chest trauma. Diagnostic images following chest trauma are usually obtained via chest X-ray, chest computed tomography, or rib radiography. This study evaluated the diagnostic characteristics of rib radiography and chest computed tomography. Seventy-five rib fracture patients who underwent both chest computed tomography and rib radiography between April 2008 and December 2013 were included. Rib radiographs, centered on the site of pain, were taken from two directions. Chest computed tomography was performed using a 16-row multidetector scanner with 5-mm slice-pitch without overlap, and axial images were visualized in a bone window. In total, 217 rib fractures were diagnosed in 75 patients. Rib radiography missed 43 rib fractures in 24 patients. The causes were overlap with organs in 15 cases, trivial fractures in 21 cases, and injury outside the imaging range in 7 cases. Left lower rib fractures were often missed due to overlap with the heart, while middle and lower rib fractures were frequently not diagnosed due to overlap with abdominal organs. Computed tomography missed 21 rib fractures in 17 patients. The causes were horizontal fractures in 10 cases, trivial fractures in 9 cases, and insufficient breath holding in 1 case. In rib radiography, overlap with organs and fractures outside the imaging range were characteristic reasons for missed diagnoses. In chest computed tomography, horizontal rib fractures and insufficient breath holding were often responsible. We should take these challenges into account when diagnosing rib fractures. Georg Thieme Verlag KG Stuttgart · New York.

  10. Tse computers. [Chinese pictograph character binary image processor design for high speed applications

    NASA Technical Reports Server (NTRS)

    Strong, J. P., III

    1973-01-01

    Tse computers have the potential of operating four or five orders of magnitude faster than present digital computers. The computers of the new design use binary images as their basic computational entity. The word 'tse' is the transliteration of the Chinese word for 'pictograph character.' Tse computers are large collections of devices that perform logical operations on binary images. The operations on binary images are to be performed over the entire image simultaneously.

  11. Computer-Assisted Classification Patterns in Autoimmune Diagnostics: The AIDA Project

    PubMed Central

    Benammar Elgaaied, Amel; Cascio, Donato; Bruno, Salvatore; Ciaccio, Maria Cristina; Cipolla, Marco; Fauci, Alessandro; Morgante, Rossella; Taormina, Vincenzo; Gorgi, Yousr; Marrakchi Triki, Raja; Ben Ahmed, Melika; Louzir, Hechmi; Yalaoui, Sadok; Imene, Sfar; Issaoui, Yassine; Abidi, Ahmed; Ammar, Myriam; Bedhiafi, Walid; Ben Fraj, Oussama; Bouhaha, Rym; Hamdi, Khouloud; Soumaya, Koudhi; Neili, Bilel; Asma, Gati; Lucchese, Mariano; Catanzaro, Maria; Barbara, Vincenza; Brusca, Ignazio; Fregapane, Maria; Amato, Gaetano; Friscia, Giuseppe; Neila, Trai; Turkia, Souayeh; Youssra, Haouami; Rekik, Raja; Bouokez, Hayet; Vasile Simone, Maria; Fauci, Francesco; Raso, Giuseppe

    2016-01-01

    Antinuclear antibodies (ANAs) are significant biomarkers in the diagnosis of autoimmune diseases in humans; detection is done by means of the Indirect ImmunoFluorescence (IIF) method and performed by analyzing patterns and fluorescence intensity. This paper introduces the AIDA Project (autoimmunity: diagnosis assisted by computer), developed in the framework of an Italy-Tunisia cross-border cooperation, and its preliminary results. A database of interpreted IIF images is being collected through the exchange of images and double reporting, and a Gold Standard database containing around 1000 double-reported images has been established. The Gold Standard database is used for optimization of a CAD (Computer Aided Detection) solution and for the assessment of its added value, so that it can be applied along with an Immunologist as a second Reader in the detection of autoantibodies. This CAD system is able to identify the fluorescence intensity and the fluorescence pattern on IIF images. Preliminary results show that the CAD, used as a second Reader, appeared to perform better than Junior Immunologists and hence may significantly improve their efficacy; compared with two Junior Immunologists, the CAD system showed higher Intensity Accuracy (85.5% versus 66.0% and 66.0%), higher Patterns Accuracy (79.3% versus 48.0% and 66.2%), and higher Mean Class Accuracy (79.4% versus 56.7% and 64.2%). PMID:27042658

  12. [Application of computer-assisted 3D imaging simulation for surgery].

    PubMed

    Matsushita, S; Suzuki, N

    1994-03-01

    This article describes trends in the application of various imaging technologies in surgical planning, navigation, and computer-aided surgery. Imaging information is an essential factor for simulation in medicine. It includes three-dimensional (3D) image reconstruction, neurosurgical navigation, the creation of physical models based on 3D imaging data, and so on. These developments depend mostly on 3D imaging techniques, to which recent computer technology has contributed greatly. 3D imaging can offer new, intuitive information to physicians and surgeons, and the method is well suited to mechanical control. By utilizing simulated results, we can obtain more precise surgical orientation, estimation, and operation. For further advancement, automatic and high-speed recognition of medical images is being developed.

  13. Image model: new perspective for image processing and computer vision

    NASA Astrophysics Data System (ADS)

    Ziou, Djemel; Allili, Madjid

    2004-05-01

    We propose a new image model in which the image support and image quantities are modeled using algebraic topology concepts. The image support is viewed as a collection of chains encoding combinations of pixels grouped by dimension and linking different dimensions through the boundary operators. Image quantities are encoded using the notion of a cochain, which associates with pixels of a given dimension values that can be scalar, vector, or tensor depending on the problem considered. This allows algebraic equations to be obtained directly from the physical laws. The coboundary and codual operators, which are generic operations on cochains, allow the classical differential operators to be formulated as applied to field functions and differential forms, in both global and local forms. This image model makes the association between the image support and the image quantities explicit, which results in several advantages: it allows the derivation of efficient algorithms that operate in any dimension and the unification of mathematics and physics to solve classical problems in image processing and computer vision. We show the effectiveness of this model by considering isotropic diffusion.

  14. Low-cost space-varying FIR filter architecture for computational imaging systems

    NASA Astrophysics Data System (ADS)

    Feng, Guotong; Shoaib, Mohammed; Schwartz, Edward L.; Dirk Robinson, M.

    2010-01-01

    Recent research demonstrates the advantage of designing electro-optical imaging systems by jointly optimizing the optical and digital subsystems. The optical systems designed using this joint approach intentionally introduce large and often space-varying optical aberrations that produce blurry optical images. Digital sharpening restores reduced contrast due to these intentional optical aberrations. Computational imaging systems designed in this fashion have several advantages including extended depth-of-field, lower system costs, and improved low-light performance. Currently, most consumer imaging systems lack the necessary computational resources to compensate for these optical systems with large aberrations in the digital processor. Hence, the exploitation of the advantages of the jointly designed computational imaging system requires low-complexity algorithms enabling space-varying sharpening. In this paper, we describe a low-cost algorithmic framework and associated hardware enabling the space-varying finite impulse response (FIR) sharpening required to restore largely aberrated optical images. Our framework leverages the space-varying properties of optical images formed using rotationally-symmetric optical lens elements. First, we describe an approach to leverage the rotational symmetry of the point spread function (PSF) about the optical axis allowing computational savings. Second, we employ a specially designed bank of sharpening filters tuned to the specific radial variation common to optical aberrations. We evaluate the computational efficiency and image quality achieved by using this low-cost space-varying FIR filter architecture.
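    A minimal sketch of the radial filter-bank idea follows: each pixel is sharpened with the FIR kernel assigned to its radial zone around the image centre. The kernels and zoning rule here are illustrative assumptions, not the paper's hardware design.

```python
# Assumed illustration of space-varying FIR sharpening via a radial bank of kernels.
import numpy as np
from scipy.ndimage import convolve

def radial_zone_sharpen(image, kernels):
    """Apply one FIR kernel per concentric zone around the image centre."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    zones = np.minimum((r / r.max() * len(kernels)).astype(int), len(kernels) - 1)
    filtered = [convolve(image.astype(float), k, mode="reflect") for k in kernels]
    out = np.zeros_like(image, dtype=float)
    for z, f in enumerate(filtered):
        out[zones == z] = f[zones == z]
    return out

identity = np.zeros((3, 3)); identity[1, 1] = 1.0
laplacian = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
bank = [identity, identity + 0.5 * laplacian, identity + laplacian]   # stronger sharpening outward
img = np.random.default_rng(2).random((128, 128))
sharpened = radial_zone_sharpen(img, bank)
```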

  15. Computer-aided diagnosis for classifying benign versus malignant thyroid nodules based on ultrasound images: A comparison with radiologist-based assessments.

    PubMed

    Chang, Yongjun; Paul, Anjan Kumar; Kim, Namkug; Baek, Jung Hwan; Choi, Young Jun; Ha, Eun Ju; Lee, Kang Dae; Lee, Hyoung Shin; Shin, DaeSeock; Kim, Nakyoung

    2016-01-01

    To develop a semiautomated computer-aided diagnosis (CAD) system for thyroid cancer using two-dimensional ultrasound images that can be used to yield a second opinion in the clinic to differentiate malignant and benign lesions. A total of 118 ultrasound images that included axial and longitudinal images from patients with biopsy-confirmed malignant (n = 30) and benign (n = 29) nodules were collected. Thyroid CAD software was developed to extract quantitative features from these images based on thyroid nodule segmentation in which adaptive diffusion flow for active contours was used. Various features, including histogram, intensity differences, elliptical fit, gray-level co-occurrence matrixes, and gray-level run-length matrixes, were evaluated for each region imaged. Based on these imaging features, a support vector machine (SVM) classifier was used to differentiate benign and malignant nodules. Leave-one-out cross-validation with sequential forward feature selection was performed to evaluate the overall accuracy of this method. Additionally, analyses with contingency tables and receiver operating characteristic (ROC) curves were performed to compare the performance of CAD with visual inspection by expert radiologists based on established gold standards. Most univariate features for this proposed CAD system attained accuracies that ranged from 78.0% to 83.1%. When optimal SVM parameters that were established using a grid search method with features that radiologists use for visual inspection were employed, the authors could attain rates of accuracy that ranged from 72.9% to 84.7%. Using leave-one-out cross-validation results in a multivariate analysis of various features, the highest accuracy achieved using the proposed CAD system was 98.3%, whereas visual inspection by radiologists reached 94.9% accuracy. To obtain the highest accuracies, "axial ratio" and "max probability" in axial images were most frequently included in the optimal feature sets for the authors
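    The evaluation protocol described above, an SVM with sequential forward feature selection assessed by leave-one-out cross-validation, can be sketched with scikit-learn as follows. The data, feature count, and parameters are synthetic placeholders rather than the study's ultrasound features or tuning grid.

```python
# Assumed illustration of SVM + forward feature selection + leave-one-out evaluation.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(59, 12))                                  # 59 nodules, 12 placeholder features
y = np.r_[np.zeros(29, dtype=int), np.ones(30, dtype=int)]     # benign (0) vs malignant (1)
X[y == 1, :3] += 1.0                                           # make three features weakly informative

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
selector = SequentialFeatureSelector(svm, n_features_to_select=3, direction="forward", cv=5)
X_selected = selector.fit_transform(X, y)

accuracy = cross_val_score(svm, X_selected, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy on selected features: {accuracy:.3f}")
```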

  16. Thermal Infrared Imaging-Based Computational Psychophysiology for Psychometrics.

    PubMed

    Cardone, Daniela; Pinti, Paola; Merla, Arcangelo

    2015-01-01

    Thermal infrared imaging has been proposed as a potential system for the computational assessment of human autonomic nervous activity and psychophysiological states in a contactless and noninvasive way. Through bioheat modeling of facial thermal imagery, several vital signs can be extracted, including localized blood perfusion, cardiac pulse, breath rate, and sudomotor response, since all these parameters impact the cutaneous temperature. The obtained physiological information could then be used to draw inferences about a variety of psychophysiological or affective states, as proved by the increasing number of psychophysiological studies using thermal infrared imaging. This paper presents therefore a review of the principal achievements of thermal infrared imaging in computational physiology with regard to its capability of monitoring psychophysiological activity.

  17. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.

    PubMed

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-04-07

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU-based methods. In the classical GPU-based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. For CPU parallel imaging, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. For GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfers broken, but various optimization strategies are also applied, such as streaming and parallel pipelining. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging by 270 times over a single-core CPU and realizes real-time imaging in that the imaging rate outperforms the raw data generation rate.

  18. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing

    PubMed Central

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-01-01

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU-based methods. In the classical GPU-based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. For CPU parallel imaging, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. For GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfers broken, but various optimization strategies are also applied, such as streaming and parallel pipelining. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging by 270 times over a single-core CPU and realizes real-time imaging in that the imaging rate outperforms the raw data generation rate. PMID:27070606

  19. A simple and inexpensive method of preoperative computer imaging for rhinoplasty.

    PubMed

    Ewart, Christopher J; Leonard, Christopher J; Harper, J Garrett; Yu, Jack

    2006-01-01

    GOALS/PURPOSE: Despite concerns of legal liability, preoperative computer imaging has become a popular tool for the plastic surgeon. The ability to project possible surgical outcomes can facilitate communication between the patient and surgeon. It can be an effective tool in the education and training of residents. Unfortunately, these imaging programs are expensive and have a steep learning curve. The purpose of this paper is to present a relatively inexpensive method of preoperative computer imaging with a reasonable learning curve. The price of currently available imaging programs was acquired through an online search, and inquiries were made to the software distributors. Their prices were compared to Adobe PhotoShop, which has special filters called "liquify" and "photocopy." It was used in the preoperative computer planning of 2 patients who presented for rhinoplasty at our institution. Projected images were created based on harmonious discussions between the patient and physician. Importantly, these images were presented to the patient as potential results, with no guarantees as to actual outcomes. Adobe PhotoShop can be purchased for 900-5800 dollars less than the leading computer imaging software for cosmetic rhinoplasty. Effective projected images were created using the "liquify" and "photocopy" filters in PhotoShop. Both patients had surgical planning and operations based on these images. They were satisfied with the results. Preoperative computer imaging can be a very effective tool for the plastic surgeon by providing improved physician-patient communication, increased patient confidence, and enhanced surgical planning. Adobe PhotoShop is a relatively inexpensive program that can provide these benefits using only 1 or 2 features.

  20. Cloud computing in medical imaging.

    PubMed

    Kagadis, George C; Kloukinas, Christos; Moore, Kevin; Philbin, Jim; Papadimitroulas, Panagiotis; Alexakos, Christos; Nagy, Paul G; Visvikis, Dimitris; Hendee, William R

    2013-07-01

    Over the past century technology has played a decisive role in defining, driving, and reinventing procedures, devices, and pharmaceuticals in healthcare. Cloud computing has been introduced only recently but is already one of the major topics of discussion in research and clinical settings. The provision of extensive, easily accessible, and reconfigurable resources such as virtual systems, platforms, and applications with low service cost has caught the attention of many researchers and clinicians. Healthcare researchers are moving their efforts to the cloud, because they need adequate resources to process, store, exchange, and use large quantities of medical data. This Vision 20/20 paper addresses major questions related to the applicability of advanced cloud computing in medical imaging. The paper also considers security and ethical issues that accompany cloud computing.

  1. Sharp-Focus Composite Microscope Imaging by Computer

    NASA Technical Reports Server (NTRS)

    Wall, R. J.

    1983-01-01

    Enhanced depth of focus aids medical analysis. Computer image-processing system synthesizes sharply focused composite picture from series of photomicrographs of same object taken at different depths. Computer rejects blurred parts of each photomicrograph. Remaining in-focus portions form focused composite. System used to study alveolar lung tissue and has applications in medicine and physical sciences.
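
    A small focus-stacking sketch in the same spirit, assuming a NumPy/SciPy environment: each composite pixel is taken from the slice whose local Laplacian response (a common sharpness proxy, not necessarily the criterion used by this system) is largest.

        import numpy as np
        from scipy.ndimage import laplace, uniform_filter

        def sharp_focus_composite(stack):
            # stack: (n_slices, H, W) photomicrographs of the same field at different depths.
            sharpness = np.stack([uniform_filter(laplace(s.astype(float)) ** 2, size=9)
                                  for s in stack])     # locally averaged squared Laplacian
            best = np.argmax(sharpness, axis=0)        # per-pixel index of the sharpest slice
            rows, cols = np.indices(best.shape)
            return stack[best, rows, cols]             # composite built from in-focus pixels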

  2. Efficient image compression algorithm for computer-animated images

    NASA Astrophysics Data System (ADS)

    Yfantis, Evangelos A.; Au, Matthew Y.; Miel, G.

    1992-10-01

    An image compression algorithm is described. The algorithm is an extension of the run-length image compression algorithm and its implementation is relatively easy. This algorithm was implemented and compared with other existing popular compression algorithms and with the Lempel-Ziv (LZ) coding. The Lempel-Ziv algorithm is available as a utility in the UNIX operating system and is also referred to as the UNIX uncompress. Sometimes our algorithm is best in terms of saving memory space, and sometimes one of the competing algorithms is best. The algorithm is lossless, and the intent is for the algorithm to be used in computer graphics animated images. Comparisons made with the LZ algorithm indicate that the decompression time using our algorithm is faster than that using the LZ algorithm. Once the data are in memory, a relatively simple and fast transformation is applied to uncompress the file.
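
    For reference, a plain run-length coder of the kind the algorithm above extends; this sketch is the textbook baseline, not the authors' extension:

        def rle_encode(row):
            # Run-length encode a 1-D sequence of pixel values into (value, count) pairs.
            runs = []
            prev, count = row[0], 1
            for v in row[1:]:
                if v == prev:
                    count += 1
                else:
                    runs.append((prev, count))
                    prev, count = v, 1
            runs.append((prev, count))
            return runs

        def rle_decode(runs):
            # Expand (value, count) pairs back into the original sequence.
            out = []
            for value, count in runs:
                out.extend([value] * count)
            return out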

  3. Computer model for harmonic ultrasound imaging.

    PubMed

    Li, Y; Zagzebski, J A

    2000-01-01

    Harmonic ultrasound imaging has received great attention from ultrasound scanner manufacturers and researchers. In this paper, we present a computer model that can generate realistic harmonic images. In this model, the incident ultrasound is modeled after the "KZK" equation, and the echo signal is modeled using linear propagation theory because the echo signal is much weaker than the incident pulse. Both time domain and frequency domain numerical solutions to the "KZK" equation were studied. Realistic harmonic images of spherical lesion phantoms were generated for scans by a circular transducer. This model can be a very useful tool for studying the harmonic buildup and dissipation processes in a nonlinear medium, and it can be used to investigate a wide variety of topics related to B-mode harmonic imaging.

  5. Security screening via computational imaging using frequency-diverse metasurface apertures

    NASA Astrophysics Data System (ADS)

    Smith, David R.; Reynolds, Matthew S.; Gollub, Jonah N.; Marks, Daniel L.; Imani, Mohammadreza F.; Yurduseven, Okan; Arnitz, Daniel; Pedross-Engel, Andreas; Sleasman, Timothy; Trofatter, Parker; Boyarsky, Michael; Rose, Alec; Odabasi, Hayrettin; Lipworth, Guy

    2017-05-01

    Computational imaging is a proven strategy for obtaining high-quality images with fast acquisition rates and simpler hardware. Metasurfaces provide exquisite control over electromagnetic fields, enabling the radiated field to be molded into unique patterns. The fusion of these two concepts can bring about revolutionary advances in the design of imaging systems for security screening. In the context of computational imaging, each field pattern serves as a single measurement of a scene; imaging a scene can then be interpreted as estimating the reflectivity distribution of a target from a set of measurements. As with any computational imaging system, the key challenge is to arrive at a minimal set of measurements from which a diffraction-limited image can be resolved. Here, we show that the information content of a frequency-diverse metasurface aperture can be maximized by design, and used to construct a complete millimeter-wave imaging system spanning a 2 m by 2 m area, consisting of 96 metasurfaces, capable of producing diffraction-limited images of human-scale targets. The metasurface-based frequency-diverse system presented in this work represents an inexpensive, but tremendously flexible alternative to traditional hardware paradigms, offering the possibility of low-cost, real-time, and ubiquitous screening platforms.
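
    In such a system each radiated field pattern contributes one row of a linear measurement model g = H f, and imaging amounts to estimating the reflectivity f from the measurement vector g. A toy regularized least-squares reconstruction, with purely illustrative sizes and a random stand-in for the measurement matrix, might look like this:

        import numpy as np

        rng = np.random.default_rng(0)
        n_pixels, n_measurements = 64, 96                      # one measurement per field pattern
        f_true = rng.random(n_pixels)                          # toy scene reflectivity
        H = rng.standard_normal((n_measurements, n_pixels))    # stand-in measurement matrix
        g = H @ f_true + 0.01 * rng.standard_normal(n_measurements)

        lam = 1e-2                                             # Tikhonov regularization weight
        f_est = np.linalg.solve(H.T @ H + lam * np.eye(n_pixels), H.T @ g)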

  6. MDA-image: an environment of networked desktop computers for teleradiology/pathology.

    PubMed

    Moffitt, M E; Richli, W R; Carrasco, C H; Wallace, S; Zimmerman, S O; Ayala, A G; Benjamin, R S; Chee, S; Wood, P; Daniels, P

    1991-04-01

    MDA-Image, a project of The University of Texas M. D. Anderson Cancer Center, is an environment of networked desktop computers for teleradiology/pathology. Radiographic film is digitized with a film scanner and histopathologic slides are digitized using a red, green, and blue (RGB) video camera connected to a microscope. Digitized images are stored on a data server connected to the institution's computer communication network (Ethernet) and can be displayed from authorized desktop computers connected to Ethernet. Images are digitized for cases presented at the Bone Tumor Management Conference, a multidisciplinary conference in which treatment options are discussed among clinicians, surgeons, radiologists, pathologists, radiotherapists, and medical oncologists. These radiographic and histologic images are shown on a large screen computer monitor during the conference. They are available for later review for follow-up or representation.

  7. Computer assisted analysis of medical x-ray images

    NASA Astrophysics Data System (ADS)

    Bengtsson, Ewert

    1996-01-01

    X-rays were originally used to expose film. The early computers did not have enough capacity to handle images with useful resolution. The rapid development of computer technology over the last few decades has, however, led to the introduction of computers into radiology. In this overview paper, the various possible roles of computers in radiology are examined. The state of the art is briefly presented, and some predictions about the future are made.

  8. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    PubMed

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The system was initially deployed natively (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX that isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software

  9. Thermal Infrared Imaging-Based Computational Psychophysiology for Psychometrics

    PubMed Central

    Cardone, Daniela; Pinti, Paola; Merla, Arcangelo

    2015-01-01

    Thermal infrared imaging has been proposed as a potential system for the computational assessment of human autonomic nervous activity and psychophysiological states in a contactless and noninvasive way. Through bioheat modeling of facial thermal imagery, several vital signs can be extracted, including localized blood perfusion, cardiac pulse, breath rate, and sudomotor response, since all these parameters impact the cutaneous temperature. The obtained physiological information could then be used to draw inferences about a variety of psychophysiological or affective states, as proved by the increasing number of psychophysiological studies using thermal infrared imaging. This paper presents therefore a review of the principal achievements of thermal infrared imaging in computational physiology with regard to its capability of monitoring psychophysiological activity. PMID:26339284

  10. Consequences of increased use of computed tomography imaging for trauma patients in rural referring hospitals prior to transfer to a regional trauma centre.

    PubMed

    Berkseth, Timothy J; Mathiason, Michelle A; Jafari, Mary Ellen; Cogbill, Thomas H; Patel, Nirav Y

    2014-05-01

    Computed tomography (CT) plays an integral role in the evaluation and management of trauma patients. As the number of referring hospital (RH)-based CT scanners increased, so has their utilization in trauma patients before transfer. We hypothesized that this has resulted in increased time at RH, image duplication, and radiation dose. A retrospective chart review was completed for trauma activations transferred to an ACS-verified Level II Trauma Centre (TC) during two time periods: 2002-2004 (Group 1) and 2006-2008 (Group 2). 2005 data were excluded as this marked the transition period for acquisition of hospital-based CT scanners in RH. Statistical analysis included t test and χ² analysis. P<0.05 was considered significant. 1017 patients met study criteria: 503 in group 1 and 514 in group 2. Mean age was greater in group 2 compared to group 1 (40.3 versus 37.4, respectively; P=0.028). There were 115 patients in group 1 versus 202 patients in group 2 who underwent CT imaging at RH (P<0.001). Conversely, 326 patients in group 1 had CT scans performed at the TC versus 258 patients in group 2 (P<0.001). Mean time at the RH was similar between the groups (117.1 and 112.3min for group 1 and 2, respectively; P=0.561). However, when comparing patients with and without a pretransfer CT at the RH, the median time at RH was 140 versus 67min, respectively (P<0.001). The number of patients with duplicate CT imaging (n=34 in group 1 and n=42 in group 2) was not significantly different between the two time periods (P=0.392). Head CTs comprised the majority of duplicate CT imaging in both time periods (82.4% in group 1 and 90.5% in group 2). Mean total estimated radiation dose per patient was not significantly different between the two groups (group 1=8.4mSv versus group 2=7.8mSv; P=0.192). A significant increase in CT imaging at the RH prior to transfer to the TC was observed over the study periods. No associated increases in mean time at the RH, image duplication at TC, total
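
    The group comparison of pretransfer CT counts reported above can be reproduced in outline with a chi-square test on the 2x2 table of patients with and without imaging at the referring hospital; this is a sketch of the kind of test named in the abstract, not the authors' analysis code:

        from scipy.stats import chi2_contingency

        # Patients with vs without a pretransfer CT at the referring hospital (counts from the abstract).
        table = [[115, 503 - 115],   # 2002-2004
                 [202, 514 - 202]]   # 2006-2008
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.1f}, p = {p:.2g}")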

  11. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers (CTISs) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  12. Computer-aided diagnosis and artificial intelligence in clinical imaging.

    PubMed

    Shiraishi, Junji; Li, Qiang; Appelbaum, Daniel; Doi, Kunio

    2011-11-01

    Computer-aided diagnosis (CAD) is rapidly entering the radiology mainstream. It has already become a part of the routine clinical work for the detection of breast cancer with mammograms. The computer output is used as a "second opinion" in assisting radiologists' image interpretations. The computer algorithm generally consists of several steps that may include image processing, image feature analysis, and data classification via the use of tools such as artificial neural networks (ANN). In this article, we will explore these and other current processes that have come to be referred to as "artificial intelligence." One element of CAD, temporal subtraction, has been applied for enhancing interval changes and for suppressing unchanged structures (eg, normal structures) between 2 successive radiologic images. To reduce misregistration artifacts on the temporal subtraction images, a nonlinear image warping technique for matching the previous image to the current one has been developed. Development of the temporal subtraction method originated with chest radiographs, with the method subsequently being applied to chest computed tomography (CT) and nuclear medicine bone scans. The usefulness of the temporal subtraction method for bone scans was demonstrated by an observer study in which reading times and diagnostic accuracy improved significantly. An additional prospective clinical study verified that the temporal subtraction image could be used as a "second opinion" by radiologists with negligible detrimental effects. ANN was first used in 1990 for computerized differential diagnosis of interstitial lung diseases in CAD. Since then, ANN has been widely used in CAD schemes for the detection and diagnosis of various diseases in different imaging modalities, including the differential diagnosis of lung nodules and interstitial lung diseases in chest radiography, CT, and positron emission tomography/CT. It is likely that CAD will be integrated into picture archiving and

  13. RATIO_TOOL - SOFTWARE FOR COMPUTING IMAGE RATIOS

    NASA Technical Reports Server (NTRS)

    Yates, G. L.

    1994-01-01

    Geological studies analyze spectral data in order to gain information on surface materials. RATIO_TOOL is an interactive program for viewing and analyzing large multispectral image data sets that have been created by an imaging spectrometer. While the standard approach to classification of multispectral data is to match the spectrum for each input pixel against a library of known mineral spectra, RATIO_TOOL uses ratios of spectral bands in order to spot significant areas of interest within a multispectral image. Each image band can be viewed iteratively, or a selected image band of the data set can be requested and displayed. When the image ratios are computed, the result is displayed as a gray scale image. At this point a histogram option helps in viewing the distribution of values. A thresholding option can then be used to segment the ratio image result into two to four classes. The segmented image is then color coded to indicate threshold classes and displayed alongside the gray scale image. RATIO_TOOL is written in C language for Sun series computers running SunOS 4.0 and later. It requires the XView toolkit and the OpenWindows window manager (version 2.0 or 3.0). The XView toolkit is distributed with Open Windows. A color monitor is also required. The standard distribution medium for RATIO_TOOL is a .25 inch streaming magnetic tape cartridge in UNIX tar format. An electronic copy of the documentation is included on the program media. RATIO_TOOL was developed in 1992 and is a copyrighted work with all copyright vested in NASA. Sun, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
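
    A minimal NumPy sketch of the band-ratio-and-threshold idea; the band indices and thresholds are placeholders, and this is not the RATIO_TOOL source:

        import numpy as np

        def band_ratio_classes(cube, band_a, band_b, thresholds):
            # cube: (bands, H, W) multispectral image; returns the ratio image and a class label per pixel.
            ratio = cube[band_a] / np.maximum(cube[band_b], 1e-6)    # guard against divide-by-zero
            classes = np.zeros(ratio.shape, dtype=np.uint8)
            for i, t in enumerate(sorted(thresholds), start=1):
                classes[ratio >= t] = i                              # segment into two to four classes
            return ratio, classes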

  14. Computer-aided diagnosis for classifying benign versus malignant thyroid nodules based on ultrasound images: A comparison with radiologist-based assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Yongjun; Paul, Anjan Kumar; Kim, Namkug, E-mail: namkugkim@gmail.com

    Purpose: To develop a semiautomated computer-aided diagnosis (CAD) system for thyroid cancer using two-dimensional ultrasound images that can be used to yield a second opinion in the clinic to differentiate malignant and benign lesions. Methods: A total of 118 ultrasound images that included axial and longitudinal images from patients with biopsy-confirmed malignant (n = 30) and benign (n = 29) nodules were collected. Thyroid CAD software was developed to extract quantitative features from these images based on thyroid nodule segmentation in which adaptive diffusion flow for active contours was used. Various features, including histogram, intensity differences, elliptical fit, gray-level co-occurrence matrixes, and gray-level run-length matrixes, were evaluated for each region imaged. Based on these imaging features, a support vector machine (SVM) classifier was used to differentiate benign and malignant nodules. Leave-one-out cross-validation with sequential forward feature selection was performed to evaluate the overall accuracy of this method. Additionally, analyses with contingency tables and receiver operating characteristic (ROC) curves were performed to compare the performance of CAD with visual inspection by expert radiologists based on established gold standards. Results: Most univariate features for this proposed CAD system attained accuracies that ranged from 78.0% to 83.1%. When optimal SVM parameters that were established using a grid search method with features that radiologists use for visual inspection were employed, the authors could attain rates of accuracy that ranged from 72.9% to 84.7%. Using leave-one-out cross-validation results in a multivariate analysis of various features, the highest accuracy achieved using the proposed CAD system was 98.3%, whereas visual inspection by radiologists reached 94.9% accuracy. To obtain the highest accuracies, “axial ratio” and “max probability” in axial images were most frequently included in
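
    A sketch of the classification stage using scikit-learn, assuming the texture and shape features have already been extracted into arrays; the file names are hypothetical and the SVM settings illustrative:

        import numpy as np
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # X: (n_nodules, n_features) feature matrix, y: 0 = benign, 1 = malignant (hypothetical files).
        X = np.load("thyroid_features.npy")
        y = np.load("thyroid_labels.npy")

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
        acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
        print(f"leave-one-out accuracy: {acc:.3f}")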

  15. A Computational Experiment of the Endo versus Exo Preference in a Diels-Alder Reaction

    ERIC Educational Resources Information Center

    Rowley, Christopher N.; Woo, Tom K.

    2009-01-01

    We have developed and tested a computational laboratory that investigates an endo versus exo Diels-Alder cycloaddition. This laboratory employed density functional theory (DFT) calculations to study the cycloaddition of N-phenylmaleimide to furan. The endo and exo stereoisomers of the product were distinguished by building the two isomers in a…

  16. Medical imaging and computers in the diagnosis of breast cancer

    NASA Astrophysics Data System (ADS)

    Giger, Maryellen L.

    2014-09-01

    Computer-aided diagnosis (CAD) and quantitative image analysis (QIA) methods (i.e., computerized methods of analyzing digital breast images: mammograms, ultrasound, and magnetic resonance images) can yield novel image-based tumor and parenchyma characteristics (i.e., signatures that may ultimately contribute to the design of patient-specific breast cancer management plans). The role of QIA/CAD has been expanding beyond screening programs towards applications in risk assessment, diagnosis, prognosis, and response to therapy as well as in data mining to discover relationships of image-based lesion characteristics with genomics and other phenotypes; thus, as they apply to disease states. These various computer-based applications are demonstrated through research examples from the Giger Lab.

  17. Photographic copy of computer enhanced color photographic image. Photographer and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photographic copy of computer enhanced color photographic image. Photographer and computer draftsman unknown. Original photographic image located in the office of Modjeski and Masters, Consulting Engineers at 1055 St. Charles Avenue, New Orleans, LA 70130. COMPUTER ENHANCED COLOR PHOTOGRAPH SHOWING THE PROPOSED HUEY P. LONG BRIDGE WIDENING LOOKING FROM THE WEST BANK TOWARD THE EAST BANK. - Huey P. Long Bridge, Spanning Mississippi River approximately midway between nine & twelve mile points upstream from & west of New Orleans, Jefferson, Jefferson Parish, LA

  18. Comparison of the signal-to-noise characteristics of quantum versus thermal ghost imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Sullivan, Malcolm N.; Chan, Kam Wai Clifford; Boyd, Robert W.

    2010-11-15

    We present a theoretical comparison of the signal-to-noise characteristics of quantum versus thermal ghost imaging. We first calculate the signal-to-noise ratio of each process in terms of its controllable experimental conditions. We show that a key distinction is that a thermal ghost image always resides on top of a large background; the fluctuations in this background constitute an intrinsic noise source for thermal ghost imaging. In contrast, there is a negligible intrinsic background to a quantum ghost image. However, for practical reasons involving achievable illumination levels, acquisition times for thermal ghost images are often much shorter than those for quantum ghost images. We provide quantitative predictions for the conditions under which each process provides superior performance. Our conclusion is that each process can provide useful functionality, although under complementary conditions.
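
    The background issue can be seen in a short thermal ghost imaging simulation: the reconstruction is the covariance between the bucket signal and the speckle patterns, and the subtracted mean term is exactly the large background discussed above (toy object and pseudo-thermal statistics assumed):

        import numpy as np

        rng = np.random.default_rng(1)
        H, W, n_frames = 32, 32, 20000
        obj = np.zeros((H, W))
        obj[10:22, 12:20] = 1.0                                # toy transmissive object

        corr = np.zeros((H, W)); mean_I = np.zeros((H, W)); mean_B = 0.0
        for _ in range(n_frames):
            speckle = rng.exponential(1.0, size=(H, W))        # pseudo-thermal intensity pattern
            bucket = np.sum(speckle * obj)                     # single-pixel (bucket) signal
            corr += bucket * speckle
            mean_I += speckle
            mean_B += bucket
        ghost = corr / n_frames - (mean_B / n_frames) * (mean_I / n_frames)   # covariance image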

  19. Image based Monte Carlo Modeling for Computational Phantom

    NASA Astrophysics Data System (ADS)

    Cheng, Mengyun; Wang, Wen; Zhao, Kai; Fan, Yanchang; Long, Pengcheng; Wu, Yican

    2014-06-01

    The evaluation of the effects of ionizing radiation and the risk of radiation exposure on the human body has become one of the most important issues in the radiation protection and radiotherapy fields, as it helps avoid unnecessary radiation and decrease harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct a more realistic computational phantom. However, manual description and verification of models for Monte Carlo (MC) simulation are very tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling by the FDS Team (Advanced Nuclear Energy Research Team, http://www.fds.org.cn). The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several medical image and sectioned image sets, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female called Rad-HUMAN was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in a Treatment Planning System (TPS), as well as radiation exposure of the human body in radiation protection.

  20. Reduction of noise and image artifacts in computed tomography by nonlinear filtration of projection images

    NASA Astrophysics Data System (ADS)

    Demirkaya, Omer

    2001-07-01

    This study investigates the efficacy of filtering two-dimensional (2D) projection images of Computed Tomography (CT) by nonlinear diffusion filtration in removing the statistical noise prior to reconstruction. The projection images of the Shepp-Logan head phantom were degraded by Gaussian noise. The variance of the Gaussian distribution was adaptively changed depending on the intensity at a given pixel in the projection image. The corrupted projection images were then filtered using the nonlinear anisotropic diffusion filter. The filtered projections as well as the original noisy projections were reconstructed using filtered backprojection (FBP) with a Ram-Lak filter and/or Hanning window. The ensemble variance was computed for each pixel on a slice. The nonlinear filtering of projection images improved the SNR substantially, on the order of fourfold, in these synthetic images. The comparison of intensity profiles across a cross-sectional slice indicated that the filtering did not result in any significant loss of image resolution.
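
    A compact Perona-Malik-type nonlinear diffusion filter of the general family used here, applied to a 2D projection image; the parameters are illustrative and boundaries are handled periodically via np.roll for brevity:

        import numpy as np

        def nonlinear_diffusion(img, n_iter=50, kappa=0.02, dt=0.2):
            # kappa is an edge threshold; 0.02 assumes intensities scaled to [0, 1].
            u = img.astype(float).copy()
            conduct = lambda d: np.exp(-(d / kappa) ** 2)      # ~1 in flat regions, small across edges
            for _ in range(n_iter):
                dn = np.roll(u, -1, axis=0) - u
                ds = np.roll(u, 1, axis=0) - u
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u, 1, axis=1) - u
                u += dt * (conduct(dn) * dn + conduct(ds) * ds + conduct(de) * de + conduct(dw) * dw)
            return u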

  1. Computer Generated Image: Relative Training Effectiveness of Day Versus Night Visual Scenes. Final Report.

    ERIC Educational Resources Information Center

    Martin, Elizabeth L.; Cataneo, Daniel F.

    A study was conducted by the Air Force to determine the extent to which takeoff/landing skills learned in a simulator equipped with a night visual system would transfer to daytime performance in the aircraft. A transfer-of-training design was used to assess the differential effectiveness of simulator training with a day versus a night…

  2. Simple computer method provides contours for radiological images

    NASA Technical Reports Server (NTRS)

    Newell, J. D.; Keller, R. A.; Baily, N. A.

    1975-01-01

    Computer is provided with information concerning boundaries in total image. Gradient of each point in digitized image is calculated with aid of threshold technique; then set of algorithms is invoked to reduce number of gradient elements and retain only major ones for definition of contour.
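
    A minimal version of the gradient-thresholding step, assuming NumPy; the original ran on 1970s hardware, so this is only a modern illustration of the idea:

        import numpy as np

        def contour_points(img, threshold):
            # Keep only the strongest gradient elements as contour candidates.
            gy, gx = np.gradient(img.astype(float))
            magnitude = np.hypot(gx, gy)
            return np.argwhere(magnitude > threshold)          # (row, col) of major gradient elements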

  3. Quantitative Pulmonary Imaging Using Computed Tomography and Magnetic Resonance Imaging

    PubMed Central

    Washko, George R.; Parraga, Grace; Coxson, Harvey O.

    2011-01-01

    Measurements of lung function, including spirometry and body plethysmography, are easy to perform and are the current clinical standard for assessing disease severity. However, these lung functional techniques do not adequately explain the observed variability in clinical manifestations of disease and offer little insight into the relationship of lung structure and function. Lung imaging and the image-based assessment of lung disease have matured to the extent that it is common for clinical, epidemiologic, and genetic investigation to have a component dedicated to image analysis. There are several exciting imaging modalities currently being used for the non-invasive study of lung anatomy and function. In this review we will focus on two of them, x-ray computed tomography and magnetic resonance imaging. Following a brief introduction of each method we detail some of the most recent work being done to characterize smoking-related lung disease and the clinical applications of such knowledge. PMID:22142490

  4. A novel computer-aided detection system for pulmonary nodule identification in CT images

    NASA Astrophysics Data System (ADS)

    Han, Hao; Li, Lihong; Wang, Huafeng; Zhang, Hao; Moore, William; Liang, Zhengrong

    2014-03-01

    Computer-aided detection (CADe) of pulmonary nodules from computed tomography (CT) scans is critical for assisting radiologists to identify lung lesions at an early stage. In this paper, we propose a novel approach for CADe of lung nodules using a two-stage vector quantization (VQ) scheme. The first-stage VQ aims to extract the lung from the chest volume, while the second-stage VQ is designed to extract initial nodule candidates (INCs) within the lung volume. Rule-based expert filtering is then employed to prune obvious false positives (FPs) from the INCs, and the commonly used support vector machine (SVM) classifier is adopted to further reduce the FPs. The proposed system was validated on 100 CT scans randomly selected from the 262 scans that have at least one juxta-pleural nodule annotation in the publicly available database - Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). The two-stage VQ only missed 2 of the 207 nodules at agreement level 1, and INC detection took about 30 seconds per scan on average. Expert filtering reduced FPs by more than 18 times, while maintaining a sensitivity of 93.24%. As it is trivial to distinguish INCs attached to the pleural wall from those that are not, we investigated the feasibility of training different SVM classifiers to further reduce FPs from these two kinds of INCs. Experimental results indicated that SVM classification over the entire set of INCs was preferable, with the optimal operating point of our CADe system achieving a sensitivity of 89.4% at a specificity of 86.8%.

  5. Comparison of onboard low-field magnetic resonance imaging versus onboard computed tomography for anatomy visualization in radiotherapy.

    PubMed

    Noel, Camille E; Parikh, Parag J; Spencer, Christopher R; Green, Olga L; Hu, Yanle; Mutic, Sasa; Olsen, Jeffrey R

    2015-01-01

    Onboard magnetic resonance imaging (OB-MRI) for daily localization and adaptive radiotherapy has been under development by several groups. However, no clinical studies have evaluated whether OB-MRI improves visualization of the target and organs at risk (OARs) compared to standard onboard computed tomography (OB-CT). This study compared visualization of patient anatomy on images acquired on the MRI-(60)Co ViewRay system to those acquired with OB-CT. Fourteen patients enrolled on a protocol approved by the Institutional Review Board (IRB) and undergoing image-guided radiotherapy for cancer in the thorax (n = 2), pelvis (n = 6), abdomen (n = 3) or head and neck (n = 3) were imaged with OB-MRI and OB-CT. For each of the 14 patients, the OB-MRI and OB-CT datasets were displayed side-by-side and independently reviewed by three radiation oncologists. Each physician was asked to evaluate which dataset offered better visualization of the target and OARs. A quantitative contouring study was performed on two abdominal patients to assess if OB-MRI could offer improved inter-observer segmentation agreement for adaptive planning. In total 221 OARs and 10 targets were compared for visualization on OB-MRI and OB-CT by each of the three physicians. The majority of physicians (two or more) evaluated visualization on MRI as better for 71% of structures, worse for 10% of structures, and equivalent for 14% of structures. 5% of structures were not visible on either. Physicians agreed unanimously for 74% and in majority for > 99% of structures. Targets were better visualized on MRI in 4/10 cases, and never on OB-CT. Low-field MR provides better anatomic visualization of many radiotherapy targets and most OARs as compared to OB-CT. Further studies with OB-MRI should be pursued.

  6. Imaging and computational considerations for image computed permeability: Operating envelope of Digital Rock Physics

    NASA Astrophysics Data System (ADS)

    Saxena, Nishank; Hows, Amie; Hofmann, Ronny; Alpak, Faruk O.; Freeman, Justin; Hunter, Sander; Appel, Matthias

    2018-06-01

    This study defines the optimal operating envelope of the Digital Rock technology from the perspective of imaging and numerical simulations of transport properties. Imaging larger volumes of rocks for Digital Rock Physics (DRP) analysis improves the chances of achieving a Representative Elementary Volume (REV) at which flow-based simulations (1) do not vary with change in rock volume, and (2) is insensitive to the choice of boundary conditions. However, this often comes at the expense of image resolution. This trade-off exists due to the finiteness of current state-of-the-art imaging detectors. Imaging and analyzing digital rocks that sample the REV and still sufficiently resolve pore throats is critical to ensure simulation quality and robustness of rock property trends for further analysis. We find that at least 10 voxels are needed to sufficiently resolve pore throats for single phase fluid flow simulations. If this condition is not met, additional analyses and corrections may allow for meaningful comparisons between simulation results and laboratory measurements of permeability, but some cases may fall outside the current technical feasibility of DRP. On the other hand, we find that the ratio of field of view and effective grain size provides a reliable measure of the REV for siliciclastic rocks. If this ratio is greater than 5, the coefficient of variation for single-phase permeability simulations drops below 15%. These imaging considerations are crucial when comparing digitally computed rock flow properties with those measured in the laboratory. We find that the current imaging methods are sufficient to achieve both REV (with respect to numerical boundary conditions) and required image resolution to perform digital core analysis for coarse to fine-grained sandstones.
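
    The two rules of thumb stated above (at least 10 voxels across a pore throat, and a field-of-view to grain-size ratio above 5) can be wrapped in a small feasibility check; the function and argument names are hypothetical:

        def drp_feasible(voxel_size_um, throat_diameter_um, fov_um, grain_size_um):
            # Sketch of the stated rules of thumb, not a substitute for full REV analysis.
            voxels_per_throat = throat_diameter_um / voxel_size_um
            fov_to_grain = fov_um / grain_size_um
            return {
                "voxels_per_throat": voxels_per_throat,
                "resolution_ok": voxels_per_throat >= 10,      # resolve pore throats
                "fov_to_grain": fov_to_grain,
                "rev_ok": fov_to_grain > 5,                    # representative elementary volume
            }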

  7. Rating Nasolabial Aesthetics in Unilateral Cleft Lip and Palate Patients: Cropped Versus Full-Face Images.

    PubMed

    Schwirtz, Roderic M F; Mulder, Frans J; Mosmuller, David G M; Tan, Robin A; Maal, Thomas J; Prahl, Charlotte; de Vet, Henrica C W; Don Griot, J Peter W

    2018-05-01

    To determine if cropping facial images affects nasolabial aesthetics assessments in unilateral cleft lip patients and to evaluate the effect of facial attractiveness on nasolabial evaluation. Two cleft surgeons and one cleft orthodontist assessed standardized frontal photographs 4 times; nasolabial aesthetics were rated on cropped and full-face images using the Cleft Aesthetic Rating Scale (CARS), and total facial attractiveness was rated on full-face images with and without the nasolabial area blurred using a 5-point Likert scale. Cleft Palate Craniofacial Unit of a University Medical Center. Inclusion criteria: nonsyndromic unilateral cleft lip and an available frontal view photograph around 10 years of age. Exclusion criteria: a history of facial trauma and an incomplete cleft. Eighty-one photographs were available for assessment. Differences in mean CARS scores between cropped versus full-face photographs and attractive versus unattractive rated patients were evaluated by paired t test. Nasolabial aesthetics are scored more negatively on full-face photographs compared to cropped photographs, regardless of facial attractiveness (mean CARS score, nose: cropped = 2.8, full-face = 3.0, P < .001; lip: cropped = 2.4, full-face = 2.7, P < .001; nose and lip: cropped = 2.6, full-face = 2.8, P < .001). Aesthetic outcomes of the nasolabial area are assessed significantly more positively when using cropped images compared to full-face images. For this reason, cropping images, revealing the nasolabial area only, is recommended for aesthetic assessments.

  8. Computational Intelligence for Medical Imaging Simulations.

    PubMed

    Chang, Victor

    2017-11-25

    This paper describes how to simulate medical imaging by computational intelligence to explore areas that cannot be easily achieved by traditional means, including gene and protein simulations related to cancer development and immunity. The paper presents simulations and virtual inspections of BIRC3, BIRC6, CCL4, KLKB1 and CYP2A6 with their outputs and explanations, as well as brain segment intensity due to dancing. Our proposed MapReduce framework with the fusion algorithm can simulate medical imaging. The concept is very similar to digital surface theories, which simulate how biological units come together to form larger units until the entire biological subject is formed. The M-Fusion and M-Update functions of the fusion algorithm achieve good performance, processing and visualizing up to 40 GB of data within 600 s. We conclude that computational intelligence can provide effective and efficient healthcare research through simulation and visualization.

  9. Magnetic resonance imaging detection of early experimental periostitis. Comparison of magnetic resonance imaging, computed tomography, and plain radiography with histopathologic correlation.

    PubMed

    Spaeth, H J; Chandnani, V P; Beltran, J; Lucas, J G; Ortiz, I; King, M A; Bennett, W F; Bova, J G; Mueller, C F; Shaffer, P B

    1991-04-01

    This study characterizes the appearance of periosteal reaction by magnetic resonance imaging (MRI), and evaluates the efficacy of MRI versus computed tomography (CT) and plain film radiography (PF) in detecting early, experimentally induced periostitis. Acute Staphylococcus aureus osteomyelitis was induced in 30 legs of 20 New Zealand white rabbits. The rabbits were then imaged with MR, contrast-unenhanced CT, and PF 4 days after infection. Histologically, periosteal elevation was present in 27 cases. Periosteal ossification was seen in 23 cases, and cellular reaction without ossification in 4 cases. Periosteal reaction was demonstrated by PF in 21 (78%) and by CT in 20 (74%) cases. Evidence of periostitis was seen by MR in all 27 (100%) cases. MR resulted in two false-positive diagnoses. Multiple concentric, alternating high and low signal arcs demonstrated by MR in 19 (70%) cases represented periosteal ossification surrounded by fibrous or granulation tissue. These findings demonstrate the ability of MR to detect periostitis despite the absence of periosteal ossification. MR was more sensitive than CT (P less than .05) or PF (P less than .05) in the detection of experimentally induced periostitis.

  10. Computing a Non-trivial Lower Bound on the Joint Entropy between Two Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S.

    In this report, a non-trivial lower bound on the joint entropy of two non-identical images is developed, which is greater than the individual entropies of the images. The lower bound is the least joint entropy possible among all pairs of images that have the same histograms as those of the given images. New algorithms are presented to compute the joint entropy lower bound with a computation time proportional to S log S where S is the number of histogram bins of the images. This is faster than the traditional methods of computing the exact joint entropy with a computation time that is quadratic in S.
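
    For comparison, the exact joint entropy of two images computed from their 2D histogram is shown below; this is the quantity the report bounds, not the S log S lower-bound algorithm itself:

        import numpy as np

        def joint_entropy(img_a, img_b, bins=64):
            # Joint entropy (in bits) of two images from their joint 2-D histogram.
            hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            return -np.sum(p * np.log2(p))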

  11. Implementation of Multispectral Image Classification on a Remote Adaptive Computer

    NASA Technical Reports Server (NTRS)

    Figueiredo, Marco A.; Gloster, Clay S.; Stephens, Mark; Graves, Corey A.; Nakkar, Mouna

    1999-01-01

    As the demand for higher performance computers for the processing of remote sensing science algorithms increases, the need to investigate new computing paradigms is justified. Field Programmable Gate Arrays (FPGAs) enable the implementation of algorithms at the hardware gate level, leading to orders of magnitude performance increase over microprocessor based systems. The automatic classification of spaceborne multispectral images is an example of a computation intensive application that can benefit from implementation on an FPGA-based custom computing machine (adaptive or reconfigurable computer). A probabilistic neural network is used here to classify pixels of a multispectral LANDSAT-2 image. The implementation described utilizes Java client/server application programs to access the adaptive computer from a remote site. Results verify that a remote hardware version of the algorithm (implemented on an adaptive computer) is significantly faster than a local software version of the same algorithm implemented on a typical general-purpose computer.
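
    A software sketch of the probabilistic neural network classifier (Parzen-window class densities over pixel spectra), which is what the adaptive-computer implementation above accelerates; the smoothing parameter and array shapes are illustrative:

        import numpy as np

        def pnn_classify(train_X, train_y, test_X, sigma=0.1):
            # train_X: (n_train, n_bands) pixel spectra, train_y: class labels, test_X: (n_test, n_bands).
            classes = np.unique(train_y)
            scores = []
            for c in classes:
                Xc = train_X[train_y == c]
                d2 = ((test_X[:, None, :] - Xc[None, :, :]) ** 2).sum(axis=-1)
                scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1))   # Parzen class density
            return classes[np.argmax(np.stack(scores, axis=1), axis=1)]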

  12. Computer-aided diagnosis in radiological imaging: current status and future challenges

    NASA Astrophysics Data System (ADS)

    Doi, Kunio

    2009-10-01

    Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. Many different types of CAD schemes are being developed for detection and/or characterization of various lesions in medical imaging, including conventional projection radiography, CT, MRI, and ultrasound imaging. Commercial systems for detection of breast lesions on mammograms have been developed and have received FDA approval for clinical use. CAD may be defined as a diagnosis made by a physician who takes into account the computer output as a "second opinion". The purpose of CAD is to improve the quality and productivity of physicians in their interpretation of radiologic images. The quality of their work can be improved in terms of the accuracy and consistency of their radiologic diagnoses. In addition, the productivity of radiologists is expected to be improved by a reduction in the time required for their image readings. The computer output is derived from quantitative analysis of radiologic images by use of various methods and techniques in computer vision, artificial intelligence, and artificial neural networks (ANNs). The computer output may indicate a number of important parameters, for example, the locations of potential lesions such as lung cancer and breast cancer, the likelihood of malignancy of detected lesions, and the likelihood of various diseases based on differential diagnosis in a given image and clinical parameters. In this review article, the basic concept of CAD is first defined, and the current status of CAD research is then described. In addition, the potential of CAD in the future is discussed and predicted.

  13. A Versatile Image Processor For Digital Diagnostic Imaging And Its Application In Computed Radiography

    NASA Astrophysics Data System (ADS)

    Blume, H.; Alexandru, R.; Applegate, R.; Giordano, T.; Kamiya, K.; Kresina, R.

    1986-06-01

    In a digital diagnostic imaging department, the majority of operations for handling and processing of images can be grouped into a small set of basic operations, such as image data buffering and storage, image processing and analysis, image display, image data transmission and image data compression. These operations occur in almost all nodes of the diagnostic imaging communications network of the department. An image processor architecture was developed in which each of these functions has been mapped into hardware and software modules. The modular approach has advantages in terms of economics, service, expandability and upgradeability. The architectural design is based on the principles of hierarchical functionality, distributed and parallel processing and aims at real time response. Parallel processing and real time response is facilitated in part by a dual bus system: a VME control bus and a high speed image data bus, consisting of 8 independent parallel 16-bit busses, capable of handling a combined rate of up to 144 MBytes/sec. The presented image processor is versatile enough to meet the video rate processing needs of digital subtraction angiography, the large pixel matrix processing requirements of static projection radiography, or the broad range of manipulation and display needs of a multi-modality diagnostic work station. Several hardware modules are described in detail. For illustrating the capabilities of the image processor, processed 2000 x 2000 pixel computed radiographs are shown and estimated computation times for executing the processing operations are presented.

  14. Prior image constrained scatter correction in cone-beam computed tomography image-guided radiation therapy.

    PubMed

    Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong

    2011-02-21

    X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers, and then, reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
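
    The weighting step described above, in which the corrected image is a weighted sum of basis images fitted to the low-scatter prior, reduces to a linear least-squares problem. A sketch, assuming the basis images have already been reconstructed from the projection data raised to different powers:

        import numpy as np

        def fit_basis_weights(basis_images, prior_image):
            # Weights for the weighted sum of basis images that best matches the low-scatter prior.
            A = np.stack([b.ravel() for b in basis_images], axis=1)   # (n_pixels, n_basis)
            w, *_ = np.linalg.lstsq(A, prior_image.ravel(), rcond=None)
            corrected = sum(wk * bk for wk, bk in zip(w, basis_images))
            return w, corrected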

  15. Image communication scheme based on dynamic visual cryptography and computer generated holography

    NASA Astrophysics Data System (ADS)

    Palevicius, Paulius; Ragulskis, Minvydas

    2015-01-01

    Computer generated holograms are often exploited to implement optical encryption schemes. This paper proposes the integration of dynamic visual cryptography (an optical technique based on the interplay of visual cryptography and time-averaging geometric moiré) with Gerchberg-Saxton algorithm. A stochastic moiré grating is used to embed the secret into a single cover image. The secret can be visually decoded by a naked eye if only the amplitude of harmonic oscillations corresponds to an accurately preselected value. The proposed visual image encryption scheme is based on computer generated holography, optical time-averaging moiré and principles of dynamic visual cryptography. Dynamic visual cryptography is used both for the initial encryption of the secret image and for the final decryption. Phase data of the encrypted image are computed by using Gerchberg-Saxton algorithm. The optical image is decrypted using the computationally reconstructed field of amplitudes.
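
    A basic Gerchberg-Saxton iteration of the kind used to compute the phase data of the encrypted image; this uses a single-FFT propagation model and a random initial phase, and does not reproduce the details of the scheme above:

        import numpy as np

        def gerchberg_saxton(target_amplitude, source_amplitude, n_iter=200, seed=0):
            # Iteratively recover a phase mask whose far field matches the target amplitude.
            rng = np.random.default_rng(seed)
            field = source_amplitude * np.exp(1j * 2 * np.pi * rng.random(source_amplitude.shape))
            for _ in range(n_iter):
                far = np.fft.fft2(field)
                far = target_amplitude * np.exp(1j * np.angle(far))       # impose target amplitude
                field = np.fft.ifft2(far)
                field = source_amplitude * np.exp(1j * np.angle(field))   # impose source amplitude
            return np.angle(field)                                        # computed hologram phase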

  16. Optimisation of coronary vascular territorial 3D echocardiographic strain imaging using computed tomography: a feasibility study using image fusion.

    PubMed

    de Knegt, Martina Chantal; Fuchs, A; Weeke, P; Møgelvang, R; Hassager, C; Kofoed, K F

    2016-12-01

    Current echocardiographic assessments of coronary vascular territories use the 17-segment model and are based on general assumptions of coronary vascular distribution. Fusion of 3D echocardiography (3DE) with multidetector computed tomography (MDCT) derived coronary anatomy may provide a more accurate assessment of left ventricular (LV) territorial function. We aimed to test the feasibility of MDCT and 3DE fusion and to compare territorial longitudinal strain (LS) using the 17-segment model and a MDCT-guided vascular model. 28 patients underwent 320-slice MDCT and transthoracic 3DE on the same day followed by invasive coronary angiography. MDCT (Aquilion ONE, ViSION Edition, Toshiba Medical Systems) and 3DE apical full-volume images (Artida, Toshiba Medical Systems) were fused offline using a dedicated workstation (prototype fusion software, Toshiba Medical Systems). 3DE/MDCT image alignment was assessed by 3 readers using a 4-point scale. Territorial LS was assessed using the 17-segment model and the MDCT-guided vascular model in territories supplied by significantly stenotic and non-significantly stenotic vessels. Successful 3DE/MDCT image alignment was obtained in 86 and 93 % of cases for reader one, and reader two and three, respectively. Fair agreement on the quality of automatic image alignment (intra-class correlation = 0.40) and the success of manual image alignment (Fleiss' Kappa = 0.40) among the readers was found. In territories supplied by non-significantly stenotic left circumflex arteries, LS was significantly higher in the MDCT-guided vascular model compared to the 17-segment model: -15.00 ± 7.17 (mean ± standard deviation) versus -11.87 ± 4.09 (p < 0.05). Fusion of MDCT and 3DE is feasible and provides physiologically meaningful displays of myocardial function.

  17. Computer imaging and workflow systems in the business office.

    PubMed

    Adams, W T; Veale, F H; Helmick, P M

    1999-05-01

    Computer imaging and workflow technology automates many business processes that currently are performed using paper processes. Documents are scanned into the imaging system and placed in electronic patient account folders. Authorized users throughout the organization, including preadmission, verification, admission, billing, cash posting, customer service, and financial counseling staff, have online access to the information they need when they need it. Such streamlining of business functions can increase collections and customer satisfaction while reducing labor, supply, and storage costs. Because the costs of a comprehensive computer imaging and workflow system can be considerable, healthcare organizations should consider implementing parts of such systems that can be cost-justified or include implementation as part of a larger strategic technology initiative.

  18. Observation of topological surface states and strong electron/hole imbalance in extreme magnetoresistance compound LaBi

    NASA Astrophysics Data System (ADS)

    Jiang, J.; Schröter, N. B. M.; Wu, S.-C.; Kumar, N.; Shekhar, C.; Peng, H.; Xu, X.; Chen, C.; Yang, H. F.; Hwang, C.-C.; Mo, S.-K.; Felser, C.; Yan, B. H.; Liu, Z. K.; Yang, L. X.; Chen, Y. L.

    2018-02-01

    The recent discovery of the extreme magnetoresistance (XMR) in the nonmagnetic rare-earth monopnictides LaX (X = P, As, Sb, Bi), a recently proposed new topological semimetal family, has inspired intensive research effort in the exploration of the correlation between the XMR and their electronic structures. In this work, using angle-resolved photoemission spectroscopy to investigate the three-dimensional band structure of LaBi, we unraveled its topologically nontrivial nature with the observation of multiple topological surface Dirac fermions, as supported by our ab initio calculations. Furthermore, we observed substantial imbalance between the volume of electron and hole pockets, which rules out the electron-hole compensation as the primary cause of the XMR in LaBi.

  19. Computer image processing: Geologic applications

    NASA Technical Reports Server (NTRS)

    Abrams, M. J.

    1978-01-01

    Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, (2) normalization by use of ground spectral measurements. Of the two, the first technique proved to be the most successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm can be applied to both frames, and there is no seam where the two images are joined.
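
    Dark object subtraction, the more successful of the two corrections mentioned, amounts to removing a per-band additive offset estimated from the darkest pixel; a minimal NumPy sketch:

        import numpy as np

        def dark_object_subtraction(cube):
            # cube: (bands, H, W); subtract the darkest value of each band to reduce additive scattering.
            dark = cube.reshape(cube.shape[0], -1).min(axis=1)
            return cube - dark[:, None, None]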

  20. Vanderbilt University Institute of Imaging Science Center for Computational Imaging XNAT: A multimodal data archive and processing environment.

    PubMed

    Harrigan, Robert L; Yvernault, Benjamin C; Boyd, Brian D; Damon, Stephen M; Gibney, Kyla David; Conrad, Benjamin N; Phillips, Nicholas S; Rogers, Baxter P; Gao, Yurui; Landman, Bennett A

    2016-01-01

    The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has developed a database built on XNAT housing over a quarter of a million scans. The database provides a framework for (1) rapid prototyping, (2) large scale batch processing of images and (3) scalable project management. The system uses the web-based interfaces of XNAT and REDCap to allow for graphical interaction. A python middleware layer, the Distributed Automation for XNAT (DAX) package, distributes computation across the Vanderbilt Advanced Computing Center for Research and Education high performance computing center. All software is made available as open source for use in combining portable batch scripting (PBS) grids and XNAT servers. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Profiles of Motivated Self-Regulation in College Computer Science Courses: Differences in Major versus Required Non-Major Courses

    ERIC Educational Resources Information Center

    Shell, Duane F.; Soh, Leen-Kiat

    2013-01-01

    The goal of the present study was to utilize a profiling approach to understand differences in motivation and strategic self-regulation among post-secondary STEM students in major versus required non-major computer science courses. Participants were 233 students from required introductory computer science courses (194 men; 35 women; 4 unknown) at…

  2. High-performance floating-point image computing workstation for medical applications

    NASA Astrophysics Data System (ADS)

    Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin

    1990-07-01

    The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), in multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple monitor display capability and large and fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel selectable region of interest display. A 1280 x 1024 pixel screen with 1: 1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. Up to three boards may be added to the NeXT for multiple display capability (e

  3. From Image Analysis to Computer Vision: Motives, Methods, and Milestones.

    DTIC Science & Technology

    1998-07-01

    images. Initially, work on digital image analysis dealt with specific classes of images such as text, photomicrographs, nuclear particle tracks, and aerial...photographs; but by the 1960’s, general algorithms and paradigms for image analysis began to be formulated. When the artificial intelligence...scene, but eventually from image sequences obtained by a moving camera; at this stage, image analysis had become scene analysis or computer vision

  4. Computer-aided Classification of Mammographic Masses Using Visually Sensitive Image Features

    PubMed Central

    Wang, Yunzhi; Aghaei, Faranak; Zarafshani, Ali; Qiu, Yuchen; Qian, Wei; Zheng, Bin

    2017-01-01

    Purpose To develop a new computer-aided diagnosis (CAD) scheme that computes visually sensitive image features routinely used by radiologists to develop a machine learning classifier and distinguish between the malignant and benign breast masses detected from digital mammograms. Methods An image dataset including 301 breast masses was retrospectively selected. From each segmented mass region, we computed image features that mimic five categories of visually sensitive features routinely used by radiologists in reading mammograms. We then selected five optimal features in the five feature categories and applied logistic regression models for classification. A new CAD interface was also designed to show lesion segmentation, computed feature values and classification score. Results Areas under ROC curves (AUC) were 0.786±0.026 and 0.758±0.027 when classifying mass regions depicted on the two view images, respectively. By fusing classification scores computed from two regions, AUC increased to 0.806±0.025. Conclusion This study demonstrated a new approach to developing a CAD scheme based on 5 visually sensitive image features. Combined with a “visual aid” interface, CAD results may be more easily explainable to observers and may increase their confidence in CAD-generated classification results compared with other conventional CAD approaches, which involve many complicated and visually insensitive texture features. PMID:27911353
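
    A minimal sketch of the two-view classification-and-fusion idea described above: a logistic regression model scores each mass region from five features, and the scores from the two views are averaged. The feature arrays and labels below are synthetic placeholders, not the study's dataset.

```python
# Hedged sketch: per-view logistic regression plus simple score averaging.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
X_cc = rng.normal(size=(n, 5))    # 5 "visually sensitive" features, one view (synthetic)
X_mlo = rng.normal(size=(n, 5))   # same 5 features, second view (synthetic)
y = rng.integers(0, 2, size=n)    # 1 = malignant, 0 = benign (synthetic labels)

clf_cc = LogisticRegression().fit(X_cc, y)
clf_mlo = LogisticRegression().fit(X_mlo, y)

score_cc = clf_cc.predict_proba(X_cc)[:, 1]
score_mlo = clf_mlo.predict_proba(X_mlo)[:, 1]
fused = 0.5 * (score_cc + score_mlo)   # average fusion of the two view scores
```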

  5. Computational Burden Resulting from Image Recognition of High Resolution Radar Sensors

    PubMed Central

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L.; Rufo, Elena

    2013-01-01

    This paper presents a methodology for high resolution radar image generation and automatic target recognition emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. From actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition processes are burdensome and time consuming, so to determine the most suitable implementation platform the analysis of the computational complexity is of great interest. To this end, and since target identification must be completed in real time, the computational burden of both processes, the generation and the comparison with a database, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation. PMID:23609804
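
    A rough sketch of the recognition step outlined above: compare a measured ISAR image against each simulated template in a database and return the best match. Normalized correlation is used here purely for illustration; the paper's own similarity measure may differ.

```python
# Hedged sketch: template matching of an ISAR image against a database.
import numpy as np

def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def identify(measured: np.ndarray, database: dict) -> str:
    """Return the name of the database template most similar to the measured image."""
    scores = {name: normalized_correlation(measured, tmpl) for name, tmpl in database.items()}
    return max(scores, key=scores.get)
```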

  6. Computational burden resulting from image recognition of high resolution radar sensors.

    PubMed

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L; Rufo, Elena

    2013-04-22

    This paper presents a methodology for high resolution radar image generation and automatic target recognition emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images certain signal processing algorithms must be applied to the information sensed by the radar. From actual data collected by radar the stages and algorithms needed to obtain ISAR images are revised, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition processes are burdensome and time consuming, so to determine the most suitable implementation platform the analysis of the computational complexity is of great interest. To this end and since target identification must be completed in real time, computational burden of both processes the generation and comparison with a database is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation.

  7. Missed strokes using computed tomography imaging in patients with vertigo: population-based cohort study.

    PubMed

    Grewal, Keerat; Austin, Peter C; Kapral, Moira K; Lu, Hong; Atzema, Clare L

    2015-01-01

    The purpose of this study was to determine the proportion of emergency department (ED) patients with a diagnosis of peripheral vertigo who received computed tomography (CT) head imaging in the ED and to examine whether strokes were missed using CT imaging. This population-based retrospective cohort study assessed patients who were discharged from an ED in Ontario, Canada, with a diagnosis of peripheral vertigo, April 2006 to March 2011. Patients who received CT imaging (exposed) were matched by propensity score methods to patients who did not (unexposed). If performed, CT imaging was presumed to be negative for stroke because brain stem/cerebellar stroke would result in hospitalization. We compared the incidence of stroke within 30, 90, and 365 days subsequent to ED discharge between groups, to determine whether the exposed group had a higher frequency of early strokes than the matched unexposed group. Among 41 794 qualifying patients, 8596 (20.6%) received ED head CT imaging, and 99.8% of these patients were able to be matched to a control. Among exposed patients, 25 (0.29%) were hospitalized for stroke within 30 days when compared with 11 (0.13%) among matched nonexposed patients. The relative risk of a 30- and 90-day stroke among exposed versus unexposed patients was 2.27 (95% confidence interval, 1.12-4.62) and 1.94 (95% confidence interval, 1.10-3.43), respectively. There was no difference between groups at 1 year. Strokes occurred at a median of 32.0 days (interquartile range, 4.0-33.0 days) in exposed patients, compared with 105 days (interquartile range, 11.5-204.5) in unexposed patients. One fifth of patients diagnosed with peripheral vertigo in Ontario received imaging that is not recommended in guidelines, and that imaging was associated with missed strokes. © 2014 American Heart Association, Inc.
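
    The reported 30-day relative risk (2.27; 95% CI, 1.12-4.62) follows from the event counts above (25 strokes among exposed versus 11 among matched unexposed) using the standard log-scale normal approximation. The sketch below assumes matched groups of roughly 8,580 patients each (99.8% of 8,596); the 1/n terms are small enough that the exact group size barely changes the interval.

```python
# Hedged sketch: relative risk and its 95% CI from 2x2 counts.
import math

def relative_risk_ci(a, n1, b, n0, z=1.96):
    """a events among n1 exposed, b events among n0 unexposed."""
    rr = (a / n1) / (b / n0)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n0)   # SE of log(RR)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

print(relative_risk_ci(25, 8580, 11, 8580))   # roughly (2.27, 1.12, 4.62)
```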

  8. Distributed Computing Architecture for Image-Based Wavefront Sensing and 2 D FFTs

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey S.; Dean, Bruce H.; Haghani, Shadan

    2006-01-01

    Image-based wavefront sensing (WFS) provides significant advantages over interferometric-based wavefront sensors such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore, specialized high-performance computing architectures are required in applications utilizing the image-based approach. The development and testing of these high-performance computing architectures are essential to such missions as James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and Spherical Primary Optical Telescope (SPOT). The development of these specialized computing architectures requires numerous two-dimensional Fourier Transforms, which necessitate an all-to-all communication when applied on a distributed computational architecture. Several solutions for distributed computing are presented with an emphasis on a 64-node cluster of DSPs, multiple DSP FPGAs, and an application of low-diameter graph theory. Timing results and performance analysis will be presented. The solutions offered could be applied to other all-to-all communication and scientifically computationally complex problems.
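
    The all-to-all communication mentioned above arises because a 2D FFT factors into 1D FFTs along rows, a transpose, and 1D FFTs along columns; in a distributed layout the transpose is the all-to-all exchange. Below is a single-process NumPy sketch of that decomposition (no actual message passing), included only to illustrate the structure of the computation.

```python
# Hedged sketch: 2D FFT via row FFTs, transpose, column FFTs.
import numpy as np

def fft2_by_rows_and_columns(img: np.ndarray) -> np.ndarray:
    step1 = np.fft.fft(img, axis=1)     # 1D FFT of every row
    step2 = step1.T                     # transpose: the all-to-all step in a distributed layout
    step3 = np.fft.fft(step2, axis=1)   # 1D FFT of every (former) column
    return step3.T                      # transpose back to the original orientation

img = np.random.rand(64, 64)
assert np.allclose(fft2_by_rows_and_columns(img), np.fft.fft2(img))
```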

  9. Computer system for definition of the quantitative geometry of musculature from CT images.

    PubMed

    Daniel, Matej; Iglic, Ales; Kralj-Iglic, Veronika; Konvicková, Svatava

    2005-02-01

    A computer system for the quantitative determination of musculoskeletal geometry from computed tomography (CT) images has been developed. The system processes a series of CT images to obtain a three-dimensional (3D) model of bony structures on which the effective muscle fibres can be interactively defined. The presented computer system has a flexible modular structure and is also suitable for educational purposes.

  10. Computer-based quantitative computed tomography image analysis in idiopathic pulmonary fibrosis: A mini review.

    PubMed

    Ohkubo, Hirotsugu; Nakagawa, Hiroaki; Niimi, Akio

    2018-01-01

    Idiopathic pulmonary fibrosis (IPF) is the most common type of progressive idiopathic interstitial pneumonia in adults. Computer-based image analysis methods for chest computed tomography (CT) in patients with IPF include the mean CT value of the whole lungs, density histogram analysis, the density mask technique, and texture classification methods. Most of these methods offer good assessment of pulmonary function, disease progression, and mortality. Each method has merits that can be used in clinical practice. One of the texture classification methods is reported to be superior to visual CT scoring by radiologists for correlation with pulmonary function and prediction of mortality. In this mini review, we summarize the current literature on computer-based CT image analysis of IPF and discuss its limitations and several future directions. Copyright © 2017 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.

  11. Ultrasound window-modulated compounding Nakagami imaging: Resolution improvement and computational acceleration for liver characterization.

    PubMed

    Ma, Hsiang-Yang; Lin, Ying-Hsiu; Wang, Chiao-Yin; Chen, Chiung-Nien; Ho, Ming-Chih; Tsui, Po-Hsiang

    2016-08-01

    Ultrasound Nakagami imaging is an attractive method for visualizing changes in envelope statistics. Window-modulated compounding (WMC) Nakagami imaging was reported to improve image smoothness. The sliding window technique is typically used for constructing ultrasound parametric and Nakagami images. Using a large window overlap ratio may improve the WMC Nakagami image resolution but reduces computational efficiency. Therefore, the objectives of this study include: (i) exploring the effects of the window overlap ratio on the resolution and smoothness of WMC Nakagami images; (ii) proposing a fast algorithm that is based on the convolution operator (FACO) to accelerate WMC Nakagami imaging. Computer simulations and preliminary clinical tests on liver fibrosis samples (n=48) were performed to validate the FACO-based WMC Nakagami imaging. The results demonstrated that the width of the autocorrelation function and the parameter distribution of the WMC Nakagami image reduce with the increase in the window overlap ratio. One-pixel shifting (i.e., sliding the window on the image data in steps of one pixel for parametric imaging) as the maximum overlap ratio significantly improves the WMC Nakagami image quality. Concurrently, the proposed FACO method combined with a computational platform that optimizes the matrix computation can accelerate WMC Nakagami imaging, allowing the detection of liver fibrosis-induced changes in envelope statistics. FACO-accelerated WMC Nakagami imaging is a new-generation Nakagami imaging technique with an improved image quality and fast computation. Copyright © 2016 Elsevier B.V. All rights reserved.
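
    The Nakagami shape parameter can be estimated in every window from local moments of the envelope, and computing those moments with box filters (a convolution) is the spirit of the convolution-based acceleration discussed above. The sketch below uses a synthetic Rayleigh-distributed envelope and an illustrative window size; it is not the paper's FACO implementation.

```python
# Hedged sketch: sliding-window Nakagami parameter map via box-filter moments,
# m = E[R^2]^2 / (E[R^4] - E[R^2]^2) computed locally with uniform filters.
import numpy as np
from scipy.ndimage import uniform_filter

def nakagami_map(envelope: np.ndarray, window: int = 15) -> np.ndarray:
    r2 = envelope.astype(np.float64) ** 2
    m2 = uniform_filter(r2, size=window)        # local E[R^2]
    m4 = uniform_filter(r2 ** 2, size=window)   # local E[R^4]
    var = np.maximum(m4 - m2 ** 2, 1e-12)       # local variance of R^2
    return m2 ** 2 / var                        # Nakagami shape-parameter map

envelope = np.random.rayleigh(scale=1.0, size=(128, 128))
m_img = nakagami_map(envelope)   # values near 1 for Rayleigh-distributed speckle
```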

  12. Computational high-resolution optical imaging of the living human retina

    NASA Astrophysics Data System (ADS)

    Shemonski, Nathan D.; South, Fredrick A.; Liu, Yuan-Zhi; Adie, Steven G.; Scott Carney, P.; Boppart, Stephen A.

    2015-07-01

    High-resolution in vivo imaging is of great importance for the fields of biology and medicine. The introduction of hardware-based adaptive optics (HAO) has pushed the limits of optical imaging, enabling high-resolution near diffraction-limited imaging of previously unresolvable structures. In ophthalmology, when combined with optical coherence tomography, HAO has enabled a detailed three-dimensional visualization of photoreceptor distributions and individual nerve fibre bundles in the living human retina. However, the introduction of HAO hardware and supporting software adds considerable complexity and cost to an imaging system, limiting the number of researchers and medical professionals who could benefit from the technology. Here we demonstrate a fully automated computational approach that enables high-resolution in vivo ophthalmic imaging without the need for HAO. The results demonstrate that computational methods in coherent microscopy are applicable in highly dynamic living systems.

  13. Recognition and prevention of computed radiography image artifacts.

    PubMed

    Hammerstrom, Kevin; Aldrich, John; Alves, Len; Ho, Andrew

    2006-09-01

    Initiated by complaints of image artifacts, a thorough visual and radiographic investigation of 197 Fuji, 35 Agfa, and 37 Kodak computed radiography (CR) cassettes with imaging plates (IPs) in clinical use at four radiology departments was performed. The investigation revealed that the physical deterioration of the cassettes and IPs was more extensive than previously believed. It appeared that many of the image artifacts were the direct result of premature wear of the cassettes and imaging plates. The results indicate that a quality control program for CR cassettes and IPs is essential and should include not only cleaning of the cassettes and imaging plates on a regular basis, but also visual and radiographic image inspection to limit the occurrence of image artifacts and to prolong the life cycle of the CR equipment.

  14. Viking image processing. [digital stereo imagery and computer mosaicking

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1977-01-01

    The paper discusses the camera systems capable of recording black and white and color imagery developed for the Viking Lander imaging experiment. Each Viking Lander image consisted of a matrix of numbers with 512 rows and an arbitrary number of columns up to a maximum of about 9,000. Various techniques were used in the processing of the Viking Lander images, including: (1) digital geometric transformation, (2) the processing of stereo imagery to produce three-dimensional terrain maps, and (3) computer mosaicking of distinct processed images. A series of Viking Lander images is included.

  15. Advanced imaging of the macrostructure and microstructure of bone

    NASA Technical Reports Server (NTRS)

    Genant, H. K.; Gordon, C.; Jiang, Y.; Link, T. M.; Hans, D.; Majumdar, S.; Lang, T. F.

    2000-01-01

    Noninvasive and/or nondestructive techniques are capable of providing more macro- or microstructural information about bone than standard bone densitometry. Although the latter provides important information about osteoporotic fracture risk, numerous studies indicate that bone strength is only partially explained by bone mineral density. Quantitative assessment of macro- and microstructural features may improve our ability to estimate bone strength. The methods available for quantitatively assessing macrostructure include (besides conventional radiographs) quantitative computed tomography (QCT) and volumetric quantitative computed tomography (vQCT). Methods for assessing microstructure of trabecular bone noninvasively and/or nondestructively include high-resolution computed tomography (hrCT), micro-computed tomography (muCT), high-resolution magnetic resonance (hrMR), and micromagnetic resonance (muMR). vQCT, hrCT and hrMR are generally applicable in vivo; muCT and muMR are principally applicable in vitro. Although considerable progress has been made in the noninvasive and/or nondestructive imaging of the macro- and microstructure of bone, considerable challenges and dilemmas remain. From a technical perspective, the balance between spatial resolution versus sampling size, or between signal-to-noise versus radiation dose or acquisition time, needs further consideration, as do the trade-offs between the complexity and expense of equipment and the availability and accessibility of the methods. The relative merits of in vitro imaging and its ultrahigh resolution but invasiveness versus those of in vivo imaging and its modest resolution but noninvasiveness also deserve careful attention. From a clinical perspective, the challenges for bone imaging include balancing the relative advantages of simple bone densitometry against the more complex architectural features of bone or, similarly, the deeper research requirements against the broader clinical needs. The

  16. Computer image generation: Reconfigurability as a strategy in high fidelity space applications

    NASA Technical Reports Server (NTRS)

    Bartholomew, Michael J.

    1989-01-01

    The demand for realistic, high fidelity, computer image generation systems to support space simulation is well established. However, as the number and diversity of space applications increase, the complexity and cost of computer image generation systems also increase. One strategy used to harmonize cost with varied requirements is the establishment of a reconfigurable image generation system that can be adapted rapidly and easily to meet new and changing requirements. The reconfigurability strategy through the life cycle of system conception, specification, design, implementation, operation, and support for high fidelity computer image generation systems is discussed. The discussion is limited to those issues directly associated with reconfigurability and adaptability of a specialized scene generation system in a multi-faceted space applications environment. Examples and insights gained through the recent development and installation of the Improved Multi-function Scene Generation System at the Johnson Space Center Systems Engineering Simulator are reviewed and compared with current simulator industry practices. The results are clear; the strategy of reconfigurability applied to space simulation requirements provides a viable path to supporting diverse applications with an adaptable computer image generation system.

  17. Comparison of computed tomography and magnetic resonance imaging for the evaluation of canine intranasal neoplasia.

    PubMed

    Drees, R; Forrest, L J; Chappell, R

    2009-07-01

    Canine intranasal neoplasia is commonly evaluated using computed tomography to indicate the diagnosis, to determine disease extent, to guide histological sampling location and to plan treatment. With the expanding use of magnetic resonance imaging in veterinary medicine, this modality has been recently applied for the same purpose. The aim of this study was to compare the features of canine intranasal neoplasia using computed tomography and magnetic resonance imaging. Twenty-one dogs with confirmed intranasal neoplasia underwent both computed tomography and magnetic resonance imaging. The images were reviewed retrospectively for the bony and soft tissue features of intranasal neoplasia. Overall, computed tomography and magnetic resonance imaging performed very similarly. However, lysis of bones bordering the nasal cavity and mucosal thickening were found on computed tomography images more often than on magnetic resonance images. Small amounts of fluid in the nasal cavity were more often seen on magnetic resonance images. However, fluid in the frontal sinuses was seen equally well with both modalities. We conclude that computed tomography is satisfactory for evaluation of canine intranasal neoplasia, and no clinically relevant benefit is gained using magnetic resonance imaging for intranasal neoplasia without extension into the cranial cavity.

  18. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers

    PubMed Central

    Filipovic, Nenad D.

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. Breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that already existing image databases have large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of the already existing algorithm for region-of-interest based image segmentation for mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As a dataflow engine (DFE) of such HPRDC, Maxeler's acceleration card is used. The experiments for examining the acceleration of that algorithm on the Reconfigurable Dataflow Computers (RDCs) are performed with two types of mammogram images with different resolutions. There were, also, several DFE configurations and each of them gave a different acceleration value of algorithm execution. Those acceleration values are presented and experimental results showed good acceleration. PMID:28611851

  19. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers.

    PubMed

    Milankovic, Ivan L; Mijailovic, Nikola V; Filipovic, Nenad D; Peulic, Aleksandar S

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. Breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that already existing image databases have large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of the already existing algorithm for region-of-interest based image segmentation for mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As a dataflow engine (DFE) of such HPRDC, Maxeler's acceleration card is used. The experiments for examining the acceleration of that algorithm on the Reconfigurable Dataflow Computers (RDCs) are performed with two types of mammogram images with different resolutions. There were, also, several DFE configurations and each of them gave a different acceleration value of algorithm execution. Those acceleration values are presented and experimental results showed good acceleration.

  20. Seismic imaging using finite-differences and parallel computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ober, C.C.

    1997-12-31

    A key to reducing the risks and costs associated with oil and gas exploration is the fast, accurate imaging of complex geologies, such as salt domes in the Gulf of Mexico and overthrust regions in US onshore regions. Prestack depth migration generally yields the most accurate images, and one approach to this is to solve the scalar wave equation using finite differences. As part of an ongoing ACTI project funded by the US Department of Energy, a finite difference, 3-D prestack, depth migration code has been developed. The goal of this work is to demonstrate that massively parallel computers can be used efficiently for seismic imaging, and that sufficient computing power exists (or soon will exist) to make finite difference, prestack, depth migration practical for oil and gas exploration. Several problems had to be addressed to get an efficient code for the Intel Paragon. These include efficient I/O, efficient parallel tridiagonal solves, and high single-node performance. Furthermore, to provide portable code the author has been restricted to the use of high-level programming languages (C and Fortran) and interprocessor communications using MPI. He has been using the SUNMOS operating system, which has affected many of his programming decisions. He will present images created from two verification datasets (the Marmousi Model and the SEG/EAEG 3D Salt Model). Also, he will show recent images from real datasets, and point out locations of improved imaging. Finally, he will discuss areas of current research which will hopefully improve the image quality and reduce computational costs.
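
    A toy 2D finite-difference time step for the scalar wave equation, the basic kernel underlying finite-difference migration schemes like the one described above. The grid size, velocity, and time step are illustrative only; this is not the 3-D prestack migration code itself.

```python
# Hedged sketch: explicit 2D finite-difference update for u_tt = c^2 (u_xx + u_yy).
import numpy as np

nx, nz, dx, dt, c = 200, 200, 10.0, 0.001, 2000.0   # illustrative grid and velocity
u_prev = np.zeros((nz, nx))
u_curr = np.zeros((nz, nx))
u_curr[nz // 2, nx // 2] = 1.0   # impulsive source in the middle of the grid

def step(u_prev, u_curr):
    lap = np.zeros_like(u_curr)
    lap[1:-1, 1:-1] = (u_curr[2:, 1:-1] + u_curr[:-2, 1:-1] +
                       u_curr[1:-1, 2:] + u_curr[1:-1, :-2] -
                       4.0 * u_curr[1:-1, 1:-1]) / dx**2
    u_next = 2.0 * u_curr - u_prev + (c * dt) ** 2 * lap
    return u_curr, u_next

for _ in range(100):                 # CFL number c*dt/dx = 0.2, stable
    u_prev, u_curr = step(u_prev, u_curr)
```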

  1. PROSPECTIVE COMPARISON OF TUMOR STAGING USING COMPUTED TOMOGRAPHY VERSUS MAGNETIC RESONANCE IMAGING FINDINGS IN DOGS WITH NASAL NEOPLASIA: A PILOT STUDY.

    PubMed

    Lux, Cassie N; Culp, William T N; Johnson, Lynelle R; Kent, Michael; Mayhew, Philipp; Daniaux, Lise A; Carr, Alaina; Puchalski, Sarah

    2017-05-01

    Identification of nasal neoplasia extension and tumor staging in dogs is most commonly performed using computed tomography (CT), however magnetic resonance imaging (MRI) is routinely used in human medicine. A prospective pilot study enrolling six dogs with nasal neoplasia was performed with CT and MRI studies acquired under the same anesthetic episode. Interobserver comparison and comparison between the two imaging modalities with regard to bidimensional measurements of the nasal tumors, tumor staging using historical schemes, and assignment of an ordinal scale of tumor margin clarity at the tumor-soft tissue interface were performed. The hypotheses included that MRI would have greater tumor measurements, result in higher tumor staging, and more clearly define the tumor soft tissue interface when compared to CT. Evaluation of bone involvement of the nasal cavity and head showed a high level of agreement between CT and MRI. Estimation of tumor volume using bidimensional measurements was higher on MRI imaging in 5/6 dogs, and resulted in a median tumor volume which was 18.4% higher than CT imaging. Disagreement between CT and MRI was noted with meningeal enhancement, in which two dogs were positive for meningeal enhancement on MRI and negative on CT. One of six dogs had a higher tumor stage on MRI compared to CT, while the remaining five agreed. Magnetic resonance imaging resulted in larger bidimensional measurements and tumor volume estimates, along with a higher likelihood of identifying meningeal enhancement when compared to CT imaging. Magnetic resonance imaging may provide integral information for tumor staging, prognosis, and treatment planning. © 2017 American College of Veterinary Radiology.

  2. A database of body-only computer-generated pictures of women for body-image studies: Development and preliminary validation.

    PubMed

    Moussally, Joanna M; Rochat, Lucien; Posada, Andrés; Van der Linden, Martial

    2017-02-01

    The body-shape-related stimuli used in most body-image studies have several limitations (e.g., a lack of pilot validation procedures and the use of non-body-shape-related control/neutral stimuli). We therefore developed a database of 61 computer-generated body-only pictures of women, wherein bodies were methodically manipulated in terms of fatness versus thinness. Eighty-two young women assessed the pictures' attractiveness, beauty, harmony (valence ratings), and body shape (assessed on a thinness/fatness axis), providing normative data for valence and body shape ratings. First, stimuli manipulated for fatness versus thinness conveyed comparable emotional intensities regarding the valence and body shape ratings. Second, different subcategories of stimuli were obtained on the basis of variations in body shape and valence judgments. Fat and thin bodies were distributed into several subcategories depending on their valence ratings, and a subcategory containing stimuli that were neutral in terms of valence and body shape was identified. Interestingly, at a descriptive level, the thinness/fatness manipulations of the bodies were in a curvilinear relationship with the valence ratings: Thin bodies were not only judged as positive, but also as negative when their estimated body mass indexes (BMIs) decreased too much. Finally, convergent validity was assessed by exploring the impacts of body-image-related variables (BMI, thin-ideal internalization, and body dissatisfaction) on participants' judgments of the bodies. Valence judgments, but not body shape judgments, were influenced by the participants' levels of thin-ideal internalization and body dissatisfaction. Participants' BMIs did not significantly influence their judgments. Given these findings, this database contains relevant material that can be used in various fields, primarily for studies of body-image disturbance or eating disorders.

  3. Photo-reconnaissance applications of computer processing of images.

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1972-01-01

    Discussion of imaging processing techniques for enhancement and calibration of Jet Propulsion Laboratory imaging experiment pictures returned from NASA space vehicles such as Ranger, Mariner and Surveyor. Particular attention is given to data transmission, resolution vs recognition, and color aspects of digital data processing. The effectiveness of these techniques in applications to images from a wide variety of sources is noted. It is anticipated that the use of computer processing for enhancement of imagery will increase with the improvement and cost reduction of these techniques in the future.

  4. Three-Dimensional Cone Beam Computed Tomography Volumetric Outcomes of rhBMP-2/Demineralized Bone Matrix versus Iliac Crest Bone Graft for Alveolar Cleft Reconstruction.

    PubMed

    Liang, Fan; Yen, Stephen L-K; Imahiyerobo, Thomas; Sanborn, Luke; Yen, Leia; Yen, Daniel; Nazarian, Sheila; Jedrzejewski, Breanna; Urata, Mark; Hammoudeh, Jeffrey

    2017-10-01

    Recent studies indicate that recombinant human bone morphogenetic protein-2 (rhBMP-2) in a demineralized bone matrix scaffold is a comparable alternative to iliac bone autograft in the setting of secondary alveolar cleft repair. Postreconstruction occlusal radiographs demonstrate improved bone stock when rhBMP-2/demineralized bone matrix (DBM) scaffold is used but lack the capacity to evaluate bone growth in three dimensions. This study uses cone beam computed tomography to provide the first clinical evaluation of volumetric and density comparisons between these two treatment modalities. A prospective study was conducted with 31 patients and 36 repairs of the alveolar cleft over a 2-year period. Twenty-one repairs used rhBMP-2/DBM scaffold and 14 repairs used iliac bone grafting. Postoperatively, occlusal radiographs were obtained at 3 months to evaluate bone fill; cone beam computed tomographic images were obtained at 6 to 9 months to compare volumetric and density data. At 3 months, postoperative occlusal radiographs demonstrated that 67 percent of patients receiving rhBMP-2/DBM scaffold had complete bone fill of the alveolus, versus 56 percent of patients in the autologous group. In contrast, cone beam computed tomographic data showed 31.6 percent (95 percent CI, 24.2 to 38.5 percent) fill in the rhBMP-2 group compared with 32.5 percent (95 percent CI, 22.1 to 42.9 percent) in the autologous population. Density analysis demonstrated identical average values between the groups (1.38 g/cc). These data demonstrate comparable bone regrowth and density values following secondary alveolar cleft repair using rhBMP-2/DBM scaffold versus autologous iliac bone graft. Cone beam computed tomography provides a more nuanced understanding of true bone regeneration within the alveolar cleft that may contribute to the information provided by occlusal radiographs alone. Therapeutic, II.

  5. End-user satisfaction of a patient education tool manual versus computer-generated tool.

    PubMed

    Tronni, C; Welebob, E

    1996-01-01

    This article reports a nonexperimental comparative study of end-user satisfaction before and after implementation of a vendor supplied computerized system (Micromedex, Inc) for providing up-to-date patient instructions regarding diseases, injuries, procedures, and medications. The purpose of this research was to measure the satisfaction of nurses who directly interact with a specific patient educational software application and to compare user satisfaction with manual versus computer generated materials. A computing satisfaction questionnaire that uses a scale of 1 to 5 (1 being the lowest) was used to measure end-user computing satisfaction in five constructs: content, accuracy, format, ease of use, and timeliness. Summary statistics were used to calculate mean ratings for each of the questionnaire's 12 items and for each of the five constructs. Mean differences between the ratings before and after implementation of the five constructs were significant by paired t test. Total user satisfaction improved with the computerized system, and the computer generated materials were given a higher rating than were the manual materials. Implications of these findings are discussed.

  6. Comparison of computed tomography and magnetic resonance imaging for the evaluation of canine intranasal neoplasia

    PubMed Central

    Drees, R.; Forrest, L. J.; Chappell, R.

    2009-01-01

    Objectives Canine intranasal neoplasia is commonly evaluated using computed tomography to indicate the diagnosis, to determine disease extent, to guide histological sampling location and to plan treatment. With the expanding use of magnetic resonance imaging in veterinary medicine, this modality has been recently applied for the same purpose. The aim of this study was to compare the features of canine intranasal neoplasia using computed tomography and magnetic resonance imaging. Methods Twenty-one dogs with confirmed intranasal neoplasia underwent both computed tomography and magnetic resonance imaging. The images were reviewed retrospectively for the bony and soft tissue features of intranasal neoplasia. Results Overall, computed tomography and magnetic resonance imaging performed very similarly. However, lysis of bones bordering the nasal cavity and mucosal thickening were found on computed tomography images more often than on magnetic resonance images. Small amounts of fluid in the nasal cavity were more often seen on magnetic resonance images. However, fluid in the frontal sinuses was seen equally well with both modalities. Clinical Significance We conclude that computed tomography is satisfactory for evaluation of canine intranasal neoplasia, and no clinically relevant benefit is gained using magnetic resonance imaging for intranasal neoplasia without extension into the cranial cavity. PMID:19508490

  7. Computer vision research with new imaging technology

    NASA Astrophysics Data System (ADS)

    Hou, Guangqi; Liu, Fei; Sun, Zhenan

    2015-12-01

    Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, which record both intensity values and directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods only perform well on strongly textured surfaces, but the depth map contains numerous holes and large ambiguities on textureless or low-textured regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. Then the depth map is estimated through the epipolar plane images (EPIs) method. Finally, the high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera with different poses.

  8. Computing volume potentials for noninvasive imaging of cardiac excitation.

    PubMed

    van der Graaf, A W Maurits; Bhagirath, Pranav; van Driel, Vincent J H M; Ramanna, Hemanth; de Hooge, Jacques; de Groot, Natasja M S; Götte, Marco J W

    2015-03-01

    In noninvasive imaging of cardiac excitation, the use of body surface potentials (BSP) rather than body volume potentials (BVP) has been favored due to enhanced computational efficiency and reduced modeling effort. Nowadays, increased computational power and the availability of open source software enable the calculation of BVP for clinical purposes. In order to illustrate the possible advantages of this approach, the explanatory power of BVP is investigated using a rectangular tank filled with an electrolytic conductor and a patient-specific three-dimensional model. MRI images of the tank and of a patient were obtained in three orthogonal directions using a turbo spin echo MRI sequence. MRI images were segmented in three dimensions using custom-written software. Gmsh software was used for mesh generation. BVP were computed using a transfer matrix and FEniCS software. The solution for 240,000 nodes, corresponding to a resolution of 5 mm throughout the thorax volume, was computed in 3 minutes. The tank experiment revealed that an increased electrode surface renders the position of the 4 V equipotential plane insensitive to mesh cell size and reduces simulated deviations. In the patient-specific model, the impact of assigning a different conductivity to lung tissue on the distribution of volume potentials could be visualized. Generation of high quality volume meshes and computation of BVP with a resolution of 5 mm is feasible using generally available software and hardware. Estimation of BVP may lead to an improved understanding of the genesis of BSP and sources of local inaccuracies. © 2014 Wiley Periodicals, Inc.
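
    A toy legacy-FEniCS (dolfin) sketch in the spirit of the volume-conductor computation described above: solve div(sigma grad u) = 0 on a unit cube "tank" with two electrode patches held at +1 V and -1 V. The geometry, boundary patches, and conductivity value are placeholders, not the paper's thorax model or its transfer-matrix formulation.

```python
# Hedged sketch: volume potentials in a homogeneous conductor with legacy FEniCS.
from dolfin import (UnitCubeMesh, FunctionSpace, TrialFunction, TestFunction,
                    Function, Constant, DirichletBC, inner, grad, dx, solve)

mesh = UnitCubeMesh(16, 16, 16)
V = FunctionSpace(mesh, "CG", 1)
u, v = TrialFunction(V), TestFunction(V)
sigma = Constant(0.2)                       # uniform conductivity (S/m), illustrative

a = sigma * inner(grad(u), grad(v)) * dx    # weak form of div(sigma grad u) = 0
L = Constant(0.0) * v * dx

bcs = [DirichletBC(V, Constant(+1.0), "on_boundary && near(x[2], 1.0)"),
       DirichletBC(V, Constant(-1.0), "on_boundary && near(x[2], 0.0)")]

uh = Function(V)
solve(a == L, uh, bcs)                      # uh now holds the volume potential
```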

  9. COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    EPA Science Inventory

    T Martonen1 and J Schroeter2

    1Experimental Toxicology Division, National Health and Environmental Effects Research Laboratory, U.S. EPA, Research Triangle Park, NC 27711 USA and 2Curriculum in Toxicology, Unive...

  10. Prior image constrained image reconstruction in emerging computed tomography applications

    NASA Astrophysics Data System (ADS)

    Brunner, Stephen T.

    Advances have been made in computed tomography (CT), especially in the past five years, by incorporating prior images into the image reconstruction process. In this dissertation, we investigate prior image constrained image reconstruction in three emerging CT applications: dual-energy CT, multi-energy photon-counting CT, and cone-beam CT in image-guided radiation therapy. First, we investigate the application of Prior Image Constrained Compressed Sensing (PICCS) in dual-energy CT, which has been called "one of the hottest research areas in CT." Phantom and animal studies are conducted using a state-of-the-art 64-slice GE Discovery 750 HD CT scanner to investigate the extent to which PICCS can enable radiation dose reduction in material density and virtual monochromatic imaging. Second, we extend the application of PICCS from dual-energy CT to multi-energy photon-counting CT, which has been called "one of the 12 topics in CT to be critical in the next decade." Numerical simulations are conducted to generate multiple energy bin images for a photon-counting CT acquisition and to investigate the extent to which PICCS can enable radiation dose efficiency improvement. Third, we investigate the performance of a newly proposed prior image constrained scatter correction technique to correct scatter-induced shading artifacts in cone-beam CT, which, when used in image-guided radiation therapy procedures, can assist in patient localization, and potentially, dose verification and adaptive radiation therapy. Phantom studies are conducted using a Varian 2100 EX system with an on-board imager to investigate the extent to which the prior image constrained scatter correction technique can mitigate scatter-induced shading artifacts in cone-beam CT. Results show that these prior image constrained image reconstruction techniques can reduce radiation dose in dual-energy CT by 50% in phantom and animal studies in material density and virtual monochromatic imaging, can lead to radiation
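
    The PICCS idea referenced above balances a total-variation penalty on the difference from the prior image against a total-variation penalty on the image itself, subject to data fidelity. The sketch below only evaluates that objective for a candidate image; a real reconstruction would minimize it iteratively, and the system matrix, data, and weights here are placeholders.

```python
# Hedged sketch: evaluating a PICCS-style objective
#   alpha * TV(x - x_prior) + (1 - alpha) * TV(x) + lam * ||A x - b||^2
import numpy as np

def total_variation(img: np.ndarray) -> float:
    gx = np.diff(img, axis=0)
    gy = np.diff(img, axis=1)
    return float(np.abs(gx).sum() + np.abs(gy).sum())   # anisotropic TV

def piccs_cost(x, x_prior, A, b, alpha=0.5, lam=1.0):
    tv_term = alpha * total_variation(x - x_prior) + (1 - alpha) * total_variation(x)
    fidelity = lam * np.sum((A @ x.ravel() - b) ** 2)    # data-consistency term
    return tv_term + fidelity
```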

  11. A mathematical model for computer image tracking.

    PubMed

    Legters, G R; Young, T Y

    1982-06-01

    A mathematical model using an operator formulation for a moving object in a sequence of images is presented. Time-varying translation and rotation operators are derived to describe the motion. A variational estimation algorithm is developed to track the dynamic parameters of the operators. The occlusion problem is alleviated by using a predictive Kalman filter to keep the tracking on course during severe occlusion. The tracking algorithm (variational estimation in conjunction with Kalman filter) is implemented to track moving objects with occasional occlusion in computer-simulated binary images.
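
    A minimal constant-velocity Kalman filter sketch of the role the filter plays above: during occlusion the measurement update is skipped and the filter coasts on its prediction, keeping the track on course. State layout and noise levels are illustrative, not the paper's model.

```python
# Hedged sketch: predictive Kalman filter for image-plane tracking with occlusion.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)  # state transition
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)                                # observe position only
Q = 0.01 * np.eye(4)                                                             # process noise
R = 1.0 * np.eye(2)                                                              # measurement noise

def kalman_step(x, P, z=None):
    """One predict/update cycle; pass z=None while the object is occluded."""
    x = F @ x                       # predict state [px, py, vx, vy]
    P = F @ P @ F.T + Q
    if z is not None:               # measurement available (object visible)
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
    return x, P
```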

  12. Incidental Findings in Abdominal Dual-Energy Computed Tomography: Correlation Between True Noncontrast and Virtual Noncontrast Images Considering Renal and Liver Cysts and Adrenal Masses.

    PubMed

    Slebocki, Karin; Kraus, Bastian; Chang, De-Hua; Hellmich, Martin; Maintz, David; Bangard, Christopher

    To assess correlation between attenuation measurements of incidental findings in abdominal second generation dual-energy computed tomography (CT) on true noncontrast (TNC) and virtual noncontrast (VNC) images. Sixty-three patients underwent arterial dual-energy CT (Somatom Definition Flash, Siemens; pitch factor, 0.75-1.0; gantry rotation time, 0.28 seconds) after endovascular aneurysm repair, consisting of a TNC single energy CT scan (collimation, 128 × 0.6 mm; 120 kVp) and a dual-energy arterial phase scan (collimation, 32 × 0.6 mm, 140 and 100 kVp; blended, 120 kVp data set). Attenuation measurements in Hounsfield units (HU) of liver parenchyma and incidental findings like renal and hepatic cysts and adrenal masses on TNC and VNC images were done by drawing regions of interest. Statistical analysis was performed by paired t test and Pearson correlation. Incidental findings were detected in 56 (89%) patients. There was excellent correlation for both renal (n = 40) and hepatic cysts (n = 12) as well as adrenal masses (n = 6) with a Pearson correlation of 0.896, 0.800, and 0.945, respectively, and mean attenuation values on TNC and VNC images of 10.6 HU ± 12.8 versus 5.1 HU ± 17.5 (attenuation value range from -8.8 to 59.1 HU vs -11.8 to 73.4 HU), 6.4 HU ± 5.8 versus 6.3 HU ± 4.6 (attenuation value range from 2.0 to 16.2 HU vs -3.0 to 15.9 HU), and 12.8 HU ± 11.2 versus 12.4 HU ± 10.2 (attenuation value range from -2.3 to 27.5 HU vs -2.2 to 23.6 HU), respectively. As proof of principle, liver parenchyma measurements also showed excellent correlation between TNC and VNC (n = 40) images with a Pearson correlation of 0.839 and mean attenuation values on TNC and VNC images of 47.2 HU ± 10.5 versus 43.8 HU ± 8.7 (attenuation value range from 21.9 to 60.2 HU vs 4.5 to 65.3 HU). In conclusion, attenuation measurements of incidental findings like renal cysts or adrenal masses on TNC and VNC images derived from second generation dual-energy CT scans show excellent
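
    A small sketch of the statistical comparison used above: Pearson correlation and a paired t-test between attenuation values (HU) measured on TNC and VNC images. The arrays are synthetic stand-ins for the per-lesion ROI measurements.

```python
# Hedged sketch: correlating paired TNC and VNC attenuation measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
tnc = rng.normal(10.0, 12.0, size=40)          # e.g. renal-cyst HU on TNC images (synthetic)
vnc = tnc + rng.normal(-5.0, 6.0, size=40)     # corresponding HU on VNC images (synthetic)

r, _ = stats.pearsonr(tnc, vnc)
t, p = stats.ttest_rel(tnc, vnc)
print(f"Pearson r = {r:.3f}, paired t-test p = {p:.3g}")
```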

  13. High performance computing for deformable image registration: towards a new paradigm in adaptive radiotherapy.

    PubMed

    Samant, Sanjiv S; Xia, Junyi; Muyan-Ozcelik, Pinar; Owens, John D

    2008-08-01

    Readily available temporal imaging or time-series volumetric (4D) imaging has become an indispensable component of treatment planning and adaptive radiotherapy (ART) at many radiotherapy centers. Deformable image registration (DIR) is also used in other areas of medical imaging, including motion corrected image reconstruction. Due to long computation time, clinical applications of DIR in radiation therapy and elsewhere have been limited and consequently relegated to offline analysis. With the recent advances in hardware and software, graphics processing unit (GPU) based computing is an emerging technology for general purpose computation, including DIR, and is suitable for highly parallelized computing. However, traditional general purpose computation on the GPU is limited because of the constraints of the available programming platforms. In addition, compared to CPU programming, the GPU currently has reduced dedicated processor memory, which can limit the useful working data set for parallelized processing. We present an implementation of the demons algorithm using the NVIDIA 8800 GTX GPU and the new CUDA programming language. The GPU performance will be compared with single threading and multithreading CPU implementations on an Intel dual core 2.4 GHz CPU using the C programming language. CUDA provides a C-like language programming interface, and allows for direct access to the highly parallel compute units in the GPU. Comparisons for volumetric clinical lung images acquired using 4DCT were carried out. Computation time for 100 iterations in the range of 1.8-13.5 s was observed for the GPU with image size ranging from 2.0 x 10(6) to 14.2 x 10(6) pixels. The GPU registration was 55-61 times faster than the CPU for the single threading implementation, and 34-39 times faster for the multithreading implementation. For CPU based computing, the computational time generally has a linear dependence on image size for medical imaging data. Computational efficiency is
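
    A single 2D demons-style update in NumPy, illustrating the per-voxel computation that GPU implementations like the one above parallelize: a force derived from the intensity difference and the fixed-image gradient, followed by Gaussian smoothing of the displacement field. This is a CPU illustration only, not the paper's CUDA code, and the smoothing parameter is an assumption.

```python
# Hedged sketch: one demons-style displacement update with Gaussian regularization.
import numpy as np
from scipy.ndimage import gaussian_filter

def demons_update(fixed: np.ndarray, moving: np.ndarray, sigma: float = 2.0):
    diff = moving - fixed
    gy, gx = np.gradient(fixed)
    denom = gx**2 + gy**2 + diff**2
    denom[denom == 0] = 1e-12
    ux = diff * gx / denom          # displacement update, x component
    uy = diff * gy / denom          # displacement update, y component
    # Gaussian regularization of the displacement field
    return gaussian_filter(ux, sigma), gaussian_filter(uy, sigma)
```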

  14. Resolution Versus Error for Computational Electron Microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luzi, Lorenzo; Stevens, Andrew; Yang, Hao

    misleading. For this reason, the BPFA has no overlap to avoid excessive smoothing. Moreover, the resolution of the simulated image is approximately 9.2 (1/nm), so we only look that far in the frequency domain when performing FRC. If the FRC curve does not cross over the threshold, a resolution value of 9.2 is used. We emphasize that our reported results are conservative. The FRC and PSNR values using the ground truth and the reconstructed images are shown in Tables 1 and 2. The left side shows the metrics without using BPFA (missing pixels) and the right side shows the metrics after using BPFA. When we did not use BPFA, the Fourier transform was estimated [4]. Some threshold curves have been studied [5], but they are derived for additive noise models. Since we have a Poisson noise model, we have used the more conservative threshold of 0.5 for our calculations. Ten images were used to construct each cell of the tables in the form of the mean of the metric plus or minus its standard deviation. As expected, the PSNR dies off much quicker than the FRC values for the same image. For the 100% and 80% sampled versions of the truth image, the resolution only dies off when the dose is 5. However, the PSNR dies off rapidly as the dose is reduced. For the 1000, 500, and 50 dose images, the FRC is the maximum, or close, until we undersample at 20%. The PSNR for these values tapers down as we get into the bottom right hand corner of the table, even though the resolution remains high. Overall, we find that undersampled images can be reconstructed to acceptable resolution even when the dose per pixel is also reduced [6]. References: [1] A Stevens, H Yang, L Carin et al. Microscopy 63(1), (2014), pp. 41. [2] A Stevens, L Kovarik, P Abellan et al. Advanced Structural and Chemical Imaging 1(1), (2015), pp. 1. [3] M Zhou, H Chen, J Paisley et al. Image Processing, IEEE Transactions on 21(1), (2012), pp. 130. [4] V. Y. Liepin'sh. Automatic control and computer sciences 30(3), (1996), pp. 20.
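
    A small sketch of the Fourier ring correlation (FRC) metric discussed above: correlate the Fourier transforms of a reconstruction and a reference over rings of spatial frequency, and read resolution off where the curve drops below the chosen threshold (0.5 here, as in the conservative choice above). Ring count and binning are illustrative.

```python
# Hedged sketch: Fourier ring correlation between two images.
import numpy as np

def frc(img1: np.ndarray, img2: np.ndarray, n_rings: int = 50) -> np.ndarray:
    F1 = np.fft.fftshift(np.fft.fft2(img1))
    F2 = np.fft.fftshift(np.fft.fft2(img2))
    ny, nx = img1.shape
    yy, xx = np.indices((ny, nx))
    r = np.hypot(yy - ny // 2, xx - nx // 2)
    edges = np.linspace(0, r.max() / np.sqrt(2), n_rings + 1)   # stay inside the Nyquist square
    curve = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        ring = (r >= lo) & (r < hi)
        num = np.abs(np.sum(F1[ring] * np.conj(F2[ring])))
        den = np.sqrt(np.sum(np.abs(F1[ring])**2) * np.sum(np.abs(F2[ring])**2))
        curve.append(num / (den + 1e-12))
    return np.array(curve)   # resolution = first ring where curve < 0.5
```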

  15. Computed inverse resonance imaging for magnetic susceptibility map reconstruction.

    PubMed

    Chen, Zikuan; Calhoun, Vince

    2012-01-01

    This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
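
    A sketch of the second inverse step (field map to susceptibility map) using one of the three approaches listed above, a Tikhonov-regularized inverse of the dipole kernel applied in k-space. The field map would come from the unwrapped MR phase; the kernel normalization, array sizes, and regularization weight are illustrative assumptions, and the TV-iteration solver favored in the abstract is not shown.

```python
# Hedged sketch: Tikhonov-regularized dipole deconvolution in k-space.
import numpy as np

def dipole_kernel(shape):
    kz, ky, kx = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # avoid division by zero at DC
    return 1.0 / 3.0 - kz**2 / k2          # unit-less dipole kernel, B0 along the first axis

def tikhonov_qsm(field_map: np.ndarray, lam: float = 0.1) -> np.ndarray:
    D = dipole_kernel(field_map.shape)
    F = np.fft.fftn(field_map)
    chi_k = np.conj(D) * F / (np.abs(D)**2 + lam)   # regularized deconvolution
    return np.real(np.fft.ifftn(chi_k))
```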

  16. Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory

    NASA Astrophysics Data System (ADS)

    Dichter, W.; Doris, K.; Conkling, C.

    1982-06-01

    A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.

  17. Observer Evaluation of a Metal Artifact Reduction Algorithm Applied to Head and Neck Cone Beam Computed Tomographic Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korpics, Mark; Surucu, Murat; Mescioglu, Ibrahim

    Purpose and Objectives: To quantify, through an observer study, the reduction in metal artifacts on cone beam computed tomographic (CBCT) images using a projection-interpolation algorithm, on images containing metal artifacts from dental fillings and implants in patients treated for head and neck (H&N) cancer. Methods and Materials: An interpolation-substitution algorithm was applied to H&N CBCT images containing metal artifacts from dental fillings and implants. Image quality with respect to metal artifacts was evaluated subjectively and objectively. First, 6 independent radiation oncologists were asked to rank randomly sorted blinded images (before and after metal artifact reduction) using a 5-point rating scale (1 = severe artifacts; 5 = no artifacts). Second, the standard deviation of different regions of interest (ROI) within each image was calculated and compared with the mean rating scores. Results: The interpolation-substitution technique successfully reduced metal artifacts in 70% of the cases. From a total of 60 images from 15 H&N cancer patients undergoing image guided radiation therapy, the mean rating score on the uncorrected images was 2.3 ± 1.1, versus 3.3 ± 1.0 for the corrected images. The mean difference in ranking score between uncorrected and corrected images was 1.0 (95% confidence interval: 0.9-1.2, P<.05). The standard deviation of each ROI significantly decreased after artifact reduction (P<.01). Moreover, a negative correlation between the mean rating score for each image and the standard deviation of the oral cavity and bilateral cheeks was observed. Conclusion: The interpolation-substitution algorithm is efficient and effective for reducing metal artifacts caused by dental fillings and implants on CBCT images, as demonstrated by the statistically significant increase in observer image quality ranking and by the decrease in ROI standard deviation between uncorrected and corrected images.
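
    A sketch of the general interpolation-substitution idea: within each projection view, detector bins flagged as shadowed by metal are replaced by linear interpolation from their unaffected neighbours before reconstruction. The sinogram layout and metal mask are illustrative inputs, not the algorithm evaluated in the study.

```python
# Hedged sketch: linear interpolation across the metal trace in a sinogram.
import numpy as np

def interpolate_metal_trace(sinogram: np.ndarray, metal_mask: np.ndarray) -> np.ndarray:
    """sinogram, metal_mask: (n_views, n_bins); mask is True where metal projects."""
    corrected = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for v in range(sinogram.shape[0]):
        bad = metal_mask[v]
        if bad.any() and (~bad).any():
            # replace corrupted bins with values interpolated from clean neighbours
            corrected[v, bad] = np.interp(bins[bad], bins[~bad], sinogram[v, ~bad])
    return corrected
```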

  18. Imaging diffusive media using time-independent and time-harmonic sources: dependence of image quality on imaging algorithms, target volume, weight matrix, and view angles

    NASA Astrophysics Data System (ADS)

    Chang, Jenghwa; Aronson, Raphael; Graber, Harry L.; Barbour, Randall L.

    1995-05-01

    We present results examining the dependence of image quality for imaging in dense scattering media as influenced by the choice of parameters pertaining to the physical measurement and factors influencing the efficiency of the computation. The former includes the density of the weight matrix as affected by the target volume, view angle, and source condition. The latter includes the density of the weight matrix and the type of algorithm used. These were examined by solving a one-step linear perturbation equation derived from the transport equation using three different algorithms: the POCS, CGD, and SART algorithms with constraints. The above were explored by evaluating four different 3D cylindrical phantom media: a homogeneous medium, a medium containing a single black rod on the axis, one containing a single black rod parallel to the axis, and one containing thirteen black rods arrayed in the shape of an 'X'. Solutions to the forward problem were computed using Monte Carlo methods for an impulse source, from which time-independent and time-harmonic detector responses were calculated. The influence of target volume on image quality and computational efficiency was studied by computing solutions for three types of reconstructions: 1) 3D reconstruction, which considered each voxel individually, 2) 2D reconstruction, which assumed that symmetry along the cylinder axis was known a priori, 3) 2D limited reconstruction, which assumed that only those voxels in the plane of the detectors contribute information to the detector readings. The effect of view angle was explored by comparing computed images obtained from a single source, whose position was varied, as well as for the type of tomographic measurement scheme used (i.e., radial scan versus transaxial scan). The former condition was also examined for the dependence of the above on choice of source condition [i.e., cw (2D reconstructions) versus time-harmonic (2D limited reconstructions) source]. The efficiency of the computational effort was explored
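
    A sketch of a constrained SART iteration, one of the three solvers compared above for the one-step linear perturbation equation (weight matrix times perturbation equals detector-reading change). The weight matrix, measurement vector, relaxation factor, and the non-negativity constraint are illustrative assumptions.

```python
# Hedged sketch: SART iterations with a simple positivity constraint.
import numpy as np

def sart(W: np.ndarray, dI: np.ndarray, n_iter: int = 50, relax: float = 0.5) -> np.ndarray:
    """Solve W x ~= dI iteratively; W is the (detectors x voxels) weight matrix."""
    row_sums = W.sum(axis=1); row_sums[row_sums == 0] = 1.0
    col_sums = W.sum(axis=0); col_sums[col_sums == 0] = 1.0
    x = np.zeros(W.shape[1])
    for _ in range(n_iter):
        residual = (dI - W @ x) / row_sums            # normalize residual by row (detector) sums
        x = x + relax * (W.T @ residual) / col_sums   # back-project, normalize by column sums
        x = np.maximum(x, 0.0)                        # illustrative non-negativity constraint
    return x
```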

  19. High-definition multidetector computed tomography for evaluation of coronary artery stents: comparison to standard-definition 64-detector row computed tomography.

    PubMed

    Min, James K; Swaminathan, Rajesh V; Vass, Melissa; Gallagher, Scott; Weinsaft, Jonathan W

    2009-01-01

    The assessment of coronary stents with present-generation 64-detector row computed tomography scanners that use filtered backprojection and operate at standard definition of 0.5-0.75 mm (SDCT) is limited by imaging artifacts and noise. We evaluated the performance of a novel, high-definition 64-slice CT scanner (HDCT) with improved spatial resolution (0.23 mm) and adaptive statistical iterative reconstruction (ASIR) for evaluation of coronary artery stents. HDCT and SDCT stent imaging was performed with the use of an ex vivo phantom. HDCT was compared with SDCT with both smooth and sharp kernels for stent intraluminal diameter, intraluminal area, and image noise. Intrastent visualization was assessed with an ASIR algorithm on HDCT scans, compared with the filtered backprojection algorithms used by SDCT. Six coronary stents (2.5, 2.5, 2.75, 3.0, 3.5, and 4.0 mm) were analyzed by 2 independent readers. Interobserver correlation was high for both HDCT and SDCT. HDCT yielded substantially larger luminal area visualization compared with SDCT, both for smooth (29.4+/-14.5 versus 20.1+/-13.0; P<0.001) and sharp (32.0+/-15.2 versus 25.5+/-12.0; P<0.001) kernels. Stent diameter was higher with HDCT compared with SDCT, for both smooth (1.54+/-0.59 versus 1.00+/-0.50; P<0.0001) and detailed (1.47+/-0.65 versus 1.08+/-0.54; P<0.0001) kernels. With detailed kernels, HDCT scans that used ASIR showed a trend toward decreased image noise compared with SDCT filtered backprojection. On the basis of this ex vivo study, HDCT provides superior detection of intrastent luminal area and diameter visualization compared with SDCT. ASIR image reconstruction for HDCT scans enhances the in-stent assessment while decreasing image noise.

  20. Comparison of magnetic resonance imaging and computed tomography in suspected lesions in the posterior cranial fossa.

    PubMed Central

    Teasdale, G. M.; Hadley, D. M.; Lawrence, A.; Bone, I.; Burton, H.; Grant, R.; Condon, B.; Macpherson, P.; Rowan, J.

    1989-01-01

    OBJECTIVE--To compare computed tomography and magnetic resonance imaging in investigating patients suspected of having a lesion in the posterior cranial fossa. DESIGN--Randomised allocation of newly referred patients to undergo either computed tomography or magnetic resonance imaging; the alternative investigation was performed subsequently only in response to a request from the referring doctor. SETTING--A regional neuroscience centre serving 2.7 million. PATIENTS--1020 Patients recruited between April 1986 and December 1987, all suspected by neurologists, neurosurgeons, or other specialists of having a lesion in the posterior fossa and referred for neuroradiology. The groups allocated to undergo computed tomography or magnetic resonance imaging were well matched in distributions of age, sex, specialty of referring doctor, investigation as an inpatient or an outpatient, suspected site of lesion, and presumed disease process; the referring doctor's confidence in the initial clinical diagnosis was also similar. INTERVENTIONS--After the patients had been imaged by either computed tomography or magnetic resonance (using a resistive magnet of 0.15 T) doctors were given the radiologist's report and a form asking if they considered that imaging with the alternative technique was necessary and, if so, why; it also asked for their current diagnoses and their confidence in them. MAIN OUTCOME MEASURES--Number of requests for the alternative method of investigation. Assessment of characteristics of patients for whom further imaging was requested and lesions that were suspected initially and how the results of the second imaging affected clinicians' and radiologists' opinions. RESULTS--Ninety three of the 501 patients who initially underwent computed tomography were referred subsequently for magnetic resonance imaging whereas only 28 of the 493 patients who initially underwent magnetic resonance imaging were referred subsequently for computed tomography. Over the study the

  1. High Speed Computational Ghost Imaging via Spatial Sweeping

    NASA Astrophysics Data System (ADS)

    Wang, Yuwang; Liu, Yang; Suo, Jinli; Situ, Guohai; Qiao, Chang; Dai, Qionghai

    2017-03-01

    Computational ghost imaging (CGI) achieves single-pixel imaging by using a Spatial Light Modulator (SLM) to generate structured illuminations for spatially resolved information encoding. The imaging speed of CGI is limited by the modulation frequency of available SLMs, which holds back its practical application. This paper proposes to bypass this limitation by trading off the SLM’s redundant spatial resolution for a multiplication of the modulation frequency. Specifically, a pair of galvanic mirrors sweeping across the high-resolution SLM multiplies the modulation frequency within the spatial resolution gap between the SLM and the final reconstruction. A proof-of-principle setup with two mid-range galvanic mirrors achieves ghost imaging as fast as 42 Hz at 80 × 80-pixel resolution, 5 times faster than the state of the art, and holds potential for a further order-of-magnitude speed-up through hardware upgrades. Our approach brings a significant improvement in the imaging speed of ghost imaging and pushes it towards practical applications.
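
    For reference, the conventional CGI estimate correlates the bucket (single-pixel) measurements with the known illumination patterns; a minimal sketch follows, with array shapes and names chosen for illustration only.

    ```python
    import numpy as np

    def cgi_reconstruct(patterns, bucket):
        """Basic computational ghost imaging estimate from correlations.

        patterns : (n_patterns, H, W) illumination patterns sent to the SLM.
        bucket   : (n_patterns,) single-pixel (bucket) detector values.
        Returns the conventional CGI estimate <B*P> - <B><P>.
        """
        patterns = np.asarray(patterns, float)
        bucket = np.asarray(bucket, float)
        mean_pattern = patterns.mean(axis=0)
        weighted = np.tensordot(bucket, patterns, axes=(0, 0)) / len(bucket)
        return weighted - bucket.mean() * mean_pattern
    ```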

  2. Nanoparticle imaging probes for molecular imaging with computed tomography and application to cancer imaging

    NASA Astrophysics Data System (ADS)

    Roeder, Ryan K.; Curtis, Tyler E.; Nallathamby, Prakash D.; Irimata, Lisa E.; McGinnity, Tracie L.; Cole, Lisa E.; Vargo-Gogola, Tracy; Cowden Dahl, Karen D.

    2017-03-01

    Precision imaging is needed to realize precision medicine in cancer detection and treatment. Molecular imaging offers the ability to target and identify tumors, associated abnormalities, and specific cell populations with overexpressed receptors. Nuclear imaging and radionuclide probes provide high sensitivity but subject the patient to a high radiation dose and provide limited spatiotemporal information, requiring combined computed tomography (CT) for anatomic imaging. Therefore, nanoparticle contrast agents have been designed to enable molecular imaging and improve detection in CT alone. Core-shell nanoparticles provide a powerful platform for designing tailored imaging probes. The composition of the core is chosen for enabling strong X-ray contrast, multi-agent imaging with photon-counting spectral CT, and multimodal imaging. A silica shell is used for protective, biocompatible encapsulation of the core composition, volume-loading fluorophores or radionuclides for multimodal imaging, and facile surface functionalization with antibodies or small molecules for targeted delivery. Multi-agent (k-edge) imaging and quantitative molecular imaging with spectral CT was demonstrated using current clinical agents (iodine and BaSO4) and a proposed spectral library of contrast agents (Gd2O3, HfO2, and Au). Bisphosphonate-functionalized Au nanoparticles were demonstrated to enhance sensitivity and specificity for the detection of breast microcalcifications by conventional radiography and CT in both normal and dense mammary tissue using murine models. Moreover, photon-counting spectral CT enabled quantitative material decomposition of the Au and calcium signals. Immunoconjugated Au@SiO2 nanoparticles enabled highly-specific targeting of CD133+ ovarian cancer stem cells for contrast-enhanced detection in model tumors.

  3. Computer-assisted versus oral-and-written dietary history taking for diabetes mellitus.

    PubMed

    Wei, Igor; Pappas, Yannis; Car, Josip; Sheikh, Aziz; Majeed, Azeem

    2011-12-07

    Diabetes is a chronic illness characterised by insulin resistance or deficiency, resulting in elevated glycosylated haemoglobin A1c (HbA1c) levels. Diet and adherence to dietary advice are associated with lower HbA1c levels and control of disease. Dietary history may be an effective clinical tool for diabetes management and has traditionally been taken by oral-and-written methods, although it can also be collected using computer-assisted history taking systems (CAHTS). Although CAHTS were first described in the 1960s, there remains uncertainty about the impact of these methods on dietary history collection, clinical care and patient outcomes such as quality of life. To assess the effects of computer-assisted versus oral-and-written dietary history taking on patient outcomes for diabetes mellitus. We searched The Cochrane Library (issue 6, 2011), MEDLINE (January 1985 to June 2011), EMBASE (January 1980 to June 2011) and CINAHL (January 1981 to June 2011). Reference lists of retrieved articles were also searched, and no limits were imposed on language or publication status. Randomised controlled trials of computer-assisted versus oral-and-written history taking in patients with diabetes mellitus. Two authors independently scanned the title and abstract of retrieved articles. Potentially relevant articles were investigated as full text. Studies that met the inclusion criteria were abstracted for relevant population and intervention characteristics with any disagreements resolved by discussion, or by a third party. Risk of bias was similarly assessed independently. Of the 2991 studies retrieved, only one study with 38 study participants compared the two methods of history taking over a total of eight weeks. The authors found that as patients became increasingly familiar with using CAHTS, the correlation between patients' food records and computer assessments improved. Reported fat intake decreased in the control group and increased when queried by the computer

  4. Solar physics applications of computer graphics and image processing

    NASA Technical Reports Server (NTRS)

    Altschuler, M. D.

    1985-01-01

    Computer graphics devices coupled with computers and carefully developed software provide new opportunities to achieve insight into the geometry and time evolution of scalar, vector, and tensor fields and to extract more information quickly and cheaply from the same image data. Two or more different fields which overlay in space can be calculated from the data (and the physics), then displayed from any perspective, and compared visually. The maximum regions of one field can be compared with the gradients of another. Time changing fields can also be compared. Images can be added, subtracted, transformed, noise filtered, frequency filtered, contrast enhanced, color coded, enlarged, compressed, parameterized, and histogrammed, in whole or section by section. Today it is possible to process multiple digital images to reveal spatial and temporal correlations and cross correlations. Data from different observatories taken at different times can be processed, interpolated, and transformed to a common coordinate system.

  5. Digital image processing using parallel computing based on CUDA technology

    NASA Astrophysics Data System (ADS)

    Skirnevskiy, I. P.; Pustovit, A. V.; Abdrashitova, M. O.

    2017-01-01

    This article describes the expediency of using a graphics processing unit (GPU) for big data processing in the context of digital image processing. It provides a short description of parallel computing technology and its usage in different areas, a definition of image noise, and a brief overview of some noise removal algorithms. It also describes some basic requirements that a noise removal algorithm should meet when applied to computed tomography projections. It provides a comparison of the performance with and without a GPU, as well as with different proportions of CPU and GPU usage.

  6. Computed tomography image-guided surgery in complex acetabular fractures.

    PubMed

    Brown, G A; Willis, M C; Firoozbakhsh, K; Barmada, A; Tessman, C L; Montgomery, A

    2000-01-01

    Eleven complex acetabular fractures in 10 patients were treated by open reduction with internal fixation incorporating computed tomography image guided software intraoperatively. Each of the implants placed under image guidance was found to be accurate and without penetration of the pelvis or joint space. The setup time for the system was minimal. Accuracy in the range of 1 mm was found when registration was precise (eight cases) and was in the range of 3.5 mm when registration was only approximate (three cases). Added benefits included reduced intraoperative fluoroscopic time, less need for more extensive dissection, and obviation of additional surgical approaches in some cases. Compared with a series of similar fractures treated before this image guided series, the reduction in operative time was significant. For patients with complex anterior and posterior combined fractures, the average operation times with and without application of three-dimensional imaging technique were, respectively, 5 hours 15 minutes and 6 hours 14 minutes, revealing 16% less operative time for those who had surgery using image guidance. In the single column fracture group, the operation time for those with three-dimensional imaging application, was 2 hours 58 minutes and for those with traditional surgery, 3 hours 42 minutes, indicating 20% less operative time for those with imaging modality. Intraoperative computed tomography guided imagery was found to be an accurate and suitable method for use in the operative treatment of complex acetabular fractures with substantial displacement.

  7. Improving limited-projection-angle fluorescence molecular tomography using a co-registered x-ray computed tomography scan.

    PubMed

    Radrich, Karin; Ale, Angelique; Ermolayev, Vladimir; Ntziachristos, Vasilis

    2012-12-01

    We examine the improvement in imaging performance, such as axial resolution and signal localization, when employing limited-projection-angle fluorescence molecular tomography (FMT) together with x-ray computed tomography (XCT) measurements versus stand-alone FMT. For this purpose, we employed living mice, bearing a spontaneous lung tumor model, and imaged them with FMT and XCT under identical geometrical conditions using fluorescent probes for cancer targeting. The XCT data was employed, herein, as structural prior information to guide the FMT reconstruction. Gold standard images were provided by fluorescence images of mouse cryoslices, providing the ground truth in fluorescence bio-distribution. Upon comparison of FMT images versus images reconstructed using hybrid FMT and XCT data, we demonstrate marked improvements in image accuracy. This work relates to currently disseminated FMT systems, using limited projection scans, and can be employed to enhance their performance.

  8. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression is comprised of three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
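
    A minimal sketch of the DPCM-then-lossless-compression idea described above follows. It uses a simple horizontal-predictor DPCM and, because LZW is not available in the Python standard library, zlib's DEFLATE coder as a stand-in lossless coder; it also assumes pixel data of at most 12 bits, as is typical for radiologic images.

    ```python
    import numpy as np
    import zlib

    def dpcm_then_compress(image):
        """Horizontal-predictor DPCM followed by a lossless coder.

        Each pixel is replaced by its difference from the previous pixel in the
        row; the differential image is then passed to zlib (DEFLATE), standing
        in for the LZW coder discussed above. Assumes <=12-bit pixel data, so
        the differences fit comfortably in int16.
        """
        img = np.asarray(image, dtype=np.int16)
        diff = img.copy()
        diff[:, 1:] = img[:, 1:] - img[:, :-1]
        raw_size = len(zlib.compress(img.tobytes()))
        dpcm_size = len(zlib.compress(diff.tobytes()))
        return raw_size, dpcm_size

    # For smooth radiologic images the DPCM-transformed data usually compress
    # noticeably better than the raw pixel values.
    ```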

  9. Radiological interpretation of images displayed on tablet computers: a systematic review

    PubMed Central

    Armfield, N R; Smith, A C

    2015-01-01

    Objective: To review the published evidence and to determine if radiological diagnostic accuracy is compromised when images are displayed on a tablet computer and thereby inform practice on using tablet computers for radiological interpretation by on-call radiologists. Methods: We searched the PubMed and EMBASE databases for studies on the diagnostic accuracy or diagnostic reliability of images interpreted on tablet computers. Studies were screened for inclusion based on pre-determined inclusion and exclusion criteria. Studies were assessed for quality and risk of bias using Quality Appraisal of Diagnostic Reliability Studies or the revised Quality Assessment of Diagnostic Accuracy Studies tool. Treatment of studies was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Results: 11 studies met the inclusion criteria. 10 of these studies tested the Apple iPad® (Apple, Cupertino, CA). The included studies reported high sensitivity (84–98%), specificity (74–100%) and accuracy rates (98–100%) for radiological diagnosis. There was no statistically significant difference in accuracy between a tablet computer and a digital imaging and communication in medicine-calibrated control display. There was a near complete consensus from authors on the non-inferiority of diagnostic accuracy of images displayed on a tablet computer. All of the included studies were judged to be at risk of bias. Conclusion: Our findings suggest that the diagnostic accuracy of radiological interpretation is not compromised by using a tablet computer. This result is only relevant to the Apple iPad and to the modalities of CT, MRI and plain radiography. Advances in knowledge: The iPad may be appropriate for an on-call radiologist to use for radiological interpretation. PMID:25882691

  10. Radiological interpretation of images displayed on tablet computers: a systematic review.

    PubMed

    Caffery, L J; Armfield, N R; Smith, A C

    2015-06-01

    To review the published evidence and to determine if radiological diagnostic accuracy is compromised when images are displayed on a tablet computer and thereby inform practice on using tablet computers for radiological interpretation by on-call radiologists. We searched the PubMed and EMBASE databases for studies on the diagnostic accuracy or diagnostic reliability of images interpreted on tablet computers. Studies were screened for inclusion based on pre-determined inclusion and exclusion criteria. Studies were assessed for quality and risk of bias using Quality Appraisal of Diagnostic Reliability Studies or the revised Quality Assessment of Diagnostic Accuracy Studies tool. Treatment of studies was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). 11 studies met the inclusion criteria. 10 of these studies tested the Apple iPad(®) (Apple, Cupertino, CA). The included studies reported high sensitivity (84-98%), specificity (74-100%) and accuracy rates (98-100%) for radiological diagnosis. There was no statistically significant difference in accuracy between a tablet computer and a digital imaging and communication in medicine-calibrated control display. There was a near complete consensus from authors on the non-inferiority of diagnostic accuracy of images displayed on a tablet computer. All of the included studies were judged to be at risk of bias. Our findings suggest that the diagnostic accuracy of radiological interpretation is not compromised by using a tablet computer. This result is only relevant to the Apple iPad and to the modalities of CT, MRI and plain radiography. The iPad may be appropriate for an on-call radiologist to use for radiological interpretation.

  11. TM digital image products for applications. [computer compatible tapes

    NASA Technical Reports Server (NTRS)

    Barker, J. L.; Gunther, F. J.; Abrams, R. B.; Ball, D.

    1984-01-01

    The image characteristics of digital data generated by the LANDSAT 4 thematic mapper (TM) are discussed. Digital data from the TM reside in tape files at various stages of image processing. Within each image data file, the image lines are blocked by a factor of 5 for a computer compatible tape CCT-BT, or 4 for a CCT-AT and CCT-PT; the layout of the image file differs in each case. Nominal geometric corrections, which provide proper geodetic relationships between different parts of the image, are available only for the CCT-PT. It is concluded that detector 3 of band 5 on the TM does not respond; this channel of data needs replacement. The empty bin phenomenon in CCT-AT images results from integer truncation of mixed-mode arithmetic operations.

  12. Computing Challenges in Coded Mask Imaging

    NASA Technical Reports Server (NTRS)

    Skinner, Gerald

    2009-01-01

    This slide presentation reviews the complications and challenges in developing computer systems for coded mask imaging telescopes. The coded mask technique is used when there is no other way to build the telescope (i.e., when wide fields of view, energies too high for focusing optics or too low for Compton/tracker techniques, and very good angular resolution are required). The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position sensitive detectors used for coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern. The correlation with the mask pattern is described. The matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, a comparison of the EXIST/HET with the SWIFT/BAT, and details of the design of the EXIST/HET.
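
    The correlation-based decoding mentioned above can be sketched in a few lines: a balanced (zero-mean) version of the mask is cross-correlated with the recorded detector pattern to recover the sky image. The sketch below uses FFT-based circular correlation and is illustrative only; it does not reproduce any specific instrument's decoding chain.

    ```python
    import numpy as np
    from numpy.fft import fft2, ifft2

    def decode_coded_mask(detector_image, mask):
        """Cross-correlation decoding of a coded-mask shadowgram.

        detector_image and mask are 2D arrays of the same shape; a balanced
        decoding array (mask rescaled to zero mean) is correlated with the
        recorded count pattern to recover the sky image.
        """
        decoding = mask - mask.mean()                 # balanced correlation
        # circular cross-correlation via the FFT
        return np.real(ifft2(fft2(detector_image) * np.conj(fft2(decoding))))
    ```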

  13. Development of computational small animal models and their applications in preclinical imaging and therapy research.

    PubMed

    Xie, Tianwu; Zaidi, Habib

    2016-01-01

    The development of multimodality preclinical imaging techniques and the rapid growth of realistic computer simulation tools have promoted the construction and application of computational laboratory animal models in preclinical research. Since the early 1990s, over 120 realistic computational animal models have been reported in the literature and used as surrogates to characterize the anatomy of actual animals for the simulation of preclinical studies involving the use of bioluminescence tomography, fluorescence molecular tomography, positron emission tomography, single-photon emission computed tomography, microcomputed tomography, magnetic resonance imaging, and optical imaging. Other applications include electromagnetic field simulation, ionizing and nonionizing radiation dosimetry, and the development and evaluation of new methodologies for multimodality image coregistration, segmentation, and reconstruction of small animal images. This paper provides a comprehensive review of the history and fundamental technologies used for the development of computational small animal models with a particular focus on their application in preclinical imaging as well as nonionizing and ionizing radiation dosimetry calculations. An overview is given of the overall process involved in the design of these models, including the fundamental elements used for the construction of different types of computational models, the identification of original anatomical data, the simulation tools used for solving various computational problems, and the applications of computational animal models in preclinical research. The authors also analyze the characteristics of categories of computational models (stylized, voxel-based, and boundary representation) and discuss the technical challenges faced at the present time as well as research needs in the future.

  14. Radiation dose and magnification in pelvic X-ray: EOS™ imaging system versus plain radiographs.

    PubMed

    Chiron, P; Demoulin, L; Wytrykowski, K; Cavaignac, E; Reina, N; Murgier, J

    2017-12-01

    In plain pelvic X-ray, magnification makes measurement unreliable. The EOS™ (EOS Imaging, Paris, France) imaging system is reputed to reproduce patient anatomy exactly, with a lower radiation dose. This, however, has not been assessed according to patient weight, although both magnification and irradiation are known to vary with weight. We therefore conducted a prospective comparative study, to compare: (1) image magnification and (2) radiation dose between the EOS imaging system and plain X-ray. The EOS imaging system reproduces patient anatomy exactly, regardless of weight, unlike plain X-ray. A single-center comparative study of plain pelvic X-ray and 2D EOS radiography was performed in 183 patients: 186 arthroplasties; 104 male, 81 female; mean age 61.3±13.7 years (range, 24-87 years). Magnification and radiation dose (dose-area product [DAP]) were compared between the two systems in 186 hips in patients with a mean body-mass index (BMI) of 27.1±5.3 kg/m² (range, 17.6-42.3 kg/m²), including 7 with morbid obesity. Mean magnification was zero using the EOS system, regardless of patient weight, compared to 1.15±0.05 (range, 1-1.32) on plain X-ray (P<10⁻⁵). In patients with BMI<25, mean magnification on plain X-ray was 1.15±0.05 (range, 1-1.25) and, in patients with morbid obesity, 1.22±0.06 (range, 1.18-1.32). The mean radiation dose was 8.19±2.63 dGy/cm² (range, 1.77-14.24) with the EOS system, versus 19.38±12.37 dGy/cm² (range, 4.77-81.75) with plain X-ray (P<10⁻⁴). For BMI >40, mean radiation dose was 9.36±2.57 dGy/cm² (range, 7.4-14.2) with the EOS system, versus 44.76±22.21 (range, 25.2-81.7) with plain X-ray. Radiation dose increased by 0.20 dGy with each extra BMI point for the EOS system, versus 0.74 dGy for plain X-ray. Magnification did not vary with patient weight using the EOS system, unlike plain X-ray, and radiation dose was 2.5-fold lower. Level of evidence: 3, prospective case-control study. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  15. A low-cost vector processor boosting compute-intensive image processing operations

    NASA Technical Reports Server (NTRS)

    Adorf, Hans-Martin

    1992-01-01

    Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP-boards for standard workstations, complemented by mathematical/statistical libraries, is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation is presented of the standard Tarasko-Richardson-Lucy restoration algorithm on an Intel i860-based VP-board which is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.
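
    The Richardson-Lucy part of the restoration algorithm mentioned above follows a standard multiplicative update; a minimal sketch is given below, assuming a known, shift-invariant point spread function. It is a generic illustration, not the VP-board implementation described in the record.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(observed, psf, n_iter=30):
        """Standard Richardson-Lucy deconvolution (multiplicative updates)."""
        estimate = np.full_like(observed, observed.mean(), dtype=float)
        psf_mirror = psf[::-1, ::-1]
        eps = 1e-12
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode='same')
            ratio = observed / (blurred + eps)
            estimate *= fftconvolve(ratio, psf_mirror, mode='same')
        return estimate
    ```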

  16. System Matrix Analysis for Computed Tomography Imaging

    PubMed Central

    Flores, Liubov; Vidal, Vicent; VerdĂş, Gumersindo

    2015-01-01

    In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesired. These issues demand that high quality CT images can be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate elements of the matrix and we present results based on real projection data. PMID:26575482
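
    A simplified 2D sketch of the Siddon idea follows: the parametric positions where a ray crosses the grid lines are merged and sorted, and consecutive crossings give the voxels traversed and the intersection lengths that form one row of the system matrix. The sketch assumes the ray is not parallel to either axis; names and conventions are illustrative.

    ```python
    import numpy as np

    def siddon_2d(p0, p1, nx, ny, dx=1.0, dy=1.0, origin=(0.0, 0.0)):
        """Voxel indices crossed by the ray p0->p1 and intersection lengths.

        Simplified Siddon-style sketch; assumes the ray is not parallel to
        either axis of the nx-by-ny pixel grid.
        """
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        d = p1 - p0
        ox, oy = origin
        # Parametric positions (alphas) where the ray crosses each grid line.
        ax = (ox + np.arange(nx + 1) * dx - p0[0]) / d[0]
        ay = (oy + np.arange(ny + 1) * dy - p0[1]) / d[1]
        a_min = max(min(ax[0], ax[-1]), min(ay[0], ay[-1]), 0.0)
        a_max = min(max(ax[0], ax[-1]), max(ay[0], ay[-1]), 1.0)
        if a_max <= a_min:                      # ray misses the grid entirely
            return np.empty((0, 2), int), np.empty(0)
        alphas = np.concatenate(([a_min], ax[(ax > a_min) & (ax < a_max)],
                                 ay[(ay > a_min) & (ay < a_max)], [a_max]))
        alphas = np.unique(alphas)              # sorted, duplicates removed
        lengths = np.diff(alphas) * np.linalg.norm(d)
        mids = p0 + np.outer((alphas[:-1] + alphas[1:]) / 2.0, d)
        cols = np.floor((mids[:, 0] - ox) / dx).astype(int)
        rows = np.floor((mids[:, 1] - oy) / dy).astype(int)
        keep = lengths > 1e-12
        return np.stack([rows[keep], cols[keep]], axis=1), lengths[keep]
    ```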

  17. Computational imaging of defects in commercial substrates for electronic and photonic devices

    NASA Astrophysics Data System (ADS)

    Fukuzawa, Masayuki; Kashiwagi, Ryo; Yamada, Masayoshi

    2012-03-01

    Computational defect imaging has been performed in commercial substrates for electronic and photonic devices by combining the transmission profile acquired with an imaging type of linear polariscope with a computational algorithm to extract a small amount of birefringence. The computational images of phase retardation δ exhibited spatial inhomogeneity of defect-induced birefringence in GaP, LiNbO3, and SiC substrates, which was not detected by conventional 'visual inspection' based on simple optical refraction or transmission because of poor sensitivity. The typical imaging time was less than 30 seconds for a 3-inch diameter substrate at a spatial resolution of 200 μm, whereas a scanning polariscope required 2 hours to reach the same spatial resolution. Since the proposed technique achieves high sensitivity, short imaging time, and wide coverage of substrate materials, which are practical advantages over laboratory-scale apparatus such as X-ray topography and electron microscopy, it is useful for nondestructive inspection of various commercial substrates in the production of electronic and photonic devices.

  18. Computational model of lightness perception in high dynamic range imaging

    NASA Astrophysics Data System (ADS)

    Krawczyk, Grzegorz; Myszkowski, Karol; Seidel, Hans-Peter

    2006-02-01

    An anchoring theory of lightness perception by Gilchrist et al. [1999] explains many characteristics of the human visual system, such as lightness constancy and its spectacular failures, which are important in the perception of images. The principal concept of this theory is the perception of complex scenes in terms of groups of consistent areas (frameworks). Such areas, following the gestalt theorists, are defined by regions of common illumination. The key aspect of image perception is the estimation of lightness within each framework through anchoring to the luminance perceived as white, followed by the computation of the global lightness. In this paper we provide a computational model for automatic decomposition of HDR images into frameworks. We derive a tone mapping operator which predicts lightness perception of real-world scenes and aims at its accurate reproduction on low dynamic range displays. Furthermore, such a decomposition into frameworks opens new grounds for local image analysis in view of human perception.

  19. Molar axis estimation from computed tomography images.

    PubMed

    Dongxia Zhang; Yangzhou Gan; Zeyang Xia; Xinwen Zhou; Shoubin Liu; Jing Xiong; Guanglin Li

    2016-08-01

    Estimation of the tooth axis is needed for some clinical dental treatments. Existing methods require segmenting the tooth volume from Computed Tomography (CT) images and then estimating the axis from the tooth volume. However, they may fail when estimating the molar axis because tooth segmentation from CT images is challenging, and current segmentation methods may produce poor results, especially for tilted molars, which leads to failure of the axis estimation. To resolve this problem, this paper proposes a new method for molar axis estimation from CT images. The key innovation is that, instead of estimating the 3D axis of each molar from the segmented volume, the method estimates the 3D axis from two projection images. The method includes three steps. (1) The 3D images of each molar are projected onto two 2D image planes. (2) The molar contour is segmented and its 2D axis is extracted in each 2D projection image; Principal Component Analysis (PCA) and a modified symmetry axis detection algorithm are employed to extract the 2D axis from the segmented molar contour. (3) A 3D molar axis is obtained by combining the two 2D axes. Experimental results verified that the proposed method was effective in estimating the molar axis from CT images.
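
    Step (2) above uses PCA to obtain a 2D axis from the segmented contour. A minimal sketch of that sub-step is shown below (the symmetry-axis refinement is omitted); the function name and array layout are assumptions for illustration.

    ```python
    import numpy as np

    def contour_axis_2d(points):
        """First principal component of 2D contour points as the axis estimate.

        points : (N, 2) array of contour coordinates from one projection image.
        Returns (centroid, unit_direction); the 3D molar axis is then obtained
        by combining the directions found in the two projection planes.
        """
        pts = np.asarray(points, float)
        centroid = pts.mean(axis=0)
        # SVD of the centred points; the first right-singular vector is the
        # direction of largest variance, i.e. the long axis of the contour.
        _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
        return centroid, vt[0]
    ```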

  20. SU-E-I-13: Evaluation of Metal Artifact Reduction (MAR) Software On Computed Tomography (CT) Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, V; Kohli, K

    2015-06-15

    Purpose: A new commercially available metal artifact reduction (MAR) software in computed tomography (CT) imaging was evaluated with phantoms in the presence of metals. The goal was to assess the ability of the software to restore the CT number in the vicinity of the metals without impacting the image quality. Methods: A Catphan 504 was scanned with a GE Optima RT 580 CT scanner (GE Healthcare, Milwaukee, WI) and the images were reconstructed with and without the MAR software. Both datasets were analyzed with Image Owl QA software (Image Owl Inc, Greenwich, NY). CT number sensitometry, MTF, low contrast, uniformity, noise and spatial accuracy were compared for scans with and without MAR software. In addition, an in-house made phantom was scanned with and without a stainless steel insert at three different locations. The accuracy of the CT number and metal insert dimension were investigated as well. Results: Comparisons between scans with and without the MAR algorithm on the Catphan phantom demonstrate similar results for image quality. However, noise was slightly higher for the MAR algorithm. Evaluation of the CT number at various locations of the in-house made phantom was also performed. The baseline HU, obtained from the scan without a metal insert, was compared to scans with the stainless steel insert at 3 different locations. The HU difference between the baseline scan and the metal scan was reduced when the MAR algorithm was applied. In addition, the physical diameter of the stainless steel rod was over-estimated by the MAR algorithm by 0.9 mm. Conclusion: This work indicates that, with the presence of metal in CT scans, the MAR algorithm is capable of providing a more accurate CT number without compromising the overall image quality. Future work will include the dosimetric impact of the MAR algorithm.

  1. Spectral prior image constrained compressed sensing (spectral PICCS) for photon-counting computed tomography

    NASA Astrophysics Data System (ADS)

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-09-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution.
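
    A hedged sketch of the PICCS-style objective referred to above is given below, using anisotropic total variation; the value of alpha, the array shapes, and the function names are illustrative, and the data-fidelity constraint is only indicated in a comment rather than enforced.

    ```python
    import numpy as np

    def total_variation(img):
        """Anisotropic total variation of a 2D image."""
        return (np.abs(np.diff(img, axis=0)).sum()
                + np.abs(np.diff(img, axis=1)).sum())

    def piccs_objective(x, x_prior, alpha=0.5):
        """PICCS-style objective: alpha*TV(x - x_prior) + (1 - alpha)*TV(x).

        In spectral PICCS, x is a narrow-energy-bin image and x_prior is the
        full-spectrum FBP image; the objective is minimised subject to a data
        fidelity constraint ||A x - y_bin|| <= epsilon (not shown here).
        """
        return alpha * total_variation(x - x_prior) + (1 - alpha) * total_variation(x)
    ```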

  2. Spectral Prior Image Constrained Compressed Sensing (Spectral PICCS) for Photon-Counting Computed Tomography

    PubMed Central

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-01-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in-vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43~73%) without sacrificing CT number accuracy or spatial resolution. PMID:27551878

  3. [Computational medical imaging (radiomics) and potential for immuno-oncology].

    PubMed

    Sun, R; Limkin, E J; Dercle, L; Reuzé, S; Zacharaki, E I; Chargari, C; Schernberg, A; Dirand, A S; Alexis, A; Paragios, N; Deutsch, É; Ferté, C; Robert, C

    2017-10-01

    The arrival of immunotherapy has profoundly changed the management of multiple cancers, achieving unexpected tumour responses. However, to date, the majority of patients do not respond to these new treatments. Identifying biomarkers that can detect responding patients early is a major challenge. Computational medical imaging (also known as radiomics) is a promising and rapidly growing discipline. This new approach consists of the analysis of high-dimensional data extracted from medical imaging to further describe tumour phenotypes. This approach has the advantages of being non-invasive, capable of evaluating the tumour and its microenvironment in their entirety, thus characterising spatial heterogeneity, and being easily repeatable over time. The end goal of radiomics is to determine imaging biomarkers as decision support tools for clinical practice and to facilitate better understanding of cancer biology, allowing the assessment of the changes throughout the evolution of the disease and the therapeutic sequence. This review will describe the process of computational imaging analysis and present its potential in immuno-oncology. Copyright © 2017 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.

  4. Enhancements in medicine by integrating content based image retrieval in computer-aided diagnosis

    NASA Astrophysics Data System (ADS)

    Aggarwal, Preeti; Sardana, H. K.

    2010-02-01

    Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. With CAD, radiologists use the computer output as a "second opinion" and make the final decisions. Image retrieval is a useful tool to help radiologists review medical images and reach a diagnosis. The impact of content-based access to medical images is frequently reported, but existing systems are designed for only a particular diagnostic context. The challenge in medical informatics is to develop tools for analyzing the content of medical images and to represent them in a way that can be efficiently searched and compared by physicians. CAD is a concept established by taking into account equally the roles of physicians and computers. To build a successful computer-aided diagnostic system, all the relevant technologies, especially retrieval, need to be integrated in such a manner that effective and efficient pre-diagnosed cases with proven pathology are provided for the current case at the right time. In this paper, it is suggested that integrating content-based image retrieval (CBIR) into CAD can bring substantial benefits in medicine, especially in diagnosis. This approach is compared with other approaches by highlighting its advantages over them.

  5. Graphics Processing Unit-Accelerated Nonrigid Registration of MR Images to CT Images During CT-Guided Percutaneous Liver Tumor Ablations.

    PubMed

    Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G; Shekher, Raj; Hata, Nobuhiko

    2015-06-01

    Accuracy and speed are essential for the intraprocedural nonrigid magnetic resonance (MR) to computed tomography (CT) image registration in the assessment of tumor margins during CT-guided liver tumor ablations. Although both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique on the basis of volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of GPU-accelerated volume subdivision-based nonrigid registration technique to the conventional nonrigid B-spline registration technique. Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of ROI was used only for the B-spline technique. Registration accuracies (Dice similarity coefficient [DSC] and 95% Hausdorff distance [HD]) and total processing time including contouring of ROIs and computation were compared using a paired Student t test. Accuracies of the GPU-accelerated registrations and B-spline registrations, respectively, were 88.3 ± 3.7% versus 89.3 ± 4.9% (P = .41) for DSC and 13.1 ± 5.2 versus 11.4 ± 6.3 mm (P = .15) for HD. Total processing time of the GPU-accelerated registration and B-spline registration techniques was 88 ± 14 versus 557 ± 116 seconds (P < .000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (P = .71). The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time. The GPU
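
    The two accuracy metrics reported above, the Dice similarity coefficient and the 95% Hausdorff distance, can be computed as sketched below. Definitions of the 95% HD vary slightly between papers; the version here takes the 95th percentile of the pooled symmetric surface distances and is an assumption rather than the exact formula used in the study.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def dice(mask_a, mask_b):
        """Dice similarity coefficient between two boolean segmentation masks."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def hausdorff95(points_a, points_b):
        """95th-percentile symmetric Hausdorff distance between surface points."""
        d_ab, _ = cKDTree(points_b).query(points_a)   # A -> nearest point in B
        d_ba, _ = cKDTree(points_a).query(points_b)   # B -> nearest point in A
        return np.percentile(np.concatenate([d_ab, d_ba]), 95)
    ```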

  6. Impact of Local Sensors

    NASA Technical Reports Server (NTRS)

    Watson, Leela R.; Bauman, William H., III

    2008-01-01

    Forecasters at the 45th Weather Squadron (45 WS) use observations from the Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) wind tower network and the CCAFS (XMR) daily rawinsonde observations (RAOB) to issue and verify wind advisories and warnings for operations. These observations are also used by the National Weather Service (NWS) Spaceflight Meteorology Group (SMG) in Houston, Texas and the NWS Melbourne, Florida (NWS MLB) to initialize their locally-run mesoscale models. In addition, SMG uses these observations to support shuttle landings at the Shuttle Landing Facility (SLF). Due to impending budget cuts, some or all of the KSC/CCAFS wind towers on the east-central Florida mainland and the XMR RAOBs may be eliminated. The locations of the mainland towers and XMR RAOB site are shown in Figure I. The loss of these data may impact the forecast capability of the 45 WS, SMG and NWS MLB. The AMU was tasked to conduct an objective independent modeling study to help determine how important these observations are to the accuracy of the model output used by the forecasters. To accomplish this, the Applied Meteorology Unit (AMU) performed a sensitivity study using the Weather Research and Forecasting (WRF) model initialized with and without KSC/CCAFS wind tower and XMR RAOB data.

  7. Soil structure characterized using computed tomographic images

    Treesearch

    Zhanqi Cheng; Stephen H. Anderson; Clark J. Gantzer; J. W. Van Sambeek

    2003-01-01

    Fractal analysis of soil structure is a relatively new method for quantifying the effects of management systems on soil properties and quality. The objective of this work was to explore several methods of studying images to describe and quantify structure of soils under forest management. This research uses computed tomography and a topological method called Multiple...

  8. Intranasal dexmedetomidine for sedation for pediatric computed tomography imaging.

    PubMed

    Mekitarian Filho, Eduardo; Robinson, Fay; de Carvalho, Werther Brunow; Gilio, Alfredo Elias; Mason, Keira P

    2015-05-01

    This prospective observational pilot study evaluated the aerosolized intranasal route for dexmedetomidine as a safe, effective, and efficient option for infant and pediatric sedation for computed tomography imaging. The mean time to sedation was 13.4 minutes, with excellent image quality, no failed sedations, or significant adverse events. Registered with ClinicalTrials.gov: NCT01900405. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Development of computational small animal models and their applications in preclinical imaging and therapy research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Tianwu; Zaidi, Habib, E-mail: habib.zaidi@hcuge.ch; Geneva Neuroscience Center, Geneva University, Geneva CH-1205

    The development of multimodality preclinical imaging techniques and the rapid growth of realistic computer simulation tools have promoted the construction and application of computational laboratory animal models in preclinical research. Since the early 1990s, over 120 realistic computational animal models have been reported in the literature and used as surrogates to characterize the anatomy of actual animals for the simulation of preclinical studies involving the use of bioluminescence tomography, fluorescence molecular tomography, positron emission tomography, single-photon emission computed tomography, microcomputed tomography, magnetic resonance imaging, and optical imaging. Other applications include electromagnetic field simulation, ionizing and nonionizing radiation dosimetry, and the development and evaluation of new methodologies for multimodality image coregistration, segmentation, and reconstruction of small animal images. This paper provides a comprehensive review of the history and fundamental technologies used for the development of computational small animal models with a particular focus on their application in preclinical imaging as well as nonionizing and ionizing radiation dosimetry calculations. An overview is given of the overall process involved in the design of these models, including the fundamental elements used for the construction of different types of computational models, the identification of original anatomical data, the simulation tools used for solving various computational problems, and the applications of computational animal models in preclinical research. The authors also analyze the characteristics of categories of computational models (stylized, voxel-based, and boundary representation) and discuss the technical challenges faced at the present time as well as research needs in the future.

  10. Metal Artifact Suppression in Dental Cone Beam Computed Tomography Images Using Image Processing Techniques.

    PubMed

    Johari, Masoumeh; Abdollahzadeh, Milad; Esmaeili, Farzad; Sakhamanesh, Vahideh

    2018-01-01

    Dental cone beam computed tomography (CBCT) images suffer from severe metal artifacts. These artifacts degrade the quality of the acquired image and in some cases make it unsuitable for use. Streaking artifacts and cavities around teeth are the main reasons for this degradation. In this article, we propose a new artifact reduction algorithm with three parallel components. The first component extracts the teeth based on modeling the image histogram with a Gaussian mixture model. The streaking artifact reduction component reduces artifacts by converting the image into the polar domain and applying morphological filtering. The third component fills cavities through a simple but effective morphological filtering operation. Finally, the results of these three components are combined in a fusion step to create a visually good image that is more compatible with the human visual system. Results show that the proposed algorithm reduces artifacts in dental CBCT images and produces clean images.
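
    The first component described above, histogram modeling with a Gaussian mixture, can be sketched as follows; fitting the mixture to (possibly subsampled) pixel intensities and keeping the component with the highest mean as the teeth class is a plausible reading of the description, not the authors' exact procedure.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def extract_teeth_mask(image, n_components=3):
        """Segment the brightest intensity class (teeth) via a Gaussian mixture.

        A GMM is fitted to the grey-level distribution (here, directly to the
        pixel samples); pixels assigned to the component with the highest mean
        are kept as the teeth mask. Subsampling the pixels speeds up fitting.
        """
        samples = image.reshape(-1, 1).astype(float)
        gmm = GaussianMixture(n_components=n_components, random_state=0).fit(samples)
        labels = gmm.predict(samples).reshape(image.shape)
        teeth_label = np.argmax(gmm.means_.ravel())
        return labels == teeth_label
    ```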

  11. Game-Based Practice versus Traditional Practice in Computer-Based Writing Strategy Training: Effects on Motivation and Achievement

    ERIC Educational Resources Information Center

    Proske, Antje; Roscoe, Rod D.; McNamara, Danielle S.

    2014-01-01

    Achieving sustained student engagement with practice in computer-based writing strategy training can be a challenge. One potential solution is to foster engagement by embedding practice in educational games; yet there is currently little research comparing the effectiveness of game-based practice versus more traditional forms of practice. In this…

  12. Comparing the imaging performance of computed super resolution and magnification tomosynthesis

    NASA Astrophysics Data System (ADS)

    Maidment, Tristan D.; Vent, Trevor L.; Ferris, William S.; Wurtele, David E.; Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2017-03-01

    Computed super-resolution (SR) is a method of reconstructing images with pixels that are smaller than the detector element size; superior spatial resolution is achieved through the elimination of aliasing and alteration of the sampling function imposed by the reconstructed pixel aperture. By comparison, magnification mammography is a method of projection imaging that uses geometric magnification to increase spatial resolution. This study explores the development and application of magnification digital breast tomosynthesis (MDBT). Four different acquisition geometries are compared in terms of various image metrics. High-contrast spatial resolution was measured in various axes using a lead star pattern. A modified Defrise phantom was used to determine the low-frequency spatial resolution. An anthropomorphic phantom was used to simulate clinical imaging. Each experiment was conducted at three different magnifications: contact (1.04x), MAG1 (1.3x), and MAG2 (1.6x). All images were taken on our next generation tomosynthesis system, an in-house solution designed to optimize SR. It is demonstrated that both computed SR and MDBT (MAG1 and MAG2) provide improved spatial resolution over non-SR contact imaging. To achieve the highest resolution, SR and MDBT should be combined. However, MDBT is adversely affected by patient motion at higher magnifications. In addition, MDBT requires more radiation dose and delays diagnosis, since MDBT would be conducted upon recall. By comparison, SR can be conducted with the original screening data. In conclusion, this study demonstrates that computed SR and MDBT are both viable methods of imaging the breast.

  13. Infrared imaging - A validation technique for computational fluid dynamics codes used in STOVL applications

    NASA Technical Reports Server (NTRS)

    Hardman, R. R.; Mahan, J. R.; Smith, M. H.; Gelhausen, P. A.; Van Dalsem, W. R.

    1991-01-01

    The need for a validation technique for computational fluid dynamics (CFD) codes in STOVL applications has led to research efforts to apply infrared thermal imaging techniques to visualize gaseous flow fields. Specifically, a heated, free-jet test facility was constructed. The gaseous flow field of the jet exhaust was characterized using an infrared imaging technique in the 2 to 5.6 micron wavelength band as well as conventional pitot tube and thermocouple methods. These infrared images are compared to computer-generated images using the equations of radiative exchange based on the temperature distribution in the jet exhaust measured with the thermocouple traverses. Temperature and velocity measurement techniques, infrared imaging, and the computer model of the infrared imaging technique are presented and discussed. From the study, it is concluded that infrared imaging techniques coupled with the radiative exchange equations applied to CFD models are a valid method to qualitatively verify CFD codes used in STOVL applications.

  14. Identifying local structural states in atomic imaging by computer vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laanait, Nouamane; Ziatdinov, Maxim; He, Qian

    The availability of atomically resolved imaging modalities enables an unprecedented view into the local structural states of materials, which manifest themselves by deviations from the fundamental assumptions of periodicity and symmetry. Consequently, approaches that aim to extract these local structural states from atomic imaging data with minimal assumptions regarding the average crystallographic configuration of a material are indispensable to advances in structural and chemical investigations of materials. Here, we present an approach to identify and classify local structural states that is rooted in computer vision. This approach introduces a definition of a structural state that is composed of both local and non-local information extracted from atomically resolved images, and is wholly untethered from the familiar concepts of symmetry and periodicity. Instead, this approach relies on computer vision techniques such as feature detection, and concepts such as scale-invariance. We present the fundamental aspects of local structural state extraction and classification by application to simulated scanning transmission electron microscopy images, and analyze the robustness of this approach in the presence of common instrumental factors such as noise, limited spatial resolution, and weak contrast. Finally, we apply this computer vision-based approach for the unsupervised detection and classification of local structural states in an experimental electron micrograph of a complex oxides interface, and a scanning tunneling micrograph of a defect engineered multilayer graphene surface.

  15. Identifying local structural states in atomic imaging by computer vision

    DOE PAGES

    Laanait, Nouamane; Ziatdinov, Maxim; He, Qian; ...

    2016-11-02

    The availability of atomically resolved imaging modalities enables an unprecedented view into the local structural states of materials, which manifest themselves by deviations from the fundamental assumptions of periodicity and symmetry. Consequently, approaches that aim to extract these local structural states from atomic imaging data with minimal assumptions regarding the average crystallographic configuration of a material are indispensable to advances in structural and chemical investigations of materials. Here, we present an approach to identify and classify local structural states that is rooted in computer vision. This approach introduces a definition of a structural state that is composed of both local and non-local information extracted from atomically resolved images, and is wholly untethered from the familiar concepts of symmetry and periodicity. Instead, this approach relies on computer vision techniques such as feature detection, and concepts such as scale-invariance. We present the fundamental aspects of local structural state extraction and classification by application to simulated scanning transmission electron microscopy images, and analyze the robustness of this approach in the presence of common instrumental factors such as noise, limited spatial resolution, and weak contrast. Finally, we apply this computer vision-based approach for the unsupervised detection and classification of local structural states in an experimental electron micrograph of a complex oxides interface, and a scanning tunneling micrograph of a defect engineered multilayer graphene surface.

  16. Computer-generated 3D ultrasound images of the carotid artery

    NASA Technical Reports Server (NTRS)

    Selzer, Robert H.; Lee, Paul L.; Lai, June Y.; Frieden, Howard J.; Blankenhorn, David H.

    1989-01-01

    A method is under development to measure carotid artery lesions from a computer-generated three-dimensional ultrasound image. For each image, the position of the transducer in six coordinates (x, y, z, azimuth, elevation, and roll) is recorded and used to position each B-mode picture element in its proper spatial position in a three-dimensional memory array. After all B-mode images have been assembled in the memory, the three-dimensional image is filtered and resampled to produce a new series of parallel-plane two-dimensional images from which arterial boundaries are determined using edge tracking methods.
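
    A minimal sketch of the slice-placement step described above, under assumed geometry: each pixel of one tracked B-mode image is mapped through a rigid transform built from the six recorded pose coordinates and written into the nearest voxel of a 3D array. Pixel and voxel spacings, the Euler-angle convention, and nearest-voxel assignment are illustrative assumptions, not the authors' implementation.

```python
# Sketch: map the pixels of one tracked B-mode slice into a 3D voxel array
# using a 6-DOF pose (x, y, z, azimuth, elevation, roll); geometry is assumed.
import numpy as np
from scipy.spatial.transform import Rotation

def insert_slice(volume, bmode, pose, pixel_mm=0.2, voxel_mm=0.5):
    """Accumulate one 2D B-mode image into a 3D volume given its tracked pose."""
    x, y, z, azimuth, elevation, roll = pose
    R = Rotation.from_euler("zyx", [azimuth, elevation, roll], degrees=True).as_matrix()
    rows, cols = np.mgrid[0:bmode.shape[0], 0:bmode.shape[1]]
    # Pixel coordinates in the slice plane (lateral, axial, 0), in millimetres.
    plane = np.stack([cols.ravel() * pixel_mm,
                      rows.ravel() * pixel_mm,
                      np.zeros(bmode.size)])
    world = R @ plane + np.array([[x], [y], [z]])    # rigid transducer-to-volume transform
    idx = np.round(world / voxel_mm).astype(int)     # nearest-voxel assignment
    ok = np.all((idx >= 0) & (idx < np.array(volume.shape)[:, None]), axis=0)
    volume[idx[0, ok], idx[1, ok], idx[2, ok]] = bmode.ravel()[ok]
    return volume

volume = np.zeros((128, 128, 128), dtype=np.float32)
bmode = np.random.default_rng(1).random((200, 150)).astype(np.float32)  # stand-in slice
volume = insert_slice(volume, bmode, pose=(10.0, 12.0, 5.0, 15.0, -5.0, 2.0))
```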

  17. Computer-generated 3D ultrasound images of the carotid artery

    NASA Astrophysics Data System (ADS)

    Selzer, Robert H.; Lee, Paul L.; Lai, June Y.; Frieden, Howard J.; Blankenhorn, David H.

    A method is under development to measure carotid artery lesions from a computer-generated three-dimensional ultrasound image. For each image, the position of the transducer in six coordinates (x, y, z, azimuth, elevation, and roll) is recorded and used to position each B-mode picture element in its proper spatial position in a three-dimensional memory array. After all B-mode images have been assembled in the memory, the three-dimensional image is filtered and resampled to produce a new series of parallel-plane two-dimensional images from which arterial boundaries are determined using edge tracking methods.

  18. Classification of large-scale fundus image data sets: a cloud-computing framework.

    PubMed

    Roychowdhury, Sohini

    2016-08-01

    Large medical image data sets with high dimensionality require a substantial amount of computation time for data creation and data processing. This paper presents a novel generalized method that finds optimal image-based feature sets that reduce computational time complexity while maximizing overall classification accuracy for detection of diabetic retinopathy (DR). First, region-based and pixel-based features are extracted from fundus images for classification of DR lesions and vessel-like structures. Next, feature ranking strategies are used to distinguish the optimal classification feature sets. DR lesion and vessel classification accuracies are computed using the boosted decision tree and decision forest classifiers in the Microsoft Azure Machine Learning Studio platform, respectively. For images from the DIARETDB1 data set, 40 of its highest-ranked features are used to classify four DR lesion types with an average classification accuracy of 90.1% in 792 seconds. Also, for classification of red lesion regions and hemorrhages from microaneurysms, accuracies of 85% and 72% are observed, respectively. For images from the STARE data set, 40 high-ranked features can classify minor blood vessels with an accuracy of 83.5% in 326 seconds. Such cloud-based fundus image analysis systems can significantly enhance the borderline classification performances in automated screening systems.
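
    A minimal sketch of the rank-features-then-classify workflow on synthetic data, using scikit-learn's SelectKBest and GradientBoostingClassifier as stand-ins for the feature-ranking strategies and Azure Machine Learning Studio classifiers mentioned above; the top-40 cut-off follows the abstract, everything else is assumed.

```python
# Sketch: rank features, keep the top 40, train a boosted decision tree
# (synthetic stand-in for region- and pixel-based fundus features).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=500, n_features=120, n_informative=25,
                           random_state=0)
model = make_pipeline(SelectKBest(f_classif, k=40),        # feature-ranking step
                      GradientBoostingClassifier(random_state=0))
print("mean CV accuracy: %.3f" % cross_val_score(model, X, y, cv=5).mean())
```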

  19. Computed tomography versus digital subtraction angiography for the diagnosis of obscure gastrointestinal bleeding.

    PubMed

    Wildgruber, Moritz; Wrede, Christian E; Zorger, Niels; Müller-Wille, René; Hamer, Okka W; Zeman, Florian; Stroszczynski, Christian; Heiss, Peter

    2017-03-01

    The diagnostic yield of computed tomography angiography (CTA) compared to digital subtraction angiography (DSA) for major obscure gastrointestinal bleeding (OGIB) is not known. The aim of the study was to prospectively evaluate the diagnostic yield of CTA versus DSA for the diagnosis of major OGIB. The institutional review board approved the study and informed consent was obtained from each patient. Patients with major OGIB were prospectively enrolled to undergo both CTA and DSA. Two blinded radiologists each reviewed the CTA and DSA images retrospectively and independently. Contrast material extravasation into the gastrointestinal lumen was considered diagnostic for active bleeding. The primary end point of the study was the diagnostic yield, defined as the frequency with which a technique identified active bleeding or a potential bleeding lesion. The diagnostic yields of CTA and DSA were compared using McNemar's test. Twenty-four consecutive patients (11 men; median age 64 years) were included. CTA and DSA identified active bleeding or a potential bleeding lesion in 92% (22 of 24 patients; 95% CI 72%-99%) and 29% (7 of 24 patients; 95% CI 12%-49%) of patients, respectively (p<0.001). CTA and DSA identified active bleeding in 42% (10 of 24; 95% CI 22%-63%) and 21% (5 of 24; 95% CI 7%-42%) of patients, respectively (p=0.06). Due to its lower invasiveness and higher diagnostic yield, CTA should be favored over DSA for the diagnosis of major OGIB. Copyright © 2016. Published by Elsevier B.V.
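
    A minimal sketch of the paired comparison reported above: an exact McNemar test computed from the discordant-pair counts; the counts in the example are hypothetical, not taken from the study.

```python
# Sketch: exact McNemar test on paired CTA/DSA results; b and c are the
# discordant-pair counts (hypothetical values, not the study's data).
from scipy.stats import binom

def mcnemar_exact(b, c):
    """Two-sided exact McNemar p-value for b and c discordant pairs."""
    n, k = b + c, min(b, c)
    return min(1.0, 2 * binom.cdf(k, n, 0.5))

# b: positive on CTA only; c: positive on DSA only (hypothetical counts).
print("p = %.4f" % mcnemar_exact(b=15, c=0))
```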

  20. Intelligent Image Based Computer Aided Education (IICAE)

    NASA Astrophysics Data System (ADS)

    David, Amos A.; Thiery, Odile; Crehange, Marion

    1989-03-01

    Artificial Intelligence (AI) has found its way into Computer Aided Education (CAE), and several systems have been constructed to demonstrate its advantages. We believe that images (graphic or real) play an important role in learning. However, the use of images, beyond their use as illustration, makes it necessary to have applications such as AI. We shall develop the application of AI in an image-based CAE and briefly present the system under construction to demonstrate our concept. We shall also elaborate a methodology for constructing such a system. Furthermore, we shall briefly present the pedagogical and psychological activities in a learning process. Under the pedagogical and psychological aspect of learning, we shall develop areas such as the importance of the image in learning, both as a pedagogical object and as a means for obtaining psychological information about the learner. We shall develop the learner's model, its use, what to build into it and how. Under the application of AI in an image-based CAE, we shall develop the importance of AI in exploiting the knowledge base in the learning environment and its application as a means of implementing pedagogical strategies.

  1. A configurable distributed high-performance computing framework for satellite's TDI-CCD imaging simulation

    NASA Astrophysics Data System (ADS)

    Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang

    2010-11-01

    This paper presents a configurable distributed high-performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to accommodate multiple algorithms, which helps decrease the simulation time at low expense. Imaging simulation for a TDI-CCD mounted on a satellite contains four processes: 1) degradation due to the atmosphere, 2) degradation due to the optical system, 3) degradation and re-sampling due to the TDI-CCD electronics, and 4) data integration. Processes 1) to 3) use diverse data-intensive algorithms such as FFT, convolution and Lagrange interpolation, which require powerful CPUs. Even with an Intel Xeon X5550 processor, a conventional serial processing method takes more than 30 hours for a simulation whose resulting image size is 1500 * 1462. A literature study found no mature distributed HPC framework in this field. Here we developed a distributed computing framework for TDI-CCD imaging simulation based on WCF[1], which uses a Client/Server (C/S) architecture and invokes the free CPU resources in the LAN. The server pushes the tasks of processes 1) to 3) to the free computing capacity. Ultimately, we achieved HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, this framework reduced simulation time by about 74%. Adding more asymmetric nodes to the computing network decreased the time accordingly. In conclusion, this framework can provide practically unlimited computation capacity provided that the network and the task management server are affordable, and it offers a new HPC solution for TDI-CCD imaging simulation and similar applications.
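
    A minimal sketch of the two ideas named above, a strategy pattern for interchangeable degradation algorithms and distribution of independent work units to idle processors, using a local Python process pool as a stand-in for the WCF-based client/server framework; the degradation functions and the tiling scheme are purely illustrative.

```python
# Sketch: strategy pattern for the degradation stages plus distribution of
# independent image tiles to worker processes (illustrative algorithms only).
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def atmosphere_blur(tile):              # one interchangeable degradation "strategy"
    return np.fft.ifft2(np.fft.fft2(tile) * 0.9).real

def optics_blur(tile):                  # another strategy with the same interface
    return (tile + np.roll(tile, 1, axis=0)) / 2.0

def simulate_tile(tile, strategies):
    for stage in strategies:            # apply the configured degradation chain
        tile = stage(tile)
    return tile

if __name__ == "__main__":
    image = np.random.default_rng(0).random((1500, 1462))    # result-image size from the paper
    tiles = np.array_split(image, 8)                          # independent work units
    chain = [atmosphere_blur, optics_blur]
    with ProcessPoolExecutor() as pool:                       # local stand-in for LAN nodes
        result = np.vstack(list(pool.map(simulate_tile, tiles, [chain] * len(tiles))))
    print(result.shape)
```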

  2. Computed tomography imaging and angiography - principles.

    PubMed

    Kamalian, Shervin; Lev, Michael H; Gupta, Rajiv

    2016-01-01

    The evaluation of patients with diverse neurologic disorders was forever changed in the summer of 1973, when the first commercial computed tomography (CT) scanners were introduced. Until then, the detection and characterization of intracranial or spinal lesions could only be inferred by limited spatial resolution radioisotope scans, or by the patterns of tissue and vascular displacement on invasive pneumoencephalography and direct carotid puncture catheter arteriography. Even the earliest-generation CT scanners - which required tens of minutes for the acquisition and reconstruction of low-resolution images (128×128 matrix) - could, based on density, noninvasively distinguish infarct, hemorrhage, and other mass lesions with unprecedented accuracy. Iodinated, intravenous contrast added further sensitivity and specificity in regions of blood-brain barrier breakdown. The advent of rapid multidetector row CT scanning in the early 1990s created renewed enthusiasm for CT, with CT angiography largely replacing direct catheter angiography. More recently, iterative reconstruction postprocessing techniques have made possible high spatial resolution, reduced noise, very low radiation dose CT scanning. The speed, spatial resolution, contrast resolution, and low radiation dose capability of present-day scanners have also facilitated dual-energy imaging which, like magnetic resonance imaging, for the first time, has allowed tissue-specific CT imaging characterization of intracranial pathology. © 2016 Elsevier B.V. All rights reserved.

  3. Computational Methods for Nanoscale X-ray Computed Tomography Image Analysis of Fuel Cell and Battery Materials

    NASA Astrophysics Data System (ADS)

    Kumar, Arjun S.

    Over the last fifteen years, there has been a rapid growth in the use of high resolution X-ray computed tomography (HRXCT) imaging in material science applications. We use it at nanoscale resolutions up to 50 nm (nano-CT) for key research problems in large scale operation of polymer electrolyte membrane fuel cells (PEMFC) and lithium-ion (Li-ion) batteries in automotive applications. PEMFC are clean energy sources that electrochemically react with hydrogen gas to produce water and electricity. To reduce their costs, capturing their electrode nanostructure has become significant in modeling and optimizing their performance. For Li-ion batteries, a key challenge in increasing their scope for the automotive industry is Li metal dendrite growth. Li dendrites are structures of lithium with 100 nm features of interest that can grow chaotically within a battery and eventually lead to a short-circuit. HRXCT imaging is an effective diagnostics tool for such applications as it is a non-destructive method of capturing the 3D internal X-ray absorption coefficient of materials from a large series of 2D X-ray projections. Despite a recent push to use HRXCT for quantitative information on material samples, there is a relative dearth of computational tools in nano-CT image processing and analysis. Hence, we focus on developing computational methods for nano-CT image analysis of fuel cell and battery materials as required by the limitations in material samples and the imaging environment. The first problem we address is the segmentation of nano-CT Zernike phase contrast images. Nano-CT instruments are equipped with Zernike phase contrast optics to distinguish materials with a low difference in X-ray absorption coefficient by phase shifting the X-ray wave that is not diffracted by the sample. However, it creates image artifacts that hinder the use of traditional image segmentation techniques. To restore such images, we set up an inverse problem by modeling the X-ray phase contrast

  4. Computational and design methods for advanced imaging

    NASA Astrophysics Data System (ADS)

    Birch, Gabriel C.

    This dissertation merges the optical design and computational aspects of imaging systems to create novel devices that solve engineering problems in optical science and attempts to expand the solution space available to the optical designer. This dissertation is divided into two parts: the first discusses a new active illumination depth sensing modality, while the second part discusses a passive illumination system called plenoptic, or lightfield, imaging. The new depth sensing modality introduced in part one is called depth through controlled aberration. This technique illuminates a target with a known, aberrated projected pattern and takes an image using a traditional, unmodified imaging system. Knowing how the added aberration in the projected pattern changes as a function of depth, we are able to quantitatively determine the depth of a series of points from the camera. A major advantage this method permits is the ability for illumination and imaging axes to be coincident. Plenoptic cameras capture both spatial and angular data simultaneously. This dissertation presents a new set of parameters that permit the design and comparison of plenoptic devices outside the traditionally published plenoptic 1.0 and plenoptic 2.0 configurations. Additionally, a series of engineering advancements are presented, including full system raytraces of raw plenoptic images, Zernike compression techniques of raw image files, and non-uniform lenslet arrays to compensate for plenoptic system aberrations. Finally, a new snapshot imaging spectrometer is proposed based on the plenoptic configuration.

  5. Novel Image Quality Control Systems(Add-On). Innovative Computational Methods for Inverse Problems in Optical and SAR Imaging

    DTIC Science & Technology

    2007-02-28

    Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex Medium Response, International Journal of Imaging Systems and...1767-1782, 2006. 31. Z. Mu, R. Plemmons, and P. Santago. Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex...rigorous mathematical and computational research on inverse problems in optical imaging of direct interest to the Army and also the intelligence agencies

  6. Metal Artifact Suppression in Dental Cone Beam Computed Tomography Images Using Image Processing Techniques

    PubMed Central

    Johari, Masoumeh; Abdollahzadeh, Milad; Esmaeili, Farzad; Sakhamanesh, Vahideh

    2018-01-01

    Background: Dental cone beam computed tomography (CBCT) images suffer from severe metal artifacts. These artifacts degrade the quality of the acquired image and in some cases make it unsuitable to use. Streaking artifacts and cavities around teeth are the main reasons of degradation. Methods: In this article, we have proposed a new artifact reduction algorithm which has three parallel components. The first component extracts teeth based on modeling of the image histogram with a Gaussian mixture model. The streaking artifact reduction component reduces artifacts by converting the image into the polar domain and applying morphological filtering. The third component fills cavities through a simple but effective morphological filtering operation. Results: Finally, the results of these three components are combined in a fusion step to create a visually good image which is more compatible with the human visual system. Conclusions: Results show that the proposed algorithm reduces artifacts of dental CBCT images and produces clean images. PMID:29535920
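
    A minimal sketch of the polar-domain filtering idea mentioned above, assuming the metal object sits at a known center: the slice is resampled into polar coordinates, a grey-level morphological opening is applied along the angular axis to suppress radial streaks, and the result is mapped back to Cartesian coordinates. This illustrates the concept, not the published algorithm.

```python
# Sketch: polar resampling + angular morphological opening for streak suppression
# (assumed center, illustrative sampling grid and structuring element size).
import numpy as np
from scipy.ndimage import map_coordinates, grey_opening

def to_polar(img, center, n_r=256, n_t=360):
    r = np.linspace(0, min(img.shape) / 2 - 1, n_r)
    t = np.linspace(0, 2 * np.pi, n_t, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    coords = [center[0] + rr * np.sin(tt), center[1] + rr * np.cos(tt)]
    return map_coordinates(img, coords, order=1), r, t

def to_cartesian(polar, center, shape, r, t):
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    rad = np.hypot(yy - center[0], xx - center[1])
    ang = np.mod(np.arctan2(yy - center[0], xx - center[1]), 2 * np.pi)
    coords = [rad / r[-1] * (len(r) - 1), ang / (2 * np.pi) * len(t)]
    return map_coordinates(polar, coords, order=1, mode="nearest")

slice_img = np.random.default_rng(0).random((256, 256))   # stand-in CBCT slice
center = (128, 128)                                        # assumed metal location
polar, r, t = to_polar(slice_img, center)
opened = grey_opening(polar, size=(1, 9))                  # filter along the angular axis
restored = to_cartesian(opened, center, slice_img.shape, r, t)
```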

  7. Adaptive noise correction of dual-energy computed tomography images.

    PubMed

    Maia, Rafael Simon; Jacob, Christian; Hara, Amy K; Silva, Alvin C; Pavlicek, William; Mitchell, J Ross

    2016-04-01

    Noise reduction in material density images is a necessary preprocessing step for the correct interpretation of dual-energy computed tomography (DECT) images. In this paper we describe a new method based on local adaptive processing to reduce noise in DECT images. An adaptive neighborhood Wiener (ANW) filter was implemented and customized to use local characteristics of material density images. The ANW filter employs a three-level wavelet approach, combined with the application of an anisotropic diffusion filter. Material density images and virtual monochromatic images are noise corrected with two resulting noise maps. The algorithm was applied and quantitatively evaluated in a set of 36 images. From that set of images, three are shown here, and nine more are shown in the online supplementary material. Processed images had higher signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) than the raw material density images. The average improvements in SNR and CNR for the material density images were 56.5% and 54.75%, respectively. We developed a new DECT noise reduction algorithm. We demonstrate through a series of quantitative analyses that the algorithm improves the quality of material density images and virtual monochromatic images.
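
    For reference, a minimal sketch of the two evaluation metrics quoted above, signal-to-noise ratio and contrast-to-noise ratio, computed from hypothetical regions of interest; the ROI choices and the toy denoising step are illustrative only, not the paper's data or filter.

```python
# Sketch: SNR and CNR computed from hypothetical regions of interest
# in a material density image (toy denoising, illustrative values).
import numpy as np

def snr(roi):
    return roi.mean() / roi.std()

def cnr(roi_a, roi_b):
    return abs(roi_a.mean() - roi_b.mean()) / np.sqrt(roi_a.var() + roi_b.var())

rng = np.random.default_rng(0)
raw = rng.normal(100.0, 20.0, (64, 64))        # noisy "material density" patch
denoised = raw - (raw - raw.mean()) * 0.5      # toy noise reduction
background = rng.normal(60.0, 20.0, (64, 64))

print("SNR raw -> denoised: %.1f -> %.1f" % (snr(raw), snr(denoised)))
print("CNR (denoised vs background): %.1f" % cnr(denoised, background))
```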

  8. Development of a Simple Image Processing Application that Makes Abdominopelvic Tumor Visible on Positron Emission Tomography/Computed Tomography Image.

    PubMed

    Pandey, Anil Kumar; Saroha, Kartik; Sharma, Param Dev; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-01-01

    In this study, we have developed a simple image processing application in MATLAB that uses suprathreshold stochastic resonance (SSR) and helps the user to visualize abdominopelvic tumors on the exported prediuretic positron emission tomography/computed tomography (PET/CT) images. A brainstorming session was conducted for requirement analysis for the program. It was decided that the program should load the screen-captured PET/CT images and then produce output images in a window with a slider control that enables the user to view the image that best visualizes the tumor, if present. The program was implemented on a personal computer using Microsoft Windows and MATLAB R2013b. The program has an option for the user to select the input image. For the selected image, it displays output images generated using SSR in a separate window having a slider control. The slider control enables the user to view images and select one which seems to provide the best visualization of the area(s) of interest. The developed application enables the user to select, process, and view output images in the process of utilizing SSR to detect the presence of abdominopelvic tumor on a prediuretic PET/CT image.
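
    A minimal Python sketch (the application itself was written in MATLAB) of the suprathreshold stochastic resonance idea described above: independent noise is added to many copies of the input image, each copy is thresholded, and the binary outputs are averaged. The threshold, noise level, and number of copies are illustrative assumptions.

```python
# Sketch: suprathreshold stochastic resonance (SSR) on an image
# (illustrative parameters, stand-in input image).
import numpy as np

def ssr(image, threshold, noise_sigma, n_copies=64, seed=0):
    rng = np.random.default_rng(seed)
    out = np.zeros_like(image, dtype=float)
    for _ in range(n_copies):
        noisy = image + rng.normal(0.0, noise_sigma, image.shape)
        out += (noisy > threshold)          # one-bit quantizer per noisy copy
    return out / n_copies                   # averaged output shown to the user

image = np.random.default_rng(1).random((128, 128))    # stand-in PET/CT screenshot
enhanced = ssr(image, threshold=0.7, noise_sigma=0.2)
```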

  9. The application of computer image analysis in life sciences and environmental engineering

    NASA Astrophysics Data System (ADS)

    Mazur, R.; Lewicki, A.; Przybył, K.; Zaborowicz, M.; Koszela, K.; Boniecki, P.; Mueller, W.; Raba, B.

    2014-04-01

    The main aim of the article was to present research on the application of computer image analysis in Life Science and Environmental Engineering. The authors used different methods of computer image analysis in the development of an innovative biotest for modern biomonitoring of water quality. The created tools were based on living organisms, namely the bioindicators Lemna minor L. and Hydra vulgaris Pallas, as well as computer image analysis methods for assessing negative reactions during the exposure of the organisms to selected water toxicants. All of these methods belong to acute toxicity tests and are particularly essential in the ecotoxicological assessment of water pollutants. The developed bioassays can be used not only in scientific research but are also applicable in environmental engineering and agriculture in the study of adverse effects on water quality of various compounds used in agriculture and industry.

  10. Computational electromagnetics: the physics of smooth versus oscillatory fields.

    PubMed

    Chew, W C

    2004-03-15

    This paper starts by discussing the difference in the physics between solutions to Laplace's equation (static) and Maxwell's equations for dynamic problems (Helmholtz equation). Their differing physical characters are illustrated by how the two fields convey information away from their source point. The paper elucidates the fact that their differing physical characters affect the use of Laplacian field and Helmholtz field in imaging. They also affect the design of fast computational algorithms for electromagnetic scattering problems. Specifically, a comparison is made between fast algorithms developed using wavelets, the simple fast multipole method, and the multi-level fast multipole algorithm for electrodynamics. The impact of the physical characters of the dynamic field on the parallelization of the multi-level fast multipole algorithm is also discussed. The relationship of diagonalization of translators to group theory is presented. Finally, future areas of research for computational electromagnetics are described.

  11. Strategic Computing Computer Vision: Taking Image Understanding To The Next Plateau

    NASA Astrophysics Data System (ADS)

    Simpson, R. L., Jr.

    1987-06-01

    The overall objective of the Strategic Computing (SC) Program of the Defense Advanced Research Projects Agency (DARPA) is to develop and demonstrate a new generation of machine intelligence technology which can form the basis for more capable military systems in the future and also maintain a position of world leadership for the US in computer technology. Begun in 1983, SC represents a focused research strategy for accelerating the evolution of new technology and its rapid prototyping in realistic military contexts. Among the very ambitious demonstration prototypes being developed within the SC Program are: 1) the Pilot's Associate, which will aid the pilot in route planning, aerial target prioritization, evasion of missile threats, and aircraft emergency safety procedures during flight; 2) two battle management projects, the first of which, for the Army and just getting started, is the AirLand Battle Management (ALBM) program, which will use knowledge-based systems technology to assist in the generation and evaluation of tactical options and plans at the Corps level; 3) the other, more established, battle management program, for the Navy, is the Fleet Command Center Battle Management Program (FCCBMP) at Pearl Harbor. The FCCBMP is employing knowledge-based systems and natural language technology in an evolutionary testbed situated in an operational command center to demonstrate and evaluate intelligent decision-aids which can assist in the evaluation of fleet readiness and explore alternatives during contingencies; and 4) the Autonomous Land Vehicle (ALV), which integrates in a major robotic testbed the technologies for dynamic image understanding, knowledge-based route planning with replanning during execution, hosted on new advanced parallel architectures. The goal of the Strategic Computing computer vision technology base (SCVision) is to develop generic technology that will enable the construction of complete, robust, high performance image understanding systems to support a wide

  12. Computer-aided diagnostic detection system of venous beading in retinal images

    NASA Astrophysics Data System (ADS)

    Yang, Ching-Wen; Ma, DyeJyun; Chao, ShuennChing; Wang, ChuinMu; Wen, Chia-Hsien; Lo, ChienShun; Chung, Pau-Choo; Chang, Chein-I.

    2000-05-01

    The detection of venous beading in retinal images provides an early sign of diabetic retinopathy and plays an important role as a preprocessing step in diagnosing ocular diseases. We present a computer-aided diagnostic system to automatically detect venous beading of blood vessels. It comprises two modules, referred to as the blood vessel extraction module and the venous beading detection module. The former uses a bell-shaped Gaussian kernel with 12 azimuths to extract blood vessels, while the latter applies a neural network-based shape cognitron to detect venous beading among the extracted blood vessels for diagnosis. Both modules are fully computer-automated. To evaluate the proposed system, 61 retinal images (32 beaded and 29 normal) were used for performance evaluation.
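
    A minimal sketch of an oriented matched-filter bank in the spirit of the vessel extraction module described above: an elongated, zero-mean Gaussian kernel is rotated to 12 azimuths and the per-pixel maximum response is kept. The kernel size, sigma, and percentile threshold are illustrative assumptions, not the published parameters.

```python
# Sketch: 12-azimuth Gaussian matched filters for vessel enhancement
# (illustrative kernel and threshold, stand-in retinal image).
import numpy as np
from scipy.ndimage import convolve, rotate

def gaussian_vessel_kernel(length=15, sigma=2.0):
    y, x = np.mgrid[-length // 2:length // 2 + 1, -length // 2:length // 2 + 1]
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2)) * (np.abs(y) <= length // 2)
    return kernel - kernel.mean()                   # zero-mean matched filter

def vessel_response(image, n_orientations=12):
    base = gaussian_vessel_kernel()
    responses = [convolve(image, rotate(base, angle, reshape=False))
                 for angle in np.arange(0, 180, 180 / n_orientations)]
    return np.max(responses, axis=0)                # best response over azimuths

retina = np.random.default_rng(0).random((128, 128))     # stand-in retinal image
response = vessel_response(retina)
vessels = response > np.percentile(response, 95)          # crude segmentation
```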

  13. Computer-Assisted Digital Image Analysis of Plus Disease in Retinopathy of Prematurity.

    PubMed

    Kemp, Pavlina S; VanderVeen, Deborah K

    2016-01-01

    The objective of this study is to review the current state and role of computer-assisted analysis in the diagnosis of plus disease in retinopathy of prematurity. Diagnosis and documentation of retinopathy of prematurity are increasingly being supplemented by digital imaging. The incorporation of computer-aided techniques has the potential to add valuable information and standardization regarding the presence of plus disease, an important criterion in deciding the necessity of treatment of vision-threatening retinopathy of prematurity. A review of the literature found that several techniques have been published examining the process and role of computer-aided analysis of plus disease in retinopathy of prematurity. These techniques use semiautomated image analysis to evaluate retinal vascular dilation and tortuosity, using calculated parameters to evaluate the presence or absence of plus disease. These values are then compared with expert consensus. The study concludes that computer-aided image analysis has the potential to use quantitative and objective criteria to act as a supplemental tool in evaluating for plus disease in the setting of retinopathy of prematurity.
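
    As a concrete illustration of one vessel parameter mentioned above, a minimal sketch of a common tortuosity index, the ratio of a traced centerline's arc length to its chord length; the centerline here is synthetic, and the specific index used by any given system may differ.

```python
# Sketch: arc-length / chord-length tortuosity of a traced vessel centerline
# (synthetic centerline, for illustration only).
import numpy as np

def tortuosity(points):
    """points: (N, 2) array of ordered centerline coordinates."""
    arc = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / chord

t = np.linspace(0, 4 * np.pi, 200)
centerline = np.column_stack([t, 2.0 * np.sin(t)])   # wiggly synthetic vessel
print("tortuosity index: %.2f" % tortuosity(centerline))
```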

  14. Use of radiography, computed tomography and magnetic resonance imaging for evaluation of navicular syndrome in the horse.

    PubMed

    Widmer, W R; Buckwalter, K A; Fessler, J F; Hill, M A; VanSickle, D C; Ivancevich, S

    2000-01-01

    Radiographic evaluation of navicular syndrome is problematic because of its inconsistent correlation with clinical signs. Scintigraphy often yields false positive and false negative results and diagnostic ultrasound is of limited value. Therefore, we assessed the use of computed tomography and magnetic resonance imaging in a horse with clinical and radiographic signs of navicular syndrome. Cadaver specimens were examined with spiral computed tomographic and high-field magnetic resonance scanners and images were correlated with pathologic findings. Radiographic changes consisted of bony remodeling, which included altered synovial fossae, increased medullary opacity, cyst formation and shape change. These osseous changes were more striking and more numerous on computed tomographic and magnetic resonance images. They were most clearly defined with computed tomography. Many osseous changes seen with computed tomography and magnetic resonance imaging were not radiographically evident. Histologically confirmed soft tissue alterations of the deep digital flexor tendon, impar ligament and marrow were identified with magnetic resonance imaging, but not with conventional radiography. Because of their multiplanar capability and tomographic nature, computed tomography and magnetic resonance imaging surpass conventional radiography for navicular imaging, facilitating earlier, more accurate diagnosis. Current advances in imaging technology should make these imaging modalities available to equine practitioners in the future.

  15. A large-scale solar dynamics observatory image dataset for computer vision applications.

    PubMed

    Kucuk, Ahmet; Banda, Juan M; Angryk, Rafal A

    2017-01-01

    The National Aeronautics and Space Administration (NASA) Solar Dynamics Observatory (SDO) mission has given us unprecedented insight into the Sun's activity. By capturing approximately 70,000 images a day, this mission has created one of the richest and biggest repositories of solar image data available to mankind. With such massive amounts of information, researchers have been able to produce great advances in detecting solar events. In this resource, we compile SDO solar data into a single repository in order to provide the computer vision community with a standardized and curated large-scale dataset of several hundred thousand solar events found on high resolution solar images. This publicly available resource, along with the generation source code, will accelerate computer vision research on NASA's solar image data by reducing the amount of time spent performing data acquisition and curation from the multiple sources we have compiled. By improving the quality of the data with thorough curation, we anticipate wider adoption and interest from both the computer vision and solar physics communities.

  16. Single instruction computer architecture and its application in image processing

    NASA Astrophysics Data System (ADS)

    Laplante, Phillip A.

    1992-03-01

    A single processing computer system using only half-adder circuits is described. In addition, it is shown that only a single hard-wired instruction is needed in the control unit to obtain a complete instruction set for this general purpose computer. Such a system has several advantages. First, it is intrinsically a RISC machine--in fact the 'ultimate RISC' machine. Second, because only a single type of logic element is employed, the entire computer system can be easily realized on a single, highly integrated chip. Finally, due to the homogeneous nature of the computer's logic elements, the computer has possible implementations as an optical or chemical machine. This in turn suggests possible paradigms for neural computing and artificial intelligence. After showing how we can implement a full-adder, min, max and other operations using the half-adder, we use an array of such full-adders to implement the dilation operation for two black and white images. Next we implement the erosion operation of two black and white images using a relative complement function and the properties of erosion and dilation. This approach was inspired by papers by van der Poel, in which a single instruction is used to furnish a complete set of general purpose instructions, and by Böhm and Jacopini, where it is shown that any problem can be solved using a Turing machine with one entry and one exit.

  17. Personalized, relevance-based Multimodal Robotic Imaging and augmented reality for Computer Assisted Interventions.

    PubMed

    Navab, Nassir; Hennersperger, Christoph; Frisch, Benjamin; Fürst, Bernhard

    2016-10-01

    In the last decade, many researchers in medical image computing and computer assisted interventions across the world focused on the development of the Virtual Physiological Human (VPH), aiming at changing the practice of medicine from classification and treatment of diseases to that of modeling and treating patients. These projects resulted in major advancements in segmentation, registration, morphological, physiological and biomechanical modeling based on state-of-the-art medical imaging as well as other sensory data. However, a major issue which has not yet come into focus is personalizing intra-operative imaging, allowing for optimal treatment. In this paper, we discuss the personalization of the imaging and visualization process with particular focus on satisfying the challenging requirements of computer assisted interventions. We discuss such requirements and review a series of scientific contributions made by our research team to tackle some of these major challenges. Copyright © 2016. Published by Elsevier B.V.

  18. Observation Denial and Performance of a Local Mesoscale Model

    NASA Technical Reports Server (NTRS)

    Watson, Leela R.; Bauman, William H., III

    2009-01-01

    Forecasters at the 45th Weather Squadron (45 WS) use observations from the Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) wind tower network and the CCAFS (XMR) daily rawinsonde observations (RAOB) to issue and verify wind advisories and warnings for operations. These observations are also used by the National Weather Service (NWS) Spaceflight Meteorology Group (SMG) in Houston, Texas and the NWS Melbourne, Florida (NWS MLB) to initialize their locally-run mesoscale models. In addition, SMG uses these observations to support shuttle landings at the Shuttle Landing Facility (SLF). Due to impending budget cuts, some or all of the wind towers on the east-central Florida mainland and the XMR RAOBs may be eliminated. The locations of the mainland towers and XMR RAOB site are shown in Figure 1. The loss of these data may impact the forecast capability of the 45 WS, SMG and NWS MLB.

  19. Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.

    PubMed

    Park, Jongin; Wi, Seok-Min; Lee, Jin S

    2016-02-01

    Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer, which minimizes the power of the beamformed output while maintaining the response from the direction of interest constant. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L^3) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix into a scalar matrix σI, from which we subsequently obtain the apodization weights and the beamformed output without computing the matrix inverse. To do that, the QR decomposition algorithm is used, which can be executed at low cost; therefore, the computational complexity is reduced to O(L^2). In addition, our approach is mathematically equivalent to the conventional MV beamformer, thereby showing equivalent performance. The simulation and experimental results support the validity of our approach.
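
    A minimal sketch of the general idea of obtaining minimum-variance apodization weights without an explicit matrix inverse, by solving R w = a through a QR factorization; this illustrates the concept with toy data and does not reproduce the paper's scalar-matrix transformation or its O(L^2) derivation.

```python
# Sketch: MV apodization weights via QR factorization of the covariance matrix
# (toy channel data, diagonal loading and broadside steering assumed).
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
L = 16                                                   # subarray size
snapshots = rng.standard_normal((L, 200))                # toy channel data
R = snapshots @ snapshots.T / 200 + 1e-3 * np.eye(L)     # diagonally loaded covariance
a = np.ones(L)                                           # steering vector (broadside)

Q, Rq = np.linalg.qr(R)                                  # R = Q Rq
w = solve_triangular(Rq, Q.T @ a)                        # solves R w = a without inv(R)
w /= a @ w                                               # MV normalization: a^T w = 1
print("MV weights sum to %.3f" % w.sum())
```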

  20. Complementary aspects of spatial resolution and signal-to-noise ratio in computational imaging

    NASA Astrophysics Data System (ADS)

    Gureyev, T. E.; Paganin, D. M.; Kozlov, A.; Nesterets, Ya. I.; Quiney, H. M.

    2018-05-01

    A generic computational imaging setup is considered which assumes sequential illumination of a semitransparent object by an arbitrary set of structured coherent illumination patterns. For each incident illumination pattern, all transmitted light is collected by a photon-counting bucket (single-pixel) detector. The transmission coefficients measured in this way are then used to reconstruct the spatial distribution of the object's projected transmission. It is demonstrated that the square of the spatial resolution of such a setup is usually equal to the ratio of the image area to the number of linearly independent illumination patterns. If the noise in the measured transmission coefficients is dominated by photon shot noise, then the ratio of the square of the mean signal to the noise variance is proportional to the ratio of the mean number of registered photons to the number of illumination patterns. The signal-to-noise ratio in a reconstructed transmission distribution is always lower if the illumination patterns are nonorthogonal, because of spatial correlations in the measured data. Examples of imaging methods relevant to the presented analysis include conventional imaging with a pixelated detector, computational ghost imaging, compressive sensing, super-resolution imaging, and computed tomography.
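
    A small worked example of the two scaling relations stated above, with assumed numbers: the squared spatial resolution equals the image area divided by the number of linearly independent illumination patterns, and the squared signal-to-noise ratio scales as the mean number of registered photons per pattern.

```python
# Worked example of the stated scaling relations (illustrative numbers only).
area_um2 = 100.0 * 100.0        # imaged area (assumed), in square micrometres
n_patterns = 10_000             # linearly independent illumination patterns
mean_photons = 1.0e8            # total registered photons (assumed)

resolution_um = (area_um2 / n_patterns) ** 0.5     # (resolution)^2 = area / N
snr_squared_scale = mean_photons / n_patterns      # SNR^2 ~ photons per pattern

print("expected resolution ~ %.1f um" % resolution_um)
print("SNR^2 proportional to %.0f photons/pattern" % snr_squared_scale)
```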

  1. Computer-assisted versus oral-and-written family history taking for identifying people with elevated risk of type 2 diabetes mellitus.

    PubMed

    Pappas, Yannis; Wei, Igor; Car, Josip; Majeed, Azeem; Sheikh, Aziz

    2011-12-07

    Diabetes is a chronic illness characterised by insulin resistance or deficiency, resulting in elevated glycosylated haemoglobin A1c (HbA1c) levels. Because diabetes tends to run in families, the collection of family history data is an important tool for identifying people with elevated risk of type 2 diabetes. Traditionally, oral-and-written data collection methods are employed, but computer-assisted history taking systems (CAHTS) are increasingly used. Although CAHTS were first described in the 1960s, there remains uncertainty about the impact of these methods on family history taking, clinical care and patient outcomes such as health-related quality of life. To assess the effectiveness of computer-assisted versus oral-and-written family history taking for identifying people with elevated risk of developing type 2 diabetes mellitus. We searched The Cochrane Library (issue 6, 2011), MEDLINE (January 1985 to June 2011), EMBASE (January 1980 to June 2011) and CINAHL (January 1981 to June 2011). Reference lists of obtained articles were also searched, and no limits were imposed on language or publication status. Randomised controlled trials of computer-assisted versus oral-and-written history taking in adult participants (16 years and older). Two authors independently scanned the title and abstract of retrieved articles. Potentially relevant articles were investigated as full text. Studies that met the inclusion criteria were abstracted for relevant population and intervention characteristics, with any disagreements resolved by discussion or by a third party. Risk of bias was similarly assessed independently. We found no controlled trials of computer-assisted versus oral-and-written family history taking for identifying people with elevated risk of type 2 diabetes mellitus. There is a need to develop an evidence base to support the effective development and use of computer-assisted history taking systems in this area of practice. In the absence of evidence on effectiveness

  2. Computer-aided interpretation approach for optical tomographic images

    NASA Astrophysics Data System (ADS)

    Klose, Christian D.; Klose, Alexander D.; Netz, Uwe J.; Scheel, Alexander K.; Beuthan, JĂĽrgen; Hielscher, Andreas H.

    2010-11-01

    A computer-aided interpretation approach is proposed to detect rheumatoid arthritis (RA) in human finger joints using optical tomographic images. The image interpretation method employs a classification algorithm that makes use of a so-called self-organizing mapping scheme to classify fingers as either affected or unaffected by RA. Unlike in previous studies, this allows for combining multiple image features, such as minimum and maximum values of the absorption coefficient, for identifying affected and unaffected joints. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index, and mutual information. Different methods (i.e., clinical diagnostics, ultrasound imaging, magnetic resonance imaging, and inspection of optical tomographic images) were used to produce ground truth benchmarks to determine the performance of image interpretations. Using data from 100 finger joints, findings suggest that some parameter combinations lead to higher sensitivities, while others lead to higher specificities when compared to single-parameter classifications employed in previous studies. Maximum performances are reached when combining the minimum/maximum ratio of the absorption coefficient and image variance. In this case, sensitivities and specificities over 0.9 can be achieved. These values are much higher than values obtained when only single-parameter classifications were used, where sensitivities and specificities remained well below 0.8.
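
    For reference, a minimal sketch of the classification metrics used above, sensitivity, specificity, and the Youden index, computed from a hypothetical confusion matrix of affected versus unaffected joints; the counts are illustrative, not the study's results.

```python
# Sketch: sensitivity, specificity, and Youden index from a confusion matrix
# (hypothetical counts).
def classification_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    youden = sensitivity + specificity - 1.0
    return sensitivity, specificity, youden

sens, spec, youden = classification_metrics(tp=46, fn=4, tn=45, fp=5)
print("sensitivity %.2f, specificity %.2f, Youden index %.2f" % (sens, spec, youden))
```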

  3. Analysis of iterative region-of-interest image reconstruction for x-ray computed tomography.

    PubMed

    Sidky, Emil Y; Kraemer, David N; Roth, Erin G; Ullberg, Christer; Reiser, Ingrid S; Pan, Xiaochuan

    2014-10-03

    One of the challenges for iterative image reconstruction (IIR) is that such algorithms solve an imaging model implicitly, requiring a complete representation of the scanned subject within the viewing domain of the scanner. This requirement can place a prohibitively high computational burden for IIR applied to x-ray computed tomography (CT), especially when high-resolution tomographic volumes are required. In this work, we aim to develop an IIR algorithm for direct region-of-interest (ROI) image reconstruction. The proposed class of IIR algorithms is based on an optimization problem that incorporates a data fidelity term, which compares a derivative of the estimated data with the available projection data. In order to characterize this optimization problem, we apply it to computer-simulated two-dimensional fan-beam CT data, using both ideal noiseless data and realistic data containing a level of noise comparable to that of the breast CT application. The proposed method is demonstrated for both complete field-of-view and ROI imaging. To demonstrate the potential utility of the proposed ROI imaging method, it is applied to actual CT scanner data.

  4. Comprehensive Modeling and Visualization of Cardiac Anatomy and Physiology from CT Imaging and Computer Simulations

    PubMed Central

    Sun, Peng; Zhou, Haoyin; Ha, Seongmin; Hartaigh, Bríain ó; Truong, Quynh A.; Min, James K.

    2016-01-01

    In clinical cardiology, both anatomy and physiology are needed to diagnose cardiac pathologies. CT imaging and computer simulations provide valuable and complementary data for this purpose. However, it remains challenging to gain useful information from the large amount of high-dimensional diverse data. The current tools are not adequately integrated to visualize anatomic and physiologic data from a complete yet focused perspective. We introduce a new computer-aided diagnosis framework, which allows for comprehensive modeling and visualization of cardiac anatomy and physiology from CT imaging data and computer simulations, with a primary focus on ischemic heart disease. The following visual information is presented: (1) Anatomy from CT imaging: geometric modeling and visualization of cardiac anatomy, including four heart chambers, left and right ventricular outflow tracts, and coronary arteries; (2) Function from CT imaging: motion modeling, strain calculation, and visualization of four heart chambers; (3) Physiology from CT imaging: quantification and visualization of myocardial perfusion and contextual integration with coronary artery anatomy; (4) Physiology from computer simulation: computation and visualization of hemodynamics (e.g., coronary blood velocity, pressure, shear stress, and fluid forces on the vessel wall). Substantially, feedback from cardiologists has confirmed the practical utility of integrating these features for the purpose of computer-aided diagnosis of ischemic heart disease. PMID:26863663

  5. The Effect of Experimental Variables on Industrial X-Ray Micro-Computed Tomography Sensitivity

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Rauser, Richard W.

    2014-01-01

    A study was performed on the effect of experimental variables on radiographic sensitivity (image quality) in x-ray micro-computed tomography images for a high density thin wall metallic cylinder containing micro-EDM holes. Image quality was evaluated in terms of signal-to-noise ratio, flaw detectability, and feature sharpness. The variables included: day-to-day reproducibility, current, integration time, voltage, filtering, number of frame averages, number of projection views, beam width, effective object radius, binning, orientation of sample, acquisition angle range (180deg to 360deg), and directional versus transmission tube.

  6. Synthesizing parallel imaging applications using the CAP (computer-aided parallelization) tool

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Mazzariol, Marc; Messerli, Vincent; Hersch, Roger D.

    1997-12-01

    Imaging applications such as filtering, image transforms and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task: writing and debugging the application is difficult (deadlocks), programs may not be portable from one parallel architecture to the other, and performance often falls short of expectations. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool which enables application programmers to specify at a high level of abstraction the flow of data between pipelined-parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP enables efficiently combining parallel storage access routines and sequential image processing operations. This paper shows how processing and I/O intensive imaging applications must be implemented to take advantage of parallelism and pipelining between data access and processing. This paper's contribution is (1) to show how such implementations can be compactly specified in CAP, and (2) to demonstrate that CAP specified applications achieve the performance of custom parallel code. The paper analyzes theoretically the performance of CAP specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.

  7. Tse computers. [ultrahigh speed optical processing for two dimensional binary image

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.; Strong, J. P., III

    1977-01-01

    An ultra-high-speed computer that utilizes binary images as its basic computational entity is being developed. The basic logic components perform thousands of operations simultaneously. Technologies of the fiber optics, display, thin film, and semiconductor industries are being utilized in the building of the hardware.

  8. Cone Beam Computed Tomographic imaging in orthodontics.

    PubMed

    Scarfe, W C; Azevedo, B; Toghyani, S; Farman, A G

    2017-03-01

    Over the last 15 years, cone beam computed tomographic (CBCT) imaging has emerged as an important supplemental radiographic technique for orthodontic diagnosis and treatment planning, especially in situations which require an understanding of the complex anatomic relationships and surrounding structures of the maxillofacial skeleton. CBCT imaging provides unique features and advantages to enhance orthodontic practice over conventional extraoral radiographic imaging. While it is the responsibility of each practitioner to make a decision, in tandem with the patient/family, consensus-derived, evidence-based clinical guidelines are available to assist the clinician in the decision-making process. Specific recommendations provide selection guidance based on variables such as phase of treatment, clinically-assessed treatment difficulty, the presence of dental and/or skeletal modifying conditions, and pathology. CBCT imaging in orthodontics should always be considered wisely as children have conservatively, on average, a three to five times greater radiation risk compared with adults for the same exposure. The purpose of this paper is to provide an understanding of the operation of CBCT equipment as it relates to image quality and dose, highlight the benefits of the technique in orthodontic practice, and provide guidance on appropriate clinical use with respect to radiation dose and relative risk, particularly for the paediatric patient. © 2017 Australian Dental Association.

  9. Computed tomographic imaging of stapes implants.

    PubMed

    Warren, Frank M; Riggs, Sterling; Wiggins, Richard H

    2008-08-01

    Computed tomographic (CT) imaging of stapes prostheses is inaccurate. Clinical situations arise in which it would be helpful to determine the depth of penetration of a stapes prosthesis into the vestibule. The accuracy of CT imaging for this purpose has not been defined. This study aimed to determine the accuracy of CT imaging in predicting the depth of intrusion of stapes prostheses into the vestibule. The measurement of stapes prostheses by CT scan was compared with physical measurements in 8 cadaveric temporal bones. The depth of intrusion of the piston into the vestibule was underestimated in specimens with the fluoroplastic piston by a mean of 0.5 mm when compared with the measurements obtained in the temporal bones. The depth of penetration of the stainless steel implant was overestimated by 0.5 mm when compared with that in the temporal bone. The type of implant must be taken into consideration when estimating the depth of penetration into the vestibule using CT scanning because the imaging characteristics of the implanted materials differ. The position of fluoroplastic pistons cannot be accurately measured in the vestibule. Metallic implants are well visualized, and measurements exceeding 2.2 mm increase the suspicion of otolithic impingement. Special reconstructions along the length of the piston may be more accurate in estimating the position of stapes implants.

  10. Quantitative evaluation of 3D images produced from computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Sheerin, David T.; Mason, Ian R.; Cameron, Colin D.; Payne, Douglas A.; Slinger, Christopher W.

    1999-08-01

    Advances in computing and optical modulation techniques now make it possible to anticipate the generation of near real-time, reconfigurable, high quality, three-dimensional images using holographic methods. Computer generated holography (CGH) is the only technique which holds promise of producing synthetic images having the full range of visual depth cues. These realistic images will be viewable by several users simultaneously, without the need for headtracking or special glasses. Such a data visualization tool will be key to speeding up the manufacture of new commercial and military equipment by negating the need for the production of physical 3D models in the design phase. DERA Malvern has been involved in designing and testing fixed CGH in order to understand the connection between the complexity of the CGH, the algorithms used to design them, the processes employed in their implementation and the quality of the images produced. This poster describes results from CGH containing up to 10^8 pixels. The methods used to evaluate the reconstructed images are discussed and quantitative measures of image fidelity made. An understanding of the effect of the various system parameters upon final image quality enables a study of the possible system trade-offs to be carried out. Such an understanding of CGH production and resulting image quality is key to effective implementation of a reconfigurable CGH system currently under development at DERA.

  11. Computational Phase Imaging for Biomedical Applications

    NASA Astrophysics Data System (ADS)

    Nguyen, Tan Huu

    When a sample is illuminated by an imaging field, its fingerprints are left on the amplitude and the phase of the emerging wave. Capturing the information of the wavefront grants us a deeper understanding of the optical properties of the sample, and of the light-matter interaction. While the amplitude information has been intensively studied, the use of the phase information has been less common. Because all detectors are sensitive to intensity, not phase, wavefront measurements are significantly more challenging. Deploying optical interferometry to measure phase through phase-intensity conversion, quantitative phase imaging (QPI) has recently gained tremendous success in material and life sciences. The first topic of this dissertation describes our effort to develop a new QPI setup, named transmission Spatial Light Interference Microscopy (tSLIM), that uses twisted nematic liquid-crystal (TNLC) modulators. Compared to the established SLIM technique, tSLIM is much less expensive to build than its predecessor (SLIM) while maintaining significant performance. The tSLIM system uses parallel aligned liquid-crystal (PANLC) modulators, has a slightly smaller signal-to-noise ratio (SNR), and a more complicated model for the image formation. However, such complexity is well addressed by computing. Most importantly, tSLIM uses TNLC modulators that are popular in display LCDs. Therefore, the total cost of the system is significantly reduced. Alongside developing new imaging modalities, we also improved current QPI imaging systems. In practice, an incident field to the sample is rarely perfectly spatially coherent, i.e., a plane wave. It is generally partially coherent; i.e., it comprises many incoherent plane waves coming from multiple directions. This illumination yields artifacts in the phase measurement results, e.g., halo and phase-underestimation. One solution is using a very bright source, e.g., a laser, which can be spatially filtered very well. However, the

  12. Breast MRI radiomics: comparison of computer- and human-extracted imaging phenotypes.

    PubMed

    Sutton, Elizabeth J; Huang, Erich P; Drukker, Karen; Burnside, Elizabeth S; Li, Hui; Net, Jose M; Rao, Arvind; Whitman, Gary J; Zuley, Margarita; Ganott, Marie; Bonaccio, Ermelinda; Giger, Maryellen L; Morris, Elizabeth A

    2017-01-01

    In this study, we sought to investigate if computer-extracted magnetic resonance imaging (MRI) phenotypes of breast cancer could replicate human-extracted size and Breast Imaging-Reporting and Data System (BI-RADS) imaging phenotypes using MRI data from The Cancer Genome Atlas (TCGA) project of the National Cancer Institute. Our retrospective interpretation study involved analysis of Health Insurance Portability and Accountability Act-compliant breast MRI data from The Cancer Imaging Archive, an open-source database from the TCGA project. This study was exempt from institutional review board approval at Memorial Sloan Kettering Cancer Center and the need for informed consent was waived. Ninety-one pre-operative breast MRIs with verified invasive breast cancers were analysed. Three fellowship-trained breast radiologists evaluated the index cancer in each case according to size and the BI-RADS lexicon for shape, margin, and enhancement (human-extracted image phenotypes [HEIP]). Human inter-observer agreement was analysed by the intra-class correlation coefficient (ICC) for size and Krippendorff's α for other measurements. Quantitative MRI radiomics of computerised three-dimensional segmentations of each cancer generated computer-extracted image phenotypes (CEIP). Spearman's rank correlation coefficients were used to compare HEIP and CEIP. Inter-observer agreement for HEIP varied, with the highest agreement seen for size (ICC 0.679) and shape (ICC 0.527). The computer-extracted maximum linear size replicated the human measurement with p < 10^-12. CEIP of shape, specifically sphericity and irregularity, replicated HEIP with both p values < 0.001. CEIP did not demonstrate agreement with HEIP of tumour margin or internal enhancement. Quantitative radiomics of breast cancer may replicate human-extracted tumour size and BI-RADS imaging phenotypes, thus enabling precision medicine.
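
    A minimal sketch of the agreement analysis described above: Spearman's rank correlation between human- and computer-extracted tumour size; the 91-case sample size follows the abstract, but the measurements themselves are synthetic.

```python
# Sketch: Spearman rank correlation between human- and computer-extracted size
# (synthetic measurements, for illustration only).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
human_size_mm = rng.uniform(5, 60, 91)                     # 91 cancers, as in the study
computer_size_mm = human_size_mm + rng.normal(0, 2, 91)    # close, noisy replicate

rho, p_value = spearmanr(human_size_mm, computer_size_mm)
print("Spearman rho = %.2f, p = %.1e" % (rho, p_value))
```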

  13. Pathomorphism of spiral tibial fractures in computed tomography imaging.

    PubMed

    Guzik, Grzegorz

    2011-01-01

    Spiral fractures of the tibia are virtually homogeneous with regard to their pathomorphism. The differences that are seen concern the level of fracture of the fibula, and, to a lesser extent, the level of fracture of the tibia, the length of fracture cleft, and limb shortening following the trauma. While conventional radiographs provide sufficient information about the pathomorphism of fractures, computed tomography can be useful in demonstrating the spatial arrangement of bone fragments and topography of soft tissues surrounding the fracture site. Multiple cross-sectional computed tomography views of spiral fractures of the tibia show the details of the alignment of bone chips at the fracture site, axis of the tibial fracture cleft, and topography of soft tissues that are not visible on standard radiographs. A model of a spiral tibial fracture reveals periosteal stretching with increasing spiral and longitudinal displacement. The cleft in tibial fractures has a spiral shape and its line is invariable. Every spiral fracture of both crural bones results in extensive damage to the periosteum and may damage bellies of the long flexor muscle of toes, flexor hallucis longus as well as the posterior tibial muscle. Computed tomography images of spiral fractures of the tibia show details of damage that are otherwise invisible on standard radiographs. Moreover, CT images provide useful information about the spatial location of the bone chips as well as possible threats to soft tissues that surround the fracture site. Every spiral fracture of the tibia is associated with disruption of the periosteum. 1. Computed tomography images of spiral fractures of the tibia show details of damage otherwise invisible on standard radiographs, 2. The sharp end of the distal tibial chip can damage the tibialis posterior muscle, long flexor muscles of the toes and the flexor hallucis longus, 3. Every spiral fracture of the tibia is associated with disruption of the periosteum.

  14. Grid Computing Application for Brain Magnetic Resonance Image Processing

    NASA Astrophysics Data System (ADS)

    Valdivia, F.; Crépeault, B.; Duchesne, S.

    2012-02-01

    This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present result on system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance if using the external cluster. However, the latter's performance does not scale linearly as queue waiting times and execution overhead increase with the number of tasks to be executed.
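
    A minimal sketch, in Python rather than the authors' PHP, of the pipeline idea described above: each process is a step with an input and an output file, and a pipeline is an ordered chain of such steps dispatched as external commands. The step names and tool invocations (nu_correct, flirt, fnirt) are illustrative stand-ins only:

    ```python
    import subprocess
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Process:
        """One pipeline step: a name plus a function that builds the command line
        from an input file and an output file (its I/O ports)."""
        name: str
        build_cmd: Callable[[str, str], List[str]]

        def run(self, src: str, dst: str) -> str:
            subprocess.run(self.build_cmd(src, dst), check=True)
            return dst

    # Illustrative steps; the real pipeline chains ten such processes.
    pipeline = [
        Process("intensity_normalise", lambda i, o: ["nu_correct", i, o]),
        Process("linear_register", lambda i, o: ["flirt", "-in", i, "-ref", "template.nii", "-out", o]),
        Process("nonlinear_register", lambda i, o: ["fnirt", f"--in={i}", f"--iout={o}"]),
    ]

    def run_pipeline(mri_path: str) -> str:
        current = mri_path
        for step in pipeline:
            current = step.run(current, f"{step.name}_{mri_path}")
        return current

    # run_pipeline("subject001_t1.nii")   # requires the external tools on the PATH
    ```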

  15. Dendrimer-stabilized bismuth sulfide nanoparticles: synthesis, characterization, and potential computed tomography imaging applications.

    PubMed

    Fang, Yi; Peng, Chen; Guo, Rui; Zheng, Linfeng; Qin, Jinbao; Zhou, Benqing; Shen, Mingwu; Lu, Xinwu; Zhang, Guixiang; Shi, Xiangyang

    2013-06-07

    We report here a general approach to synthesizing dendrimer-stabilized bismuth sulfide nanoparticles (Bi2S3 DSNPs) for potential computed tomography (CT) imaging applications. In this study, ethylenediamine core glycidol hydroxyl-terminated generation 4 poly(amidoamine) dendrimers (G4.NGlyOH) were used as stabilizers to first complex the Bi(III) ions, followed by reaction with hydrogen sulfide to generate Bi2S3 DSNPs. By varying the molar ratio of Bi atom to dendrimer, stable Bi2S3 DSNPs with an average size range of 5.2-5.7 nm were formed. The formed Bi2S3 DSNPs were characterized via different techniques. X-ray absorption coefficient measurements show that the attenuation of Bi2S3 DSNPs is much higher than that of an iodine-based CT contrast agent at the same molar concentration of the active element (Bi versus iodine). The 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) cell viability assay and a hemolysis assay reveal that the formed Bi2S3 DSNPs are noncytotoxic and have a negligible hemolysis effect in the studied concentration range. Furthermore, we show that cells incubated with the Bi2S3 DSNPs can be imaged using CT, that prominent contrast enhancement is visible at the site of subcutaneous injection of the Bi2S3 DSNPs in a rabbit, and that the mouse pulmonary vein can be visualized by CT after intravenous injection of the Bi2S3 DSNPs. With their good biocompatibility, enhanced X-ray attenuation, and tunable dendrimer chemistry, the designed Bi2S3 DSNPs can be further functionalized, allowing them to be used as a highly efficient contrast agent for CT imaging of different biological systems.

  16. Can axial-based nodal size criteria be used in other imaging planes to accurately determine "enlarged" head and neck lymph nodes?

    PubMed

    Bartlett, Eric S; Walters, Thomas D; Yu, Eugene

    2013-01-01

    Objective. We evaluate whether axial-based lymph node size criteria can be applied to the coronal and sagittal planes. Methods. Fifty pretreatment computed tomographic (CT) neck exams were evaluated in patients with head and neck squamous cell carcinoma (SCCa) and neck lymphadenopathy. Axial-based size criteria were applied to all 3 imaging planes, measured, and classified as "enlarged" if equal to or exceeding size criteria. Results. 222 lymph nodes were "enlarged" in at least one imaging plane; however, only 53.2% (118/222) of these were "enlarged" in all 3 planes. Classification concordance between the axial versus coronal/sagittal planes was poor (kappa = -0.09 and -0.07, resp., P < 0.05). The McNemar test showed systematic misclassification when comparing axial versus coronal (P < 0.001) and axial versus sagittal (P < 0.001) planes. Conclusion. Classification of "enlarged" lymph nodes differs between the axial and the coronal/sagittal imaging planes when axial-based nodal size criteria are applied independently to each imaging plane and used without other morphologic nodal data.
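
    A minimal sketch of the concordance statistics reported above, Cohen's kappa and an exact McNemar test on the discordant pairs, using hypothetical per-node classifications (1 = meets the size criterion) rather than the study data:

    ```python
    import numpy as np
    from scipy.stats import binomtest  # exact McNemar test via the binomial test

    def cohens_kappa(a, b):
        """Cohen's kappa for two binary classifications (1 = 'enlarged')."""
        a, b = np.asarray(a), np.asarray(b)
        po = np.mean(a == b)
        pe = np.mean(a) * np.mean(b) + (1 - np.mean(a)) * (1 - np.mean(b))
        return (po - pe) / (1 - pe)

    def mcnemar_exact(a, b):
        """Exact McNemar test on the discordant pairs of two classifications."""
        n01 = int(np.sum((a == 0) & (b == 1)))
        n10 = int(np.sum((a == 1) & (b == 0)))
        return binomtest(n01, n01 + n10, 0.5).pvalue

    # Hypothetical per-node classifications in two planes.
    axial   = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
    coronal = np.array([0, 1, 1, 0, 0, 1, 0, 1, 1, 1])
    print("kappa:", cohens_kappa(axial, coronal))
    print("McNemar p:", mcnemar_exact(axial, coronal))
    ```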

  17. MO-G-17A-02: Computer Simulation Studies for On-Board Functional and Molecular Imaging of the Prostate Using a Robotic Multi-Pinhole SPECT System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, L; Duke University Medical Center, Durham, NC; Fudan University Shanghai Cancer Center, Shanghai

    Purpose: To investigate prostate imaging onboard radiation therapy machines using a novel robotic, 49-pinhole Single Photon Emission Computed Tomography (SPECT) system. Methods: Computer-simulation studies were performed for region-of-interest (ROI) imaging using a 49-pinhole SPECT collimator and for broad cross-section imaging using a parallel-hole SPECT collimator. A male XCAT phantom was computer-simulated in the supine position with one 12 mm-diameter tumor added in the prostate. A treatment couch was added to the phantom. Four-minute detector trajectories for imaging a 7 cm-diameter spherical ROI encompassing the tumor were investigated with different parameters, including pinhole focal length, pinhole diameter and trajectory starting angle. Pseudo-random Poisson noise was included in the simulated projection data, and SPECT images were reconstructed by OSEM with 4 subsets and up to 10 iterations. Images were evaluated by visual inspection, profiles, and Root-Mean-Square Error (RMSE). Results: The tumor was well visualized above background by the 49-pinhole SPECT system with different pinhole parameters, while it was not visible with parallel-hole SPECT imaging. Minimum RMSEs were 0.30 for 49-pinhole imaging and 0.41 for parallel-hole imaging. For parallel-hole imaging, the detector trajectory from right to left yielded slightly lower RMSEs than that from posterior to anterior. For 49-pinhole imaging, near-minimum RMSEs were maintained over a broader range of OSEM iterations with a 5 mm pinhole diameter and 21 cm focal length versus a 2 mm diameter pinhole and 18 cm focal length. The detector with 21 cm pinhole focal length had the shortest rotation radius averaged over the trajectory. Conclusion: On-board functional and molecular prostate imaging may be feasible in 4-minute scan times by robotic SPECT. A 49-pinhole SPECT system could improve such imaging as compared to broad cross-section parallel-hole collimated SPECT imaging. Multi-pinhole imaging can be improved by
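
    A minimal sketch of the RMSE figure of merit used above, evaluated over a spherical region of interest in a voxelised phantom; the array size, voxel spacing and tumour placement are hypothetical:

    ```python
    import numpy as np

    def rmse(recon, truth, roi_mask=None):
        """Root-mean-square error between a reconstruction and the phantom,
        optionally restricted to a region-of-interest mask."""
        diff = np.asarray(recon, float) - np.asarray(truth, float)
        if roi_mask is not None:
            diff = diff[roi_mask]
        return np.sqrt(np.mean(diff ** 2))

    # Hypothetical 7 cm-diameter spherical ROI in a phantom with 2 mm voxels.
    shape = (64, 64, 64)
    zz, yy, xx = np.indices(shape)
    radius_vox = 35.0 / 2.0                       # 35 mm radius / 2 mm per voxel
    roi = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 <= radius_vox ** 2

    truth = np.zeros(shape)
    truth[29:35, 29:35, 29:35] = 1.0              # ~12 mm cubic "tumour"
    recon = truth + 0.1 * np.random.default_rng(0).standard_normal(shape)
    print("ROI RMSE:", rmse(recon, truth, roi))
    ```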

  18. High-pitch Helical Dual-source Computed Tomographic Pulmonary Angiography: Comparing Image Quality in Inspiratory Breath-hold and During Free Breathing.

    PubMed

    Ajlan, Amr M; Binzaqr, Salma; Jadkarim, Dalia A; Jamjoom, Lamia G; Leipsic, Jonathon

    2016-01-01

    The purpose of this study was to compare qualitative and quantitative image parameters of dual-source high-pitch helical computed tomographic pulmonary angiography (CTPA) in breath-holding (BH) versus free-breathing (FB) patients. Ninety-nine consented patients (61 female individuals; mean age±SD, 49±18.7 y) were randomized into BH (n=45) versus FB (n=54) high-pitch helical CTPA. Patient characteristics and CTPA radiation doses were analyzed. Two readers assessed for pulmonary embolism (PE), transient interruption of contrast, and respiratory and cardiac motion. The readers used a subjective 3-point scale to rate the pulmonary artery opacification and lung parenchymal appearance. A single reader assessed mean pulmonary artery signal intensity, noise, contrast, signal to noise ratio, and contrast to noise ratio. PE was diagnosed in 16% BH and 19% FB patients. CTPAs of both groups were of excellent or acceptable quality for PE evaluation and of similar mean radiation doses (1.3 mSv). Transient interruption of contrast was seen in 5/45 (11%) BH and 5/54 (9%) FB patients (not statistically significant, P=0.54). No statistically significant difference was noted in cardiac, diaphragmatic, and lung parenchymal motion. Lung parenchymal assessment was excellent in all cases, except for 5/54 (9%) motion-affected FB cases with acceptable quality (statistically significant, P=0.03). No CTPA was considered nondiagnostic by any of the readers. No objective image quality differences were noted between both groups (P>0.05). High-pitch helical CTPA acquired during BH or in FB yields comparable image quality for the diagnosis of PE and lung pathology, with low radiation exposure. Only a modest increase in lung parenchymal artifacts is encountered in FB high-pitch helical CTPA.

  19. Computer-Aided Diagnosis in Medical Imaging: Historical Review, Current Status and Future Potential

    PubMed Central

    Doi, Kunio

    2007-01-01

    Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. In this article, the motivation and philosophy for early development of CAD schemes are presented together with the current status and future potential of CAD in a PACS environment. With CAD, radiologists use the computer output as a “second opinion” and make the final decisions. CAD is a concept established by taking into account equally the roles of physicians and computers, whereas automated computer diagnosis is a concept based on computer algorithms only. With CAD, the performance by computers does not have to be comparable to or better than that by physicians, but needs to be complementary to that by physicians. In fact, a large number of CAD systems have been employed for assisting physicians in the early detection of breast cancers on mammograms. A CAD scheme that makes use of lateral chest images has the potential to improve the overall performance in the detection of lung nodules when combined with another CAD scheme for PA chest images. Because vertebral fractures can be detected reliably by computer on lateral chest radiographs, radiologists’ accuracy in the detection of vertebral fractures would be improved by the use of CAD, and thus early diagnosis of osteoporosis would become possible. In MRA, a CAD system has been developed for assisting radiologists in the detection of intracranial aneurysms. On successive bone scan images, a CAD scheme for detection of interval changes has been developed by use of temporal subtraction images. In the future, many CAD schemes could be assembled as packages and implemented as a part of PACS. For example, the package for chest CAD may include the computerized detection of lung nodules, interstitial opacities, cardiomegaly, vertebral fractures, and interval changes in chest radiographs as well as the computerized classification of benign and malignant nodules and the differential diagnosis of

  20. Medical imaging and registration in computer assisted surgery.

    PubMed

    Simon, D A; Lavallée, S

    1998-09-01

    Imaging, sensing, and computing technologies that are being introduced to aid in the planning and execution of surgical procedures are providing orthopaedic surgeons with a powerful new set of tools for improving clinical accuracy, reliability, and patient outcomes while reducing costs and operating times. Current computer assisted surgery systems typically include a measurement process for collecting patient specific medical data, a decision making process for generating a surgical plan, a registration process for aligning the surgical plan to the patient, and an action process for accurately achieving the goals specified in the plan. Some of the key concepts in computer assisted surgery applied to orthopaedics are outlined, with a focus on the basic framework and underlying technologies. In addition, technical challenges and future trends in the field are discussed.

  1. Fuzzy Markov random fields versus chains for multispectral image segmentation.

    PubMed

    Salzenstein, Fabien; Collet, Christophe

    2006-11-01

    This paper deals with a comparison of recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation. The fuzzy scheme takes into account discrete and continuous classes which model the imprecision of the hidden data. In this framework, we assume the dependence between bands and we express the general model for the covariance matrix. A fuzzy Markov chain model is developed in an unsupervised way. This method is compared with the fuzzy Markovian field model previously proposed by one of the authors. The segmentation task is processed with Bayesian tools, such as the well-known MPM (Mode of Posterior Marginals) criterion. Our goal is to compare the robustness and rapidity for both methods (fuzzy Markov fields versus fuzzy Markov chains). Indeed, such fuzzy-based procedures seem to be a good answer, e.g., for astronomical observations when the patterns present diffuse structures. Moreover, these approaches allow us to process missing data in one or several spectral bands which correspond to specific situations in astronomy. To validate both models, we perform and compare the segmentation on synthetic images and raw multispectral astronomical data.

  2. Computed tomography angiography in acute stroke (revisiting the 4Ps of imaging).

    PubMed

    Varadharajan, Shriram; Saini, Jitender; Acharya, Ullas V; Gupta, Arun Kumar

    2016-02-01

    Imaging in acute stroke has traditionally focussed on the 4Ps-parenchyma, pipes, perfusion, and penumbra-and has increasingly relied upon advanced techniques including magnetic resonance imaging to evaluate such patients. However, as per European Magnetic Resonance Forum estimates, the availability of magnetic resonance imaging scanners for the general population in India (0.5 per million inhabitants) is quite low as compared to Europe (11 per million) and United States (35 per million), with most of them only present in urban cities. On the other hand, computed tomography (CT) is more widely available and has reduced scanning duration. Computed tomography angiography of cervical and intracranial vessels is relatively simpler to perform with extended coverage and can provide all pertinent information required in such patients. This imaging review will discuss relevant imaging findings on CT angiography in patients with acute ischemic stroke through illustrated cases. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. NCI Workshop Report: Clinical and Computational Requirements for Correlating Imaging Phenotypes with Genomics Signatures.

    PubMed

    Colen, Rivka; Foster, Ian; Gatenby, Robert; Giger, Mary Ellen; Gillies, Robert; Gutman, David; Heller, Matthew; Jain, Rajan; Madabhushi, Anant; Madhavan, Subha; Napel, Sandy; Rao, Arvind; Saltz, Joel; Tatum, James; Verhaak, Roeland; Whitman, Gary

    2014-10-01

    The National Cancer Institute (NCI) Cancer Imaging Program organized two related workshops on June 26-27, 2013, entitled "Correlating Imaging Phenotypes with Genomics Signatures Research" and "Scalable Computational Resources as Required for Imaging-Genomics Decision Support Systems." The first workshop focused on clinical and scientific requirements, exploring our knowledge of phenotypic characteristics of cancer biological properties to determine whether the field is sufficiently advanced to correlate with imaging phenotypes that underpin genomics and clinical outcomes, and exploring new scientific methods to extract phenotypic features from medical images and relate them to genomics analyses. The second workshop focused on computational methods that explore informatics and computational requirements to extract phenotypic features from medical images and relate them to genomics analyses and improve the accessibility and speed of dissemination of existing NIH resources. These workshops linked clinical and scientific requirements of currently known phenotypic and genotypic cancer biology characteristics with imaging phenotypes that underpin genomics and clinical outcomes. The group generated a set of recommendations to NCI leadership and the research community that encourage and support development of the emerging radiogenomics research field to address short- and longer-term goals in cancer research.

  4. Study on computer-aided diagnosis of hepatic MR imaging and mammography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Xuejun

    2005-04-01

    It is well known that the liver is an organ easily attacked by diseases. The purpose of this study is to develop a computer-aided diagnosis (CAD) scheme for helping radiologists to differentiate hepatic diseases more efficiently. Our software named LIVERANN integrated the magnetic resonance (MR) imaging findings with different pulse sequences to classify the five categories of hepatic diseases by using the artificial neural network (ANN) method. The intensity and homogeneity within the region of interest (ROI) delineated by a radiologist were automatically calculated to obtain numerical data by the program for input signals to the ANN. Outputs were the five pathological categories of hepatic diseases (hepatic cyst, hepatocellular carcinoma, dysplasia in cirrhosis, cavernous hemangioma, and metastasis). The experiment demonstrated a testing accuracy of 93% from 80 patients. In order to differentiate cirrhosis from normal liver, the volume ratio of left to whole (LTW) was proposed to quantify the degree of cirrhosis by three-dimensional (3D) volume analysis. The liver region was first extracted from computed tomography (CT) or MR slices based on edge detection algorithms, and then separated into the left lobe and right lobe by the hepatic umbilical fissure. The LTW ratio significantly improved differentiation performance, at 25.6%±4.3% in cirrhosis versus 16.4%±5.4% in the normal liver. In addition, the application of the ANN method for detecting clustered microcalcifications in masses on mammograms is described here as well. A new structural ANN, the so-called shift-invariant artificial neural network (SIANN), was integrated with our triple-ring filter (TRF) method in our CAD system. As a result, the sensitivity of detecting clusters was improved from 90% by our previous TRF method to 95% by using both SIANN and TRF.
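
    A minimal sketch of the left-to-whole (LTW) volume ratio described above, computed from a labelled 3D liver segmentation; the label values and the toy volume are hypothetical:

    ```python
    import numpy as np

    def ltw_ratio(liver_labels, left_label=1, right_label=2, voxel_volume_ml=1.0):
        """Left-to-whole liver volume ratio from a labelled 3D segmentation in which
        the left and right lobes have been split along the umbilical fissure."""
        left = np.count_nonzero(liver_labels == left_label) * voxel_volume_ml
        right = np.count_nonzero(liver_labels == right_label) * voxel_volume_ml
        return left / (left + right)

    # Toy volume: a large 'right lobe' region and a small 'left lobe' region.
    labels = np.zeros((20, 20, 20), dtype=np.uint8)
    labels[:, :10, :] = 2
    labels[:3, 10:, :] = 1
    print("LTW ratio:", ltw_ratio(labels))
    ```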

  5. Quantitative computational infrared imaging of buoyant diffusion flames

    NASA Astrophysics Data System (ADS)

    Newale, Ashish S.

    Studies of infrared radiation from turbulent buoyant diffusion flames impinging on structural elements have applications to the development of fire models. A numerical and experimental study of radiation from buoyant diffusion flames with and without impingement on a flat plate is reported. Quantitative images of the radiation intensity from the flames are acquired using a high-speed infrared camera. Large eddy simulations are performed using the Fire Dynamics Simulator (FDS, version 6). The species concentrations and temperature from the simulations are used in conjunction with a narrow-band radiation model (RADCAL) to solve the radiative transfer equation. The computed infrared radiation intensities are rendered in the form of images and compared with the measurements. The measured and computed radiation intensities reveal necking and bulging with a characteristic frequency of 7.1 Hz, which is in agreement with previous empirical correlations. The results demonstrate the effects of the stagnation-point boundary layer on the upstream buoyant shear layer. The coupling between these two shear layers presents a model problem for the sub-grid scale modeling necessary for future large eddy simulations.

  6. Image quality and artefact generation post-cerebral aneurysm clipping using a 64-row multislice computer tomography angiography (MSCTA) technology: A retrospective study and review of the literature.

    PubMed

    Zachenhofer, Iris; Cejna, Manfred; Schuster, Antonius; Donat, Markus; Roessler, Karl

    2010-06-01

    Computed tomography angiography (CTA) is a time- and cost-saving investigation for postoperative evaluation of clipped cerebral aneurysm patients. A retrospective study was conducted to analyse image quality and artefact generation due to implanted aneurysm clips using a new technology. MSCTA was performed pre- and postoperatively using a Philips Brilliance 64-detector-row CT scanner. Altogether, 32 clipping sites were analysed in 27 patients (11 female and 16 male; mean age 52 years, range 24 to 72 years). The mean number of clips per aneurysm was 2.3 (range 1 to 4); 54 clips were made of titanium alloy and 5 of cobalt alloy. Altogether, mean image quality was rated 1.8 on a scale from 1 (very good) to 5 (unserviceable), and mean clip artefacts were rated 2.4 on a 5-point rating scale (1 no artefacts, 5 unserviceable due to artefacts). A significant loss of image quality and rise in artefacts was found when using cobalt alloy clips (1.4 versus 4.2 and 2.1 versus 4.0). In 72% of all investigations, excellent image quality was found. Excluding the cobalt clip group, 85% of scans showed excellent image quality. Artefacts were absent or minimal (grade 1 or 2) in 69% of all investigations and in 81% in the pure titanium clip group. In 64-row MSCTA of good image quality with low artefacts, it was possible to detect small aneurysm remnants of 2 mm size in individual patients. With titanium alloy clips, up to 85% of postoperative CTA images in our study were of excellent quality, with absent or minimal artefacts in 81%, and seem adequate to detect small aneurysm remnants. Copyright 2010 Elsevier B.V. All rights reserved.

  7. Evaluating Imaging and Computer-aided Detection and Diagnosis Devices at the FDA

    PubMed Central

    Gallas, Brandon D.; Chan, Heang-Ping; D’Orsi, Carl J.; Dodd, Lori E.; Giger, Maryellen L.; Gur, David; Krupinski, Elizabeth A.; Metz, Charles E.; Myers, Kyle J.; Obuchowski, Nancy A.; Sahiner, Berkman; Toledano, Alicia Y.; Zuley, Margarita L.

    2017-01-01

    This report summarizes the Joint FDA-MIPS Workshop on Methods for the Evaluation of Imaging and Computer-Assist Devices. The purpose of the workshop was to gather information on the current state of the science and facilitate consensus development on statistical methods and study designs for the evaluation of imaging devices to support US Food and Drug Administration submissions. Additionally, participants were expected to identify gaps in knowledge and unmet needs that should be addressed in future research. This summary is intended to document the topics that were discussed at the meeting and disseminate the lessons that have been learned through past studies of imaging and computer-aided detection and diagnosis device performance. PMID:22306064

  8. The Role and Design of Screen Images in Software Documentation.

    ERIC Educational Resources Information Center

    van der Meij, Hans

    2000-01-01

    Discussion of learning a new computer software program focuses on how to support the joint handling of a manual, input devices, and screen display. Describes a study that examined three design styles for manuals that included screen images to reduce split-attention problems and discusses theory versus practice and cognitive load theory.…

  9. Reduction of Claustrophobia with Short-Bore versus Open Magnetic Resonance Imaging: A Randomized Controlled Trial

    PubMed Central

    Rief, Matthias; Martus, Peter; Klingebiel, Randolf; Asbach, Patrick; Klessen, Christian; Diederichs, Gerd; Wagner, Moritz; Teichgräber, Ulf; Bengner, Thomas; Hamm, Bernd; Dewey, Marc

    2011-01-01

    Background Claustrophobia is a common problem precluding MR imaging. The purpose of the present study was to assess whether a short-bore or an open magnetic resonance (MR) scanner is superior in alleviating claustrophobia. Methods Institutional review board approval and patient informed consent were obtained to compare short-bore versus open MR. From June 2008 to August 2009, 174 patients (139 women; mean age = 53.1 [SD 12.8]) with an overall mean score of 2.4 (SD 0.7, range 0 to 4) on the Claustrophobia Questionnaire (CLQ) and a clinical indication for imaging were randomly assigned to receive evaluation by open or by short-bore MR. The primary outcomes were incomplete MR examinations due to a claustrophobic event. Follow-up was conducted 7 months after MR imaging. The primary analysis was performed according to the intention-to-treat strategy. Results With 33 claustrophobic events in the short-bore group (39% [95% confidence interval [CI] 28% to 50%]) versus 23 in the open scanner group (26% [95% CI 18% to 37%]; P = 0.08), the difference was not significant. Patients with an event were in the examination room for 3.8 min (SD 4.4) in the short-bore and for 8.5 min (SD 7) in the open group (P = 0.004). This was due to an earlier occurrence of events in the short-bore group. The CLQ suffocation subscale was significantly associated with the occurrence of claustrophobic events (P = 0.003). New findings that explained symptoms were found in 69% of MR examinations and led to changes in medical treatment in 47% and surgery in 10% of patients. After 7 months, perceived claustrophobia increased in 32% of patients with events versus in only 11% of patients without events (P = 0.004). Conclusions Even recent MR scanners cannot prevent claustrophobia, suggesting that further developments to create a more patient-centered MR scanner environment are needed. Trial Registration ClinicalTrials.gov NCT00715806 PMID:21887259

  10. Computer vision applications for coronagraphic optical alignment and image processing.

    PubMed

    Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A

    2013-05-10

    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.

  11. Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.

    PubMed

    Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A

    2017-05-01

    Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.
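
    The two-step decomposition described above, a gradient step on the data-fidelity term followed by a proximal update for the nonsmooth regularization term, can be illustrated with a generic proximal-gradient iteration on a toy linear model; this sketch uses neither the paper's dual-averaging schedule nor its source encoding or wave-equation solver:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear forward model standing in for the (much costlier) wave-equation solver.
    n_data, n_pix = 200, 50
    H = rng.standard_normal((n_data, n_pix)) / np.sqrt(n_data)
    x_true = np.zeros(n_pix)
    x_true[[5, 17, 33]] = [1.0, -0.5, 0.8]
    d = H @ x_true + 0.01 * rng.standard_normal(n_data)

    lam = 0.01                                   # l1 regularisation weight
    step = 1.0 / np.linalg.norm(H, 2) ** 2       # 1 / Lipschitz constant of the data term
    x = np.zeros(n_pix)

    for _ in range(300):
        # 1) gradient-descent step on the data-fidelity term 0.5 * ||Hx - d||^2
        x = x - step * (H.T @ (H @ x - d))
        # 2) proximal update for the nonsmooth l1 penalty: soft-thresholding
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

    print("largest recovered coefficients at indices:", np.sort(np.argsort(np.abs(x))[-3:]))
    ```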

  12. Identification of natural images and computer-generated graphics based on statistical and textural features.

    PubMed

    Peng, Fei; Li, Jiao-ting; Long, Min

    2015-03-01

    To discriminate the acquisition pipelines of digital images, a novel scheme for the identification of natural images and computer-generated graphics is proposed based on statistical and textural features. First, the differences between them are investigated from the view of statistics and texture, and 31 dimensions of feature are acquired for identification. Then, LIBSVM is used for the classification. Finally, the experimental results are presented. The results show that it can achieve an identification accuracy of 97.89% for computer-generated graphics, and an identification accuracy of 97.75% for natural images. The analyses also demonstrate the proposed method has excellent performance, compared with some existing methods based only on statistical features or other features. The method has a great potential to be implemented for the identification of natural images and computer-generated graphics. © 2014 American Academy of Forensic Sciences.
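
    A minimal sketch of the classification setup, with a handful of illustrative statistical and textural features (far fewer than the paper's 31 dimensions) and a support vector machine (scikit-learn's SVC wraps libsvm); the images and labels below are synthetic placeholders:

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def simple_features(img):
        """A few illustrative statistical/textural features of a greyscale image."""
        img = np.asarray(img, dtype=float)
        gx, gy = np.gradient(img)
        grad_mag = np.hypot(gx, gy)
        return np.array([img.mean(), img.std(),
                         grad_mag.mean(), grad_mag.std(),
                         np.abs(np.diff(img, axis=0)).mean(),   # crude texture proxies
                         np.abs(np.diff(img, axis=1)).mean()])

    # Synthetic placeholders: `images` is a list of arrays, `labels` 1 = natural, 0 = CG.
    rng = np.random.default_rng(0)
    images = [rng.random((64, 64)) for _ in range(40)]
    labels = rng.integers(0, 2, size=40)

    X = np.vstack([simple_features(im) for im in images])
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
    ```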

  13. New Frontiers for Applications of Thermal Infrared Imaging Devices: Computational Psychophysiology in the Neurosciences

    PubMed Central

    Cardone, Daniela; Merla, Arcangelo

    2017-01-01

    Thermal infrared imaging has been proposed, and is now used, as a tool for the non-contact and non-invasive computational assessment of human autonomic nervous activity and psychophysiological states. Thanks to a new generation of high sensitivity infrared thermal detectors and the development of computational models of the autonomic control of the facial cutaneous temperature, several autonomic variables can be computed through thermal infrared imaging, including localized blood perfusion rate, cardiac pulse rate, breath rate, sudomotor and stress responses. In fact, all of these parameters impact on the control of the cutaneous temperature. The physiological information obtained through this approach, could then be used to infer about a variety of psychophysiological or emotional states, as proved by the increasing number of psychophysiology or neurosciences studies that use thermal infrared imaging. This paper presents a review of the principal achievements of thermal infrared imaging in computational psychophysiology, focusing on the capability of the technique for providing ubiquitous and unwired monitoring of psychophysiological activity and affective states. It also presents a summary on the modern, up-to-date infrared sensors technology. PMID:28475155

  14. New Frontiers for Applications of Thermal Infrared Imaging Devices: Computational Psychophysiology in the Neurosciences.

    PubMed

    Cardone, Daniela; Merla, Arcangelo

    2017-05-05

    Thermal infrared imaging has been proposed, and is now used, as a tool for the non-contact and non-invasive computational assessment of human autonomic nervous activity and psychophysiological states. Thanks to a new generation of high sensitivity infrared thermal detectors and the development of computational models of the autonomic control of the facial cutaneous temperature, several autonomic variables can be computed through thermal infrared imaging, including localized blood perfusion rate, cardiac pulse rate, breath rate, sudomotor and stress responses. In fact, all of these parameters impact on the control of the cutaneous temperature. The physiological information obtained through this approach, could then be used to infer about a variety of psychophysiological or emotional states, as proved by the increasing number of psychophysiology or neurosciences studies that use thermal infrared imaging. This paper presents a review of the principal achievements of thermal infrared imaging in computational psychophysiology, focusing on the capability of the technique for providing ubiquitous and unwired monitoring of psychophysiological activity and affective states. It also presents a summary on the modern, up-to-date infrared sensors technology.

  15. Image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo

    NASA Astrophysics Data System (ADS)

    Kohigashi, Satoru; Nakamae, Koji; Fujioka, Hiromu

    2005-04-01

    We develop the image based computer assisted diagnosis system for benign paroxysmal positional vertigo (BPPV) that consists of the balance control system simulator, the 3D eye movement simulator, and the extraction method of nystagmus response directly from an eye movement image sequence. In the system, the causes and conditions of BPPV are estimated by searching the database for record matching with the nystagmus response for the observed eye image sequence of the patient with BPPV. The database includes the nystagmus responses for simulated eye movement sequences. The eye movement velocity is obtained by using the balance control system simulator that allows us to simulate BPPV under various conditions such as canalithiasis, cupulolithiasis, number of otoconia, otoconium size, and so on. Then the eye movement image sequence is displayed on the CRT by the 3D eye movement simulator. The nystagmus responses are extracted from the image sequence by the proposed method and are stored in the database. In order to enhance the diagnosis accuracy, the nystagmus response for a newly simulated sequence is matched with that for the observed sequence. From the matched simulation conditions, the causes and conditions of BPPV are estimated. We apply our image based computer assisted diagnosis system to two real eye movement image sequences for patients with BPPV to show its validity.

  16. Analysis of iterative region-of-interest image reconstruction for x-ray computed tomography

    PubMed Central

    Sidky, Emil Y.; Kraemer, David N.; Roth, Erin G.; Ullberg, Christer; Reiser, Ingrid S.; Pan, Xiaochuan

    2014-01-01

    Abstract. One of the challenges for iterative image reconstruction (IIR) is that such algorithms solve an imaging model implicitly, requiring a complete representation of the scanned subject within the viewing domain of the scanner. This requirement can place a prohibitively high computational burden for IIR applied to x-ray computed tomography (CT), especially when high-resolution tomographic volumes are required. In this work, we aim to develop an IIR algorithm for direct region-of-interest (ROI) image reconstruction. The proposed class of IIR algorithms is based on an optimization problem that incorporates a data fidelity term, which compares a derivative of the estimated data with the available projection data. In order to characterize this optimization problem, we apply it to computer-simulated two-dimensional fan-beam CT data, using both ideal noiseless data and realistic data containing a level of noise comparable to that of the breast CT application. The proposed method is demonstrated for both complete field-of-view and ROI imaging. To demonstrate the potential utility of the proposed ROI imaging method, it is applied to actual CT scanner data. PMID:25685824

  17. Software designs of image processing tasks with incremental refinement of computation.

    PubMed

    Anastasia, Davide; Andreopoulos, Yiannis

    2010-08-01

    Software realizations of computationally-demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycles budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent nonincremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
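
    A toy illustration of incremental, bitplane-based computation: because convolution is linear, processing an image one bitplane at a time (most significant first) yields partial results that refine monotonically toward the full-precision output. This is only a sketch of the general idea, not the authors' incremental packing framework:

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def bitplanes(img8):
        """Split an 8-bit image into weighted bitplanes, most significant first."""
        return [((img8 >> b) & 1).astype(np.float64) * (1 << b) for b in range(7, -1, -1)]

    def incremental_blur(img8, kernel=np.ones((3, 3)) / 9.0):
        """Refine a 3x3 box blur one bitplane at a time; every partial result is a
        valid, coarser-precision approximation of the final output."""
        partial = np.zeros(img8.shape, dtype=np.float64)
        for plane in bitplanes(img8):
            partial += convolve(plane, kernel, mode="nearest")   # linear: contributions add up
            yield partial.copy()

    img = (np.arange(64 * 64).reshape(64, 64) % 256).astype(np.uint8)
    for i, approx in enumerate(incremental_blur(img), start=1):
        print(f"after {i} bitplane(s): max value {approx.max():.1f}")
    ```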

  18. Extremely Large Magnetoresistance in a Topological Semimetal Candidate Pyrite PtBi2

    NASA Astrophysics Data System (ADS)

    Gao, Wenshuai; Hao, Ningning; Zheng, Fa-Wei; Ning, Wei; Wu, Min; Zhu, Xiangde; Zheng, Guolin; Zhang, Jinglei; Lu, Jianwei; Zhang, Hongwei; Xi, Chuanying; Yang, Jiyong; Du, Haifeng; Zhang, Ping; Zhang, Yuheng; Tian, Mingliang

    2017-06-01

    While pyrite-type PtBi2 with a face-centered cubic structure has been predicted to be a three-dimensional (3D) Dirac semimetal, experimental study of its physical properties remains absent. Here we report the angular-dependent magnetoresistance measurements of a PtBi2 single crystal under high magnetic fields. We observed extremely large unsaturated magnetoresistance (XMR) up to (11.2 × 10^6)% at T = 1.8 K in a magnetic field of 33 T, which is comparable to the previously reported Dirac materials, such as WTe2, LaSb, and NbP. The crystals exhibit an ultrahigh mobility and significant Shubnikov-de Haas quantum oscillations with a nontrivial Berry phase. The analysis of Hall resistivity indicates that the XMR can be ascribed to nearly compensated electron and hole carriers. Our experimental results, combined with the ab initio calculations, suggest that pyrite PtBi2 is a topological semimetal candidate that might provide a platform for exploring topological materials with XMR in noble metal alloys.

  19. Interpretation of forest characteristics from computer-generated images.

    Treesearch

    T.M. Barrett; H.R. Zuuring; T. Christopher

    2006-01-01

    The need for effective communication in the management and planning of forested landscapes has led to a substantial increase in the use of visual information. Using forest plots from California, Oregon, and Washington, and a survey of 183 natural resource professionals in these states, we examined the use of computer-generated images to convey information about forest...

  20. Analysis of methods to assess frontal sinus extent in osteoplastic flap surgery: transillumination versus 6-ft Caldwell versus image guidance.

    PubMed

    Melroy, Christopher T; Dubin, Marc G; Hardy, Stuart M; Senior, Brent A

    2006-01-01

    The aim of this study was to compare three common methods (transillumination, plain radiographs, and computerized tomography [CT] image guidance) for estimating the position and extent of pneumatization of the frontal sinus in osteoplastic flap surgery. Axial CT scans and 6-ft Caldwell radiographs were performed on 10 cadaver heads. For each head, soft tissue overlying the frontal bone was raised and the anticipated position and extent of the frontal sinus at four points was marked using three common methods. The silhouette of the frontal sinus from the Caldwell plain radiograph was excised and placed in position. Four points at the periphery also were made using information obtained from a passive optically guided image-guided surgery device, and transillumination via a frontal trephination also was used to estimate sinus extent. The true sinus size was measured at each point and compared with experimental values. The use of CT image guidance generated the least difference between measured and actual values (mean = 1.91 mm; SEM = 0.29); this method was found statistically superior to Caldwell (p = 0.040) and transillumination (p = 0.007). Image guidance did not overestimate the size of the sinus (0/36) and was quicker than the Caldwell approach (8.5 versus 11.5 minutes). There was no learning curve appreciated with image guidance. Accurate and precise estimation of the position and extent of the frontal sinus is crucial when performing osteoplastic flap surgery. Use of CT image guidance was statistically superior to Caldwell and transillumination methods and proved to be safe, reproducible, economic, and easy to learn.

  1. Phantom feet on digital radionuclide images and other scary computer tales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freitas, J.E.; Dworkin, H.J.; Dees, S.M.

    1989-09-01

    Malfunction of a computer-assisted digital gamma camera is reported. Despite what appeared to be adequate acceptance testing, an error in the system gave rise to switching of images and identification text. A suggestion is made for using a hot marker, which would avoid the potential error of misinterpretation of patient images.

  2. Edge co-occurrences can account for rapid categorization of natural versus animal images

    NASA Astrophysics Data System (ADS)

    Perrinet, Laurent U.; Bednar, James A.

    2015-06-01

    Making a judgment about the semantic category of a visual scene, such as whether it contains an animal, is typically assumed to involve high-level associative brain areas. Previous explanations require progressively analyzing the scene hierarchically at increasing levels of abstraction, from edge extraction to mid-level object recognition and then object categorization. Here we show that the statistics of edge co-occurrences alone are sufficient to perform a rough yet robust (translation, scale, and rotation invariant) scene categorization. We first extracted the edges from images using a scale-space analysis coupled with a sparse coding algorithm. We then computed the “association field” for different categories (natural, man-made, or containing an animal) by computing the statistics of edge co-occurrences. These differed strongly, with animal images having more curved configurations. We show that this geometry alone is sufficient for categorization, and that the pattern of errors made by humans is consistent with this procedure. Because these statistics could be measured as early as the primary visual cortex, the results challenge widely held assumptions about the flow of computations in the visual system. The results also suggest new algorithms for image classification and signal processing that exploit correlations between low-level structure and the underlying semantic category.
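
    A simplified sketch of the second-order edge statistics underlying the "association field": a histogram of relative orientation versus separation over pairs of edge elements. The edge positions and orientations below are random placeholders rather than the output of the sparse-coding edge extractor:

    ```python
    import numpy as np

    def edge_cooccurrence_histogram(positions, orientations, n_theta=8, n_dist=4, max_dist=32.0):
        """2-D histogram of edge-pair statistics: relative orientation vs. separation.
        `positions` is (N, 2) pixel coordinates, `orientations` (N,) angles in radians."""
        hist = np.zeros((n_theta, n_dist))
        n = len(positions)
        for i in range(n):
            for j in range(i + 1, n):
                d = np.linalg.norm(positions[i] - positions[j])
                if d >= max_dist:
                    continue
                dtheta = np.abs(orientations[i] - orientations[j]) % np.pi   # orientation difference
                t_bin = min(int(dtheta / np.pi * n_theta), n_theta - 1)
                d_bin = min(int(d / max_dist * n_dist), n_dist - 1)
                hist[t_bin, d_bin] += 1
        return hist / max(hist.sum(), 1)

    # Placeholder edges; curved natural contours would concentrate probability mass
    # at small orientation differences and short separations.
    rng = np.random.default_rng(1)
    pos = rng.uniform(0, 64, size=(200, 2))
    ori = rng.uniform(0, np.pi, size=200)
    print(edge_cooccurrence_histogram(pos, ori).round(3))
    ```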

  3. Computer image analysis of etched tracks from ionizing radiation

    NASA Technical Reports Server (NTRS)

    Blanford, George E.

    1994-01-01

    I proposed to continue a cooperative research project with Dr. David S. McKay concerning image analysis of tracks. Last summer we showed that we could measure track densities using the Oxford Instruments eXL computer and software that is attached to an ISI scanning electron microscope (SEM) located in building 31 at JSC. To reduce the dependence on JSC equipment, we proposed to transfer the SEM images to UHCL for analysis. Last summer we developed techniques to use digitized scanning electron micrographs and computer image analysis programs to measure track densities in lunar soil grains. Tracks were formed by highly ionizing solar energetic particles and cosmic rays during near surface exposure on the Moon. The track densities are related to the exposure conditions (depth and time). Distributions of the number of grains as a function of their track densities can reveal the modality of soil maturation. As part of a consortium effort to better understand the maturation of lunar soil and its relation to its infrared reflectance properties, we worked on lunar samples 67701,205 and 61221,134. These samples were etched for a shorter time (6 hours) than last summer's sample and this difference has presented problems for establishing the correct analysis conditions. We used computer counting and measurement of area to obtain preliminary track densities and a track density distribution that we could interpret for sample 67701,205. This sample is a submature soil consisting of approximately 85 percent mature soil mixed with approximately 15 percent immature, but not pristine, soil.

  4. Ultra-high resolution computed tomography imaging

    DOEpatents

    Paulus, Michael J.; Sari-Sarraf, Hamed; Tobin, Jr., Kenneth William; Gleason, Shaun S.; Thomas, Jr., Clarence E.

    2002-01-01

    A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180 degrees, and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.
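
    A minimal sketch of the projection-correction step: deconvolving a 2-dimensional projection with an experimentally determined transfer function, here approximated by Wiener filtering in the Fourier domain with an assumed Gaussian point-spread function; the cone-beam reconstruction step is omitted:

    ```python
    import numpy as np

    def wiener_deconvolve(projection, psf, noise_to_signal=1e-3):
        """Deconvolve a 2D projection with the system transfer function (PSF)
        using a Wiener filter in the Fourier domain."""
        pad = np.zeros_like(projection, dtype=float)
        r, c = psf.shape
        pad[:r, :c] = psf
        pad = np.roll(pad, (-(r // 2), -(c // 2)), axis=(0, 1))   # centre PSF at the origin
        H = np.fft.fft2(pad)
        P = np.fft.fft2(projection)
        W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
        return np.real(np.fft.ifft2(W * P))

    # Assumed Gaussian PSF standing in for the measured transfer function.
    size, sigma = 15, 1.5
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    psf /= psf.sum()

    projection = np.random.default_rng(0).random((256, 256))
    corrected = wiener_deconvolve(projection, psf)
    print("corrected projection shape:", corrected.shape)
    ```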

  5. Computer-Aided Diagnostic System For Mass Survey Chest Images

    NASA Astrophysics Data System (ADS)

    Yasuda, Yoshizumi; Kinoshita, Yasuhiro; Emori, Yasufumi; Yoshimura, Hitoshi

    1988-06-01

    In order to support screening of chest radiographs on mass survey, a computer-aided diagnostic system that automatically detects abnormality of candidate images using a digital image analysis technique has been developed. Extracting boundary lines of lung fields and examining their shapes allowed various kinds of abnormalities to be detected. Correction and expansion were facilitated by describing the system control, image analysis control and judgement of abnormality in a rule-type programming language. In the experiments using typical samples of students' radiograms, good results were obtained for the detection of abnormal lung-field shape, cardiac hypertrophy and scoliosis. As for the detection of diaphragmatic abnormality, relatively good results were obtained, but further improvements will be necessary.

  6. Computational imaging with a single-pixel detector and a consumer video projector

    NASA Astrophysics Data System (ADS)

    Sych, D.; Aksenov, M.

    2018-02-01

    Single-pixel imaging is a novel rapidly developing imaging technique that employs spatially structured illumination and a single-pixel detector. In this work, we experimentally demonstrate a fully operating modular single-pixel imaging system. Light patterns in our setup are created with help of a computer-controlled digital micromirror device from a consumer video projector. We investigate how different working modes and settings of the projector affect the quality of reconstructed images. We develop several image reconstruction algorithms and compare their performance for real imaging. Also, we discuss the potential use of the single-pixel imaging system for quantum applications.
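
    A minimal sketch of the single-pixel measurement and reconstruction principle using orthogonal Hadamard patterns; in practice each ±1 pattern is shown as a pair of complementary binary masks on the digital micromirror device, and this is not the authors' specific setup or reconstruction algorithm:

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    n = 32                                    # n x n image, so n*n patterns
    N = n * n
    H = hadamard(N)                           # +/-1 Hadamard measurement matrix
    scene = np.zeros((n, n))
    scene[8:24, 12:20] = 1.0                  # hypothetical scene
    x = scene.ravel()

    # Each row of H is one pattern displayed by the DMD; the single-pixel
    # detector records the inner product of the pattern with the scene.
    measurements = H @ x

    # Hadamard matrices are orthogonal (H @ H.T = N * I), so recovery is a transform.
    recovered = (H.T @ measurements) / N
    print("max reconstruction error:", np.abs(recovered - x).max())
    ```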

  7. Image Analysis via Soft Computing: Prototype Applications at NASA KSC and Product Commercialization

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steve

    2011-01-01

    This slide presentation reviews the use of "soft computing" in image analysis; soft computing differs from "hard computing" in that it is more tolerant of imprecision, partial truth, uncertainty, and approximation. Soft computing provides flexible information processing to handle real-life ambiguous situations and achieve tractability, robustness, low solution cost, and a closer resemblance to human decision making. Several systems are or have been developed: Fuzzy Reasoning Edge Detection (FRED), Fuzzy Reasoning Adaptive Thresholding (FRAT), image enhancement techniques, and visual/pattern recognition. These systems are compared with examples that show the effectiveness of each. NASA applications that are reviewed are: Real-Time (RT) Anomaly Detection, Real-Time (RT) Moving Debris Detection, and the Columbia investigation. The RT anomaly detection reviewed the case of a damaged cable for the emergency egress system. The use of these techniques is further illustrated in the Columbia investigation with the location and detection of foam debris. There are several applications in commercial usage: image enhancement, human screening and privacy protection, visual inspection, 3D heart visualization, tumor detection, and X-ray image enhancement.

  8. A specialized plug-in software module for computer-aided quantitative measurement of medical images.

    PubMed

    Wang, Q; Zeng, Y J; Huo, P; Hu, J L; Zhang, J H

    2003-12-01

    This paper presents a specialized system for quantitative measurement of medical images. Using Visual C++, we developed a computer-aided software based on Image-Pro Plus (IPP), a software development platform. When transferred to the hard disk of a computer by an MVPCI-V3A frame grabber, medical images can be automatically processed by our own IPP plug-in for immunohistochemical analysis, cytomorphological measurement and blood vessel segmentation. In 34 clinical studies, the system has shown its high stability, reliability and ease of utility.

  9. [Accuracy of computer aided measurement for detecting dental proximal caries lesions in images of cone-beam computed tomography].

    PubMed

    Zhang, Z L; Li, J P; Li, G; Ma, X C

    2017-02-09

    Objective: To establish and validate a computer program used to aid the detection of dental proximal caries in cone-beam computed tomography (CBCT) images. Methods: According to the characteristics of caries lesions in X-ray images, a computer-aided detection program for proximal caries was established with Matlab and Visual C++. The whole process for caries lesion detection included image import and preprocessing, measuring the average gray value of an air area, choosing a region of interest and calculating its gray value, and defining the caries areas. The program was used to examine 90 proximal surfaces from 45 extracted human teeth collected from Peking University School and Hospital of Stomatology. The teeth were then scanned with a CBCT scanner (Promax 3D). The proximal surfaces of the teeth were evaluated by the caries detection program and scored by a human observer for the extent of lesions on a 6-level scale. With histologic examination serving as the reference standard, the caries detection program and the human observer performances were assessed with receiver operating characteristic (ROC) curves. Student's t-test was used to analyze the areas under the ROC curves (AUC) for the differences between the caries detection program and the human observer. The Spearman correlation coefficient was used to analyze the detection accuracy of caries depth. Results: For the diagnosis of proximal caries in CBCT images, the AUC values of the human observers and the caries detection program were 0.632 and 0.703, respectively. There was a statistically significant difference between the AUC values (P=0.023). The correlation between program performance and the gold standard (correlation coefficient r(s)=0.525) was higher than that between observer performance and the gold standard (r(s)=0.457), and there was a statistically significant difference between the correlation coefficients (P=0.000). Conclusions: The program that automatically detects dental proximal caries lesions could improve the

  10. Initial results of finger imaging using photoacoustic computed tomography

    NASA Astrophysics Data System (ADS)

    van Es, Peter; Biswas, Samir K.; Moens, Hein J. Bernelot; Steenbergen, Wiendelt; Manohar, Srirang

    2014-06-01

    We present a photoacoustic computed tomography investigation on a healthy human finger, to image blood vessels with a focus on vascularity across the interphalangeal joints. The cross-sectional images were acquired using an imager specifically developed for this purpose. The images show rich detail of the digital blood vessels, with diameters between 100 μm and 1.5 mm, in various orientations and at various depths. Different vascular layers in the skin, including the subpapillary plexus, could also be visualized. Acoustic reflections off the finger bone of photoacoustic signals generated in the skin were visible in sequential slice images along the finger, except at the location of the joint gaps. Not unexpectedly, the healthy synovial membrane at the joint gaps was not detected due to its small size and normal vascularization. Future research will concentrate on studying digits afflicted with rheumatoid arthritis to detect the inflamed synovium with its heightened vascularization, whose characteristics are potential markers for disease activity.

  11. Experimental investigation of the persuasive impact of computer generated presentation graphics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogel, D.R.

    1986-01-01

    Computer generated presentation graphics are increasingly becoming a tool to aid management in communicating information and to cause an audience to accept a point of view or take action. Unfortunately, technological capability significantly exceeds current levels of user understanding and effective application. This research examines experimentally one aspect of this problem, the persuasive impact of characteristics of computer generated presentation graphics. The research was founded in theory based on the message learning approach to persuasion. Characteristics examined were color versus black and white, text versus image enhancement, and overhead transparencies versus 35 mm slides. Treatments were presented in association with a videotaped presentation intended to persuade subjects to invest time and money in a set of time management seminars. Data were collected using pre-measure, post-measure, and post-measure follow-up questionnaires. Presentation support had a direct impact on perceptions of the presenter as well as components of persuasion, i.e., attention, comprehension, yielding, and retention. Further, a strong positive relationship existed between enhanced perceptions of the presenter and attention and yielding.

  12. A computational imaging target specific detectivity metric

    NASA Astrophysics Data System (ADS)

    Preece, Bradley L.; Nehmetallah, George

    2017-05-01

    Due to the large quantity of low-cost, high-speed computational processing available today, computational imaging (CI) systems are expected to play a major role in next-generation multifunctional cameras. The purpose of this work is to quantify the performance of these CI systems in a standardized manner. Due to the diversity of CI system designs that are available today or proposed for the near future, there are significant challenges in modeling and calculating a standardized detection signal-to-noise ratio (SNR) to measure the performance of these systems. In this paper, we develop a path forward for a standardized detectivity metric for CI systems. The detectivity metric is designed to evaluate the performance of a CI system searching for a specific known target or signal of interest, and is defined as the optimal linear matched filter SNR, similar to the Hotelling SNR, calculated in computational space with special considerations for standardization. Therefore, the detectivity metric is designed to be flexible, in order to handle various types of CI systems and specific targets, while keeping the complexity and assumptions of the systems to a minimum.
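
    A minimal sketch of the optimal linear matched-filter (Hotelling-style) SNR on which the detectivity metric is based, for a known target signal in zero-mean noise with known covariance; the signal and covariance below are hypothetical:

    ```python
    import numpy as np

    def matched_filter_snr(signal, noise_cov):
        """Optimal linear detection SNR for a known target `signal` in zero-mean
        noise with covariance `noise_cov`: SNR = sqrt(s^T K^-1 s)."""
        w = np.linalg.solve(noise_cov, signal)      # matched-filter template K^-1 s
        return float(np.sqrt(signal @ w))

    # Hypothetical computational-space measurement of a CI system.
    rng = np.random.default_rng(0)
    m = 64
    s = np.zeros(m)
    s[28:36] = 0.5                                  # known target signature
    A = rng.standard_normal((m, m))
    K = A @ A.T / m + 0.1 * np.eye(m)               # positive-definite noise covariance
    print("detectivity (matched-filter SNR):", matched_filter_snr(s, K))
    ```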

  13. Investigating line- versus point-laser excitation for three-dimensional fluorescence imaging and tomography employing a trimodal imaging system

    NASA Astrophysics Data System (ADS)

    Cao, Liji; Peter, Jörg

    2013-06-01

    The adoption of axially oriented line illumination patterns for fluorescence excitation in small animals is investigated for fluorescence surface imaging (FSI) and fluorescence optical tomography (FOT). A trimodal single-photon-emission-computed-tomography/computed-tomography/optical-tomography (SPECT-CT-OT) small-animal imaging system is modified to employ point- and line-laser excitation sources. These sources can be arbitrarily positioned around the imaged object. The line source is set to illuminate the object along its entire axial extent. A comparative evaluation of point and line illumination patterns for FSI and FOT is provided using phantom as well as mouse data. Given the trimodal setup, CT data are used to guide the optical approaches by providing boundary information. Furthermore, FOT results are also compared to SPECT. Results show that line-laser illumination yields a larger axial field of view (FOV) in FSI mode, hence faster data acquisition, and practically acceptable FOT reconstruction throughout the whole animal. Also, superimposed SPECT and FOT data provide additional information on similarities as well as differences in the distribution and uptake of both probe types. Fused CT data further enhance the anatomical localization of the tracer distribution in vivo. The feasibility of line-laser excitation for three-dimensional fluorescence imaging and tomography is demonstrated to motivate further research, not with the intention of replacing one illumination scheme by the other.

  14. Software Method for Computed Tomography Cylinder Data Unwrapping, Re-slicing, and Analysis

    NASA Technical Reports Server (NTRS)

    Roth, Don J.

    2013-01-01

    A software method has been developed that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography (CT). This method involves unwrapping and re-slicing data so that the CT data from the cylindrical object can be viewed as a series of 2D sheets (or flattened "onion skins") in addition to a series of top-view slices and a 3D volume rendering. The advantages of viewing the data in this fashion are as follows: (1) the use of standard and specialized image processing and analysis methods is facilitated by having 2D array data rather than a volume rendering; (2) accurate lateral dimensional analysis of flaws is possible in the unwrapped sheets, unlike in a volume rendering; (3) flaws in the part jump out at the inspector with the proper contrast expansion settings in the unwrapped sheets; and (4) it is much easier for the inspector to locate flaws in the unwrapped sheets than in top-view slices for very thin cylinders. The method is fully automated and requires no input from the user except the proper voxel dimension from the CT experiment and the wall thickness of the part. The software is available in 32-bit and 64-bit versions, and can be used with binary data (8- and 16-bit) and BMP-type CT image sets. The software has memory (RAM) and hard-drive based modes. The advantage of the (64-bit) RAM-based mode is speed (and it is very practical for users of 64-bit Windows operating systems and computers having 16 GB or more RAM). The advantage of the hard-drive based analysis is that one can work with essentially unlimited-sized data sets. Separate windows are spawned for the unwrapped/re-sliced data view and any interactive image processing capability. Individual unwrapped images and unwrapped image series can be saved in common image formats. More information is available at http://www.grc.nasa.gov/WWW/OptInstr/NDE_CT_CylinderUnwrapper.html.
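
    A minimal sketch of the unwrapping step, assuming the cylinder axis is aligned with the z axis of the CT stack and the axis position is known (function names and parameters are illustrative, not the NASA tool's interface); one fixed-radius "onion skin" sheet is extracted by interpolating the volume over cylindrical coordinates:

        import numpy as np
        from scipy.ndimage import map_coordinates

        def unwrap_cylinder(volume, center_yx, radius, n_theta=720):
            """Sample one unwrapped sheet at a fixed radius from a CT volume.

            volume    : 3-D array indexed as (z, y, x), in voxel units
            center_yx : (cy, cx) position of the cylinder axis in each slice
            radius    : sampling radius in voxels
            Returns a 2-D unwrapped sheet of shape (n_z, n_theta).
            """
            nz = volume.shape[0]
            theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
            cy, cx = center_yx
            ys = cy + radius * np.sin(theta)
            xs = cx + radius * np.cos(theta)
            z_idx, t_idx = np.meshgrid(np.arange(nz), np.arange(n_theta), indexing="ij")
            coords = np.stack([z_idx, ys[t_idx], xs[t_idx]])   # shape (3, nz, n_theta)
            return map_coordinates(volume, coords, order=1, mode="nearest")

        # Usage: unwrap a few radii of a synthetic cylinder into flat sheets
        vol = np.random.rand(50, 128, 128).astype(np.float32)
        sheets = [unwrap_cylinder(vol, (64, 64), r) for r in (40, 45, 50)]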

  15. A Medical Image Backup Architecture Based on a NoSQL Database and Cloud Computing Services.

    PubMed

    Santos Simões de Almeida, Luan Henrique; Costa Oliveira, Marcelo

    2015-01-01

    The use of digital systems for storing medical images generates a huge volume of data. Digital images are commonly stored and managed on a Picture Archiving and Communication System (PACS), under the DICOM standard. However, PACS is limited because it is strongly dependent on the server's physical space. Alternatively, cloud computing arises as an extensive, low-cost, and reconfigurable resource. However, medical images contain patient information that cannot be made available in a public cloud. Therefore, a mechanism to anonymize these images is needed. This poster presents a solution to this issue by taking digital images from PACS, converting the information contained in each image file into a NoSQL database, and using cloud computing to store the digital images.
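
    A minimal sketch of the anonymize-and-index idea, assuming DICOM files are read with pydicom; the tag list and the JSON document layout are illustrative assumptions, not the poster's actual schema:

        import json
        import pydicom

        # Tags treated here as patient-identifying; the anonymization profile actually
        # used by the authors is not specified, so this list is only illustrative.
        IDENTIFYING_TAGS = ["PatientName", "PatientID", "PatientBirthDate", "PatientAddress"]

        def anonymize_to_document(dicom_path):
            ds = pydicom.dcmread(dicom_path)
            # Keep the identifying fields aside (e.g. for a local mapping database) ...
            private = {tag: str(getattr(ds, tag, "")) for tag in IDENTIFYING_TAGS}
            # ... and blank them in the image file before it is sent to the public cloud.
            for tag in IDENTIFYING_TAGS:
                if hasattr(ds, tag):
                    setattr(ds, tag, "")
            ds.save_as(dicom_path.replace(".dcm", "_anon.dcm"))
            # NoSQL-style JSON document that indexes the anonymized image
            document = {
                "sop_instance_uid": str(ds.SOPInstanceUID),
                "modality": str(getattr(ds, "Modality", "")),
                "study_date": str(getattr(ds, "StudyDate", "")),
                "private_metadata": private,   # stored only in the local NoSQL database
            }
            return json.dumps(document)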

  16. A review of automated image understanding within 3D baggage computed tomography security screening.

    PubMed

    Mouton, Andre; Breckon, Toby P

    2015-01-01

    Baggage inspection is the principal safeguard against the transportation of prohibited and potentially dangerous materials at airport security checkpoints. Although traditionally performed by 2D X-ray based scanning, increasingly stringent security regulations have led to a growing demand for more advanced imaging technologies. The role of X-ray Computed Tomography is thus rapidly expanding beyond the traditional materials-based detection of explosives. The development of computer vision and image processing techniques for the automated understanding of 3D baggage-CT imagery is, however, complicated by poor image resolution, image clutter, and high levels of noise and artefacts. We discuss the recent and most pertinent advancements and identify topics for future research within the challenging domain of automated image understanding for baggage security screening CT.

  17. SU-F-I-43: A Software-Based Statistical Method to Compute Low Contrast Detectability in Computed Tomography Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacko, M; Aldoohan, S

    Purpose: The low contrast detectability (LCD) of a CT scanner is its ability to detect and display faint lesions. The current approach to quantifying LCD uses vendor-specific methods and phantoms, typically by subjectively observing the smallest object visible at a contrast level above the phantom background. However, this approach does not yield clinically applicable values for LCD. The current study proposes a statistical LCD metric using software tools not only to assess scanner performance, but also to quantify the key factors affecting LCD. This approach was developed using uniform QC phantoms, and its applicability was then extended under simulated clinical conditions. Methods: MATLAB software was developed to compute LCD using a uniform image of a QC phantom. For a given virtual object size, the software randomly samples the image within a selected area, and uses statistical analysis based on Student's t-distribution to compute the LCD as the minimal Hounsfield Units that can be distinguished from the background at the 95% confidence level. Its validity was assessed by comparison with the behavior of a known QC phantom under various scan protocols and with a tissue-mimicking phantom. The contributions of beam quality and scattered radiation to the computed LCD were quantified by using various external beam-hardening filters and phantom lengths. Results: As expected, the LCD was inversely related to object size under all scan conditions. The type of image reconstruction kernel filter and the tissue/organ type strongly influenced the background noise characteristics and, therefore, the computed LCD for the associated image. Conclusion: The proposed metric and its associated software tools are vendor-independent and can be used to analyze the LCD performance of any scanner. Furthermore, the method employed can be used in conjunction with the relationships established in this study between LCD and tissue type to extend these concepts to patients' clinical
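
    The sampling-and-t-statistic idea can be illustrated with a short sketch (NumPy/SciPy rather than the authors' MATLAB; the ROI sampling scheme and confidence handling are assumptions made for illustration):

        import numpy as np
        from scipy import stats

        def low_contrast_detectability(image, object_size_px, n_samples=500, confidence=0.95):
            """Statistical LCD estimate from a uniform QC-phantom image (sketch).

            Randomly samples square ROIs of the virtual object size, takes each ROI's
            mean HU, and returns the HU difference distinguishable from the background
            at the requested confidence level using Student's t-distribution.
            """
            rng = np.random.default_rng(1)
            h, w = image.shape
            means = []
            for _ in range(n_samples):
                r = rng.integers(0, h - object_size_px)
                c = rng.integers(0, w - object_size_px)
                means.append(image[r:r + object_size_px, c:c + object_size_px].mean())
            means = np.asarray(means)
            t_crit = stats.t.ppf(confidence, df=n_samples - 1)
            return t_crit * means.std(ddof=1)

        # Usage on a synthetic uniform phantom (0 HU background, 5 HU noise)
        phantom = np.random.normal(0.0, 5.0, size=(512, 512))
        print("LCD for a 6-pixel object ~", low_contrast_detectability(phantom, 6), "HU")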

  18. A new strategic neurosurgical planning tool for brainstem cavernous malformations using interactive computer graphics with multimodal fusion images.

    PubMed

    Kin, Taichi; Nakatomi, Hirofumi; Shojima, Masaaki; Tanaka, Minoru; Ino, Kenji; Mori, Harushi; Kunimatsu, Akira; Oyama, Hiroshi; Saito, Nobuhito

    2012-07-01

    In this study, the authors used preoperative simulation employing 3D computer graphics (interactive computer graphics) to fuse all imaging data for brainstem cavernous malformations. The authors evaluated whether interactive computer graphics or 2D imaging correlated better with the actual operative field, particularly in identifying a developmental venous anomaly (DVA). The study population consisted of 10 patients scheduled for surgical treatment of brainstem cavernous malformations. Data from preoperative imaging (MRI, CT, and 3D rotational angiography) were automatically fused using a normalized mutual information method, and then reconstructed by a hybrid method combining surface rendering and volume rendering methods. With surface rendering, multimodality and multithreshold techniques for 1 tissue were applied. The completed interactive computer graphics were used for simulation of surgical approaches and assumed surgical fields. Preoperative diagnostic rates for a DVA associated with brainstem cavernous malformation were compared between conventional 2D imaging and interactive computer graphics employing receiver operating characteristic (ROC) analysis. The time required for reconstruction of 3D images was 3-6 hours for interactive computer graphics. Observation in interactive mode required approximately 15 minutes. Detailed anatomical information for operative procedures, from the craniotomy to microsurgical operations, could be visualized and simulated three-dimensionally as 1 computer graphic using interactive computer graphics. Virtual surgical views were consistent with actual operative views. This technique was very useful for examining various surgical approaches. Mean (±SEM) area under the ROC curve for rate of DVA diagnosis was significantly better for interactive computer graphics (1.000±0.000) than for 2D imaging (0.766±0.091; p<0.001, Mann-Whitney U-test). The authors report a new method for automatic registration of preoperative imaging data

  19. Image Quality and Radiation Dose for Prospectively Triggered Coronary CT Angiography: 128-Slice Single-Source CT versus First-Generation 64-Slice Dual-Source CT

    NASA Astrophysics Data System (ADS)

    Gu, Jin; Shi, He-Shui; Han, Ping; Yu, Jie; Ma, Gui-Na; Wu, Sheng

    2016-10-01

    This study sought to compare the image quality and radiation dose of coronary computed tomography angiography (CCTA) from prospectively triggered 128-slice CT (128-MSCT) versus dual-source 64-slice CT (DSCT). The study was approved by the Medical Ethics Committee at Tongji Medical College of Huazhong University of Science and Technology. Eighty consecutive patients with stable heart rates lower than 70 bpm were enrolled. Forty patients were scanned with 128-MSCT, and the other 40 patients were scanned with DSCT. Two radiologists independently assessed the image quality in segments (diameter >1 mm) according to a three-point scale (1: excellent; 2: moderate; 3: insufficient). The CCTA radiation dose was calculated. Eighty patients with 526 segments in the 128-MSCT group and 544 segments in the DSCT group were evaluated. The image quality 1, 2 and 3 scores were 91.6%, 6.9% and 1.5%, respectively, for the 128-MSCT group and 97.6%, 1.7% and 0.7%, respectively, for the DSCT group, and there was a statistically significant inter-group difference (P ≤ 0.001). The effective doses were 3.0 mSv in the 128-MSCT group and 4.5 mSv in the DSCT group (P ≤ 0.001). Compared with DSCT, CCTA with prospectively triggered 128-MSCT had adequate image quality and a 33.3% lower radiation dose.

  20. Children's accuracy of portion size estimation using digital food images: effects of interface design and size of image on computer screen.

    PubMed

    Baranowski, Tom; Baranowski, Janice C; Watson, Kathleen B; Martin, Shelby; Beltran, Alicia; Islam, Noemi; Dadabhoy, Hafza; Adame, Su-heyla; Cullen, Karen; Thompson, Debbe; Buday, Richard; Subar, Amy

    2011-03-01

    To test the effect of image size and presence of size cues on the accuracy of portion size estimation by children. Children were randomly assigned to seeing images with or without food size cues (utensils and checked tablecloth) and were presented with sixteen food models (foods commonly eaten by children) in varying portion sizes, one at a time. They estimated each food model's portion size by selecting a digital food image. The same food images were presented in two ways: (i) as small, graduated portion size images all on one screen or (ii) by scrolling across large, graduated portion size images, one per sequential screen. Laboratory-based with computer and food models. Volunteer multi-ethnic sample of 120 children, equally distributed by gender and ages (8 to 13 years) in 2008-2009. Average percentage of correctly classified foods was 60·3 %. There were no differences in accuracy by any design factor or demographic characteristic. Multiple small pictures on the screen at once took half the time to estimate portion size compared with scrolling through large pictures. Larger pictures had more overestimation of size. Multiple images of successively larger portion sizes of a food on one computer screen facilitated quicker portion size responses with no decrease in accuracy. This is the method of choice for portion size estimation on a computer.

  1. Automated breast segmentation in ultrasound computer tomography SAFT images

    NASA Astrophysics Data System (ADS)

    Hopp, T.; You, W.; Zapf, M.; Tan, W. Y.; Gemmeke, H.; Ruiter, N. V.

    2017-03-01

    Ultrasound Computer Tomography (USCT) is a promising new imaging system for breast cancer diagnosis. An essential step before further processing is to remove the water background from the reconstructed images. In this paper we present a fully-automated image segmentation method based on three-dimensional active contours. The active contour method is extended by applying gradient vector flow and encoding the USCT aperture characteristics as additional weighting terms. A surface detection algorithm based on a ray model is developed to initialize the active contour, which is iteratively deformed to capture the breast outline in USCT reflection images. The evaluation with synthetic data showed that the method is able to cope with noisy images, and is not influenced by the position of the breast and the presence of scattering objects within the breast. The proposed method was applied to 14 in-vivo images resulting in an average surface deviation from a manual segmentation of 2.7 mm. We conclude that automated segmentation of USCT reflection images is feasible and produces results comparable to a manual segmentation. By applying the proposed method, reproducible segmentation results can be obtained without manual interaction by an expert.
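
    A rough illustration of the ray-model initialization step, assuming a 2-D reflection slice with a known aperture center; the fixed threshold and the outside-in scan are simplifying assumptions, not the paper's exact surface detector:

        import numpy as np

        def ray_surface_init(image, center, n_rays=360, threshold=0.2, max_radius=None):
            """Rough breast-surface initialization along radial rays (sketch).

            From the given center, each ray is scanned from the outside inward and the
            first sample whose reflectivity exceeds `threshold` is taken as a surface
            point; the resulting point set can seed an active contour.
            """
            h, w = image.shape
            cy, cx = center
            if max_radius is None:
                max_radius = min(h, w) // 2 - 1
            radii = np.arange(max_radius, 0, -1)            # outside-in scan
            points = []
            for ang in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
                ys = np.clip((cy + radii * np.sin(ang)).astype(int), 0, h - 1)
                xs = np.clip((cx + radii * np.cos(ang)).astype(int), 0, w - 1)
                hits = np.nonzero(image[ys, xs] > threshold)[0]
                if hits.size:
                    points.append((ys[hits[0]], xs[hits[0]]))
            return np.array(points)                         # (n_points, 2) contour seed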

  2. Screen-detected versus interval cancers: Effect of imaging modality and breast density in the Flemish Breast Cancer Screening Programme.

    PubMed

    Timmermans, Lore; Bleyen, Luc; Bacher, Klaus; Van Herck, Koen; Lemmens, Kim; Van Ongeval, Chantal; Van Steen, Andre; Martens, Patrick; De Brabander, Isabel; Goossens, Mathieu; Thierens, Hubert

    2017-09-01

    To investigate if direct radiography (DR) performs better than screen-film mammography (SF) and computed radiography (CR) in dense breasts in a decentralized organised Breast Cancer Screening Programme. To this end, screen-detected versus interval cancers were studied in different BI-RADS density classes for these imaging modalities. The study cohort consisted of 351,532 women who participated in the Flemish Breast Cancer Screening Programme in 2009 and 2010. Information on screen-detected and interval cancers, breast density scores of radiologist second readers, and imaging modality was obtained by linkage of the databases of the Centre of Cancer Detection and the Belgian Cancer Registry. Overall, 67% of occurring breast cancers are screen detected and 33% are interval cancers, with DR performing better than SF and CR. The interval cancer rate increases gradually with breast density, regardless of modality. In the high-density class, the interval cancer rate exceeds the cancer detection rate for SF and CR, but not for DR. DR is superior to SF and CR with respect to cancer detection rates for high-density breasts. To reduce the high interval cancer rate in dense breasts, use of an additional imaging technique in screening can be taken into consideration. • Interval cancer rate increases gradually with breast density, regardless of modality. • Cancer detection rate in high-density breasts is superior in DR. • IC rate exceeds CDR for SF and CR in high-density breasts. • DR performs better in high-density breasts for third readings and false-positives.

  3. High performance computing environment for multidimensional image analysis

    PubMed Central

    Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo

    2007-01-01

    Background: The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. Results: We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478× speedup. Conclusion: Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets. PMID:17634099

  4. High performance computing environment for multidimensional image analysis.

    PubMed

    Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo

    2007-07-10

    The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478× speedup. Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets.
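
    The nearest-neighbor domain decomposition described in the two entries above can be mimicked on a single multi-core machine; the sketch below is an illustrative stand-in (not the Blue Gene/L implementation) that splits the volume into z-slabs with one-voxel halos and median-filters them in parallel:

        import numpy as np
        from multiprocessing import Pool
        from scipy.ndimage import median_filter

        FILTER_SIZE = 3  # 3x3x3 median, so a 1-voxel halo between slabs is enough

        def _filter_slab(args):
            slab, lo_pad, hi_pad = args
            out = median_filter(slab, size=FILTER_SIZE)
            # Drop the halo voxels borrowed from neighboring slabs
            stop = out.shape[0] - hi_pad if hi_pad else None
            return out[lo_pad:stop]

        def parallel_median_3d(volume, n_workers=4):
            """Domain-decomposed 3-D median filter over z-slabs."""
            halo = FILTER_SIZE // 2
            edges = np.linspace(0, volume.shape[0], n_workers + 1, dtype=int)
            tasks = []
            for lo, hi in zip(edges[:-1], edges[1:]):
                lo_pad = min(halo, lo)
                hi_pad = min(halo, volume.shape[0] - hi)
                tasks.append((volume[lo - lo_pad:hi + hi_pad], lo_pad, hi_pad))
            with Pool(n_workers) as pool:
                return np.concatenate(pool.map(_filter_slab, tasks), axis=0)

        if __name__ == "__main__":
            vol = np.random.rand(64, 256, 256).astype(np.float32)
            filtered = parallel_median_3d(vol)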

  5. Resolution versus speckle relative to geologic interpretability of spaceborne radar images - A survey of user preference

    NASA Technical Reports Server (NTRS)

    Ford, J. P.

    1982-01-01

    A survey conducted to evaluate user preference for resolution versus speckle relative to the geologic interpretability of spaceborne radar images is discussed. Thirteen different resolution/looks combinations are simulated from Seasat synthetic-aperture radar data of each of three test sites. The SAR images were distributed with questionnaires for analysis to 85 earth scientists. The relative discriminability of geologic targets at each test site for each simulation of resolution and speckle on the images is determined on the basis of a survey of the evaluations. A large majority of the analysts respond that for most targets a two-look image at the highest simulated resolution is best. For a constant data rate, a higher resolution is more important for target discrimination than a higher number of looks. It is noted that sand dunes require more looks than other geologic targets. At all resolutions, multiple-look images are preferred over the corresponding single-look image. In general, the number of multiple looks that is optimal for discriminating geologic targets is inversely related to the simulated resolution.

  6. Efficient scatter model for simulation of ultrasound images from computed tomography data

    NASA Astrophysics Data System (ADS)

    D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.

    2015-12-01

    Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low cost training for healthcare professionals, there is a growing interest in the use of this technology and in the development of high fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run either on notebooks or desktops using low cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. This simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The generation of scattering maps was revised for improved computational efficiency. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe quality and performance metrics to validate these results, with a performance of up to 55 fps achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
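
    A minimal sketch of the simplified scatter model named above (multiplicative noise followed by convolution with a point spread function); the Gaussian PSF and noise parameters are illustrative assumptions, not the paper's transducer-tailored PSFs:

        import numpy as np
        from scipy.signal import fftconvolve

        def simulate_speckle(echogenicity_map, sigma_axial=1.0, sigma_lateral=3.0, seed=0):
            """Simplified scatter model: multiplicative noise convolved with a PSF.

            echogenicity_map : 2-D reflectivity map derived from the CT data
            """
            rng = np.random.default_rng(seed)
            noise = rng.normal(1.0, 0.5, size=echogenicity_map.shape)   # multiplicative term
            scatterers = echogenicity_map * noise
            ax = np.arange(-6, 7)
            psf = np.exp(-ax[:, None] ** 2 / (2 * sigma_axial ** 2)
                         - ax[None, :] ** 2 / (2 * sigma_lateral ** 2))
            psf /= psf.sum()
            return fftconvolve(scatterers, psf, mode="same")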

  7. A self-synchronized high speed computational ghost imaging system: A leap towards dynamic capturing

    NASA Astrophysics Data System (ADS)

    Suo, Jinli; Bian, Liheng; Xiao, Yudong; Wang, Yongjin; Zhang, Lei; Dai, Qionghai

    2015-11-01

    High-quality computational ghost imaging needs to acquire a large number of correlated measurements between the to-be-imaged scene and different reference patterns; ultra-high-speed data acquisition is therefore of crucial importance in real applications. To raise the acquisition efficiency, this paper reports a high-speed computational ghost imaging system using a 20 kHz spatial light modulator together with a 2 MHz photodiode. Technically, the synchronization between such high-frequency illumination and the bucket detector needs nanosecond trigger precision, so the development of a synchronization module is quite challenging. To handle this problem, we propose a simple and effective computational self-synchronization scheme by building a general mathematical model and introducing a high-precision synchronization technique. The resulting acquisition efficiency is around 14 times faster than the state of the art, and takes an important step towards ghost imaging of dynamic scenes. Besides, the proposed scheme is a general approach with high flexibility for readily incorporating other illuminators and detectors.
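
    For readers unfamiliar with the correlation step that makes the measurement count so large, here is a minimal sketch of a conventional computational ghost-imaging reconstruction over simulated patterns and bucket values (the generic algorithm, not the paper's self-synchronization scheme):

        import numpy as np

        def ghost_image(patterns, bucket, shape):
            """Correlation-based computational ghost-imaging reconstruction.

            patterns : (n_meas, n_pixels) illumination patterns shown on the modulator
            bucket   : (n_meas,) single-pixel photodiode measurements
            """
            recon = patterns.T @ (bucket - bucket.mean()) / bucket.size
            return recon.reshape(shape)

        # Simulated acquisition: random binary patterns and a noiseless bucket detector
        rng = np.random.default_rng(3)
        shape = (16, 16)
        scene = np.zeros(shape)
        scene[5:11, 7:9] = 1.0                                  # to-be-imaged object
        patterns = rng.integers(0, 2, size=(4000, scene.size)).astype(float)
        bucket = patterns @ scene.ravel()                       # correlated measurements
        recon = ghost_image(patterns, bucket, shape)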

  8. Management of Liver Cancer Argon-helium Knife Therapy with Functional Computer Tomography Perfusion Imaging.

    PubMed

    Wang, Hongbo; Shu, Shengjie; Li, Jinping; Jiang, Huijie

    2016-02-01

    The objective of this study was to observe the change in blood perfusion of liver cancer following argon-helium knife treatment with functional computed tomography perfusion imaging. Twenty-seven patients with primary liver cancer treated with the argon-helium knife were included in this study. Plain computed tomography (CT) and computed tomography perfusion (CTP) imaging were conducted in all patients before and after treatment. Perfusion parameters including blood flow, blood volume, hepatic artery perfusion fraction, hepatic artery perfusion, and hepatic portal venous perfusion were used for evaluating the therapeutic effect. All parameters in liver cancer were significantly decreased after argon-helium knife treatment (p < 0.05 for all). A significant decrease in hepatic artery perfusion was also observed in pericancerous liver tissue, but the other parameters remained constant. CT perfusion imaging is able to detect the decrease in blood perfusion of liver cancer after argon-helium knife therapy. Therefore, CTP imaging could play an important role in liver cancer management following argon-helium knife therapy. © The Author(s) 2014.

  9. A Comparison of Accuracy of Image- versus Hardware-based Tracking Technologies in 3D Fusion in Aortic Endografting.

    PubMed

    Rolls, A E; Maurel, B; Davis, M; Constantinou, J; Hamilton, G; Mastracci, T M

    2016-09-01

    Fusion of three-dimensional (3D) computed tomography and intraoperative two-dimensional imaging in endovascular surgery relies on manual rigid co-registration of bony landmarks and tracking of hardware to provide a 3D overlay (hardware-based tracking, HWT). An alternative technique (image-based tracking, IMT) uses image recognition to register and place the fusion mask. We present preliminary experience with an agnostic fusion technology that uses IMT, with the aim of comparing the accuracy of overlay for this technology with HWT. Data were collected prospectively for 12 patients. All devices were deployed using both IMT and HWT fusion assistance concurrently. Postoperative analysis of both systems was performed by three blinded expert observers, from selected time-points during the procedures, using the displacement of fusion rings, the overlay of vascular markings, and the true ostia of the renal arteries. The mean overlay error and the deviation from the mean error were derived using image analysis software. Comparison of the mean overlay error was made between IMT and HWT. The validity of the point-picking technique was assessed. IMT was successful in all of the first 12 cases, whereas technical learning-curve challenges thwarted HWT in four cases. When independent operators assessed the degree of accuracy of the overlay, the median error for IMT was 3.9 mm (IQR 2.89-6.24, max 9.5) versus 8.64 mm (IQR 6.1-16.8, max 24.5) for HWT (p = .001). Variance per observer was 0.69 mm² and the 95% limit of agreement ±1.63. In this preliminary study, the magnitude of displacement error from the "true anatomy" during image overlay was smaller for IMT than for HWT. This confirms that ongoing manual re-registration, as recommended by the manufacturer, should be performed for HWT systems to maintain accuracy. The error in position of the fusion markers for IMT was consistent, and thus may be considered predictable. Copyright © 2016 European Society for Vascular Surgery. Published by

  10. Computer assessment of atherosclerosis from angiographic images

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Blankenhorn, D. H.; Brooks, S. H.; Crawford, D. W.; Cashin, W. L.

    1982-01-01

    A computer method for detection and quantification of atherosclerosis from angiograms has been developed and used to measure lesion change in human clinical trials. The technique involves tracking the vessel edges and measuring individual lesions as well as the overall irregularity of the arterial image. Application of the technique to conventional arterial-injection femoral and coronary angiograms is outlined, and an experimental study to extend the technique to the analysis of intravenous angiograms of the carotid and coronary arteries is described.

  11. Use of Noncontrast Computed Tomography and Computed Tomographic Perfusion in Predicting Intracerebral Hemorrhage After Intravenous Alteplase Therapy.

    PubMed

    Batchelor, Connor; Pordeli, Pooneh; d'Esterre, Christopher D; Najm, Mohamed; Al-Ajlan, Fahad S; Boesen, Mari E; McDougall, Connor; Hur, Lisa; Fainardi, Enrico; Shankar, Jai Jai Shiva; Rubiera, Marta; Khaw, Alexander V; Hill, Michael D; Demchuk, Andrew M; Sajobi, Tolulope T; Goyal, Mayank; Lee, Ting-Yim; Aviv, Richard I; Menon, Bijoy K

    2017-06-01

    Intracerebral hemorrhage is a feared complication of intravenous alteplase therapy in patients with acute ischemic stroke. We explore the use of multimodal computed tomography in predicting this complication. All patients were administered intravenous alteplase with/without intra-arterial therapy. An age- and sex-matched case-control design with classic and conditional logistic regression techniques was chosen for analyses. Outcome was parenchymal hemorrhage on 24- to 48-hour imaging. Exposure variables were imaging (noncontrast computed tomography hypoattenuation degree, relative volume of very low cerebral blood volume, relative volume of cerebral blood flow ≤7 mL/min/100 g, relative volume of Tmax ≥16 s with all volumes standardized to z-axis coverage, mean permeability surface area product values within the Tmax ≥8 s volume, and mean permeability surface area product values within the ipsilesional hemisphere) and clinical variables (NIHSS [National Institutes of Health Stroke Scale], onset to imaging time, baseline systolic blood pressure, blood glucose, serum creatinine, treatment type, and reperfusion status). One hundred eighteen subjects (22 patients with parenchymal hemorrhage versus 96 without, median baseline NIHSS score of 15) were included in the final analysis. In multivariable regression, noncontrast computed tomography hypoattenuation grade (P<0.006) and computed tomographic perfusion white matter relative volume of very low cerebral blood volume (P=0.04) were the only significant variables associated with parenchymal hemorrhage on follow-up imaging (area under the curve, 0.73; 95% confidence interval, 0.63-0.83). Interrater reliability for noncontrast computed tomography hypoattenuation grade was moderate (κ=0.6). Baseline hypoattenuation on noncontrast computed tomography and very low cerebral blood volume on computed tomographic perfusion are associated with development of parenchymal hemorrhage in patients with acute ischemic

  12. History of imaging in orthodontics from Broadbent to cone-beam computed tomography.

    PubMed

    Hans, Mark G; Palomo, J Martin; Valiathan, Manish

    2015-12-01

    The history of imaging and orthodontics is a story of technology informing biology. Advances in imaging changed our thinking as our understanding of craniofacial growth and the impact of orthodontic treatment deepened. This article traces the history of imaging in orthodontics from the invention of the cephalometer by B. Holly Broadbent in 1930 to the introduction of low-cost, low-radiation-dose cone-beam computed tomography imaging in 2015. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  13. Ultrasonic computed tomography imaging of iron oxide nanoparticles

    NASA Astrophysics Data System (ADS)

    Perlman, Or; Azhari, Haim

    2017-02-01

    Iron oxide nanoparticles (IONPs) are becoming increasingly used and intensively investigated in the field of medical imaging. They are currently FDA approved for magnetic resonance imaging (MRI), and it would be highly desirable to visualize them by ultrasound as well. Previous reports using the conventional ultrasound B-scan (pulse-echo) imaging technique have shown very limited detectability of these particles. The aim of this study is to explore the feasibility of imaging IONPs using the through-transmission ultrasound methodology and to demonstrate their detectability using ultrasonic computed tomography (UCT). Commercially available IONPs were acoustically analysed to quantify their effect on the speed of sound (SOS) and acoustic attenuation as a function of concentration. Next, through-transmission projection and UCT imaging were performed on a breast-mimicking phantom and on an ex vivo tissue model, into which IONPs were injected. Finally, an MRI scan was performed to verify that the same particles examined in the ultrasound experiment can be imaged by magnetic resonance, using the same clinically relevant concentrations. The results showed a consistent concentration-dependent speed of sound increase (a 1.86 m/s rise per 100 µg/ml of IONPs). Imaging based on this property showed a substantial contrast-to-noise ratio improvement (up to 5-fold, p < 0.01). The SOS-related effect generated a well discernible image contrast and allowed detection of the particles' existence and location, in both raster-scan projection and UCT imaging. Conversely, no significant change in the acoustic attenuation coefficient was noted. Based on these findings, it is concluded that IONPs can be used as an effective SOS-based contrast agent, potentially useful for ultrasonic breast imaging. Furthermore, the particles offer the capacity to significantly enhance diagnostic accuracy using multimodal MRI-ultrasound imaging capabilities.

  14. Stress Computed Tomography Myocardial Perfusion Imaging: A New Topic in Cardiology.

    PubMed

    Seitun, Sara; Castiglione Morelli, Margherita; Budaj, Irilda; Boccalini, Sara; Galletto Pregliasco, Athena; Valbusa, Alberto; Cademartiri, Filippo; Ferro, Carlo

    2016-02-01

    Since its introduction about 15 years ago, coronary computed tomography angiography has become today the most accurate clinical instrument for noninvasive assessment of coronary atherosclerosis. Important technical developments have led to a continuous stream of new clinical applications together with a significant reduction in radiation dose exposure. Latest generation computed tomography scanners (≥ 64 slices) allow the possibility of performing static or dynamic perfusion imaging during stress by using coronary vasodilator agents (adenosine, dipyridamole, or regadenoson), combining both functional and anatomical information in the same examination. In this article, the emerging role and state-of-the-art of myocardial computed tomography perfusion imaging are reviewed and are illustrated by clinical cases from our experience with a second-generation dual-source 128-slice scanner (Somatom Definition Flash, Siemens; Erlangen, Germany). Technical aspects, data analysis, diagnostic accuracy, radiation dose and future prospects are reviewed. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  15. Bedside imaging of intracranial hemorrhage in the neonate using light: comparison with ultrasound, computed tomography, and magnetic resonance imaging.

    PubMed

    Hintz, S R; Cheong, W F; van Houten, J P; Stevenson, D K; Benaron, D A

    1999-01-01

    Medical optical imaging (MOI) uses light emitted into opaque tissues to determine the interior structure. Previous reports detailed a portable time-of-flight and absorbance system emitting pulses of near infrared light into tissues and measuring the emerging light. Using this system, optical images of phantoms, whole rats, and pathologic neonatal brain specimens have been tomographically reconstructed. We have now modified the existing instrumentation into a clinically relevant headband-based system to be used for optical imaging of structure in the neonatal brain at the bedside. Eight medical optical imaging studies in the neonatal intensive care unit were performed in a blinded clinical comparison of optical images with ultrasound, computed tomography, and magnetic resonance imaging. Optical images were interpreted as correct in six of eight cases, with one error attributed to the age of the clot, and one small clot not seen. In addition, one disagreement with ultrasound, not reported as an error, was found to be the result of a mislabeled ultrasound report rather than because of an inaccurate optical scan. Optical scan correlated well with computed tomography and magnetic resonance imaging findings in one patient. We conclude that light-based imaging using a portable time-of-flight system is feasible and represents an important new noninvasive diagnostic technique, with potential for continuous monitoring of critically ill neonates at risk for intraventricular hemorrhage or stroke. Further studies are now underway to further investigate the functional imaging capabilities of this new diagnostic tool.

  16. Clinical brain MR imaging prescriptions in Talairach space: technologist- and computer-driven methods.

    PubMed

    Weiss, Kenneth L; Pan, Hai; Storrs, Judd; Strub, William; Weiss, Jane L; Jia, Li; Eldevik, O Petter

    2003-05-01

    Variability in patient head positioning may yield substantial interstudy image variance in the clinical setting. We describe and test three-step technologist-driven and computer-automated algorithms designed to image the brain in a standard reference system and reduce variance. Triple oblique axial images obtained parallel to the Talairach anterior commissure (AC)-posterior commissure (PC) plane were reviewed in a prospective analysis of 126 consecutive patients. Requisite roll, yaw, and pitch corrections, determined independently by three authors and subsequently by consensus, were compared with the technologists' actual graphical prescriptions and those generated by a novel computer-automated three-step (CATS) program. Automated pitch determinations generated with Statistical Parametric Mapping '99 (SPM'99) were also compared. Requisite pitch correction (15.2° ± 10.2°) far exceeded that for roll (-0.6° ± 3.7°) and yaw (-0.9° ± 4.7°) in terms of magnitude and variance (P < .001). Technologist and computer-generated prescriptions substantially reduced interpatient image variance with regard to roll (3.4° and 3.9° vs 13.5°), yaw (0.6° and 2.5° vs 22.3°), and pitch (28.6°, 18.5° with CATS, and 59.3° with SPM'99 vs 104°). CATS performed worse than the technologists in yaw prescription, and was equivalent in roll and pitch prescriptions. Talairach prescriptions better approximated standard CT canthomeatal angulation (9° vs 24°) and provided more efficient brain coverage than routine axial imaging. Brain MR prescriptions corrected for direct roll, yaw, and Talairach AC-PC pitch can be readily achieved by trained technologists or automated computer algorithms. This ability will substantially reduce interpatient variance, allow better approximation of standard CT angulation, and yield more efficient brain coverage than that of

  17. Single-Photon Emission Computed Tomography/Computed Tomography Imaging in a Rabbit Model of Emphysema Reveals Ongoing Apoptosis In Vivo

    PubMed Central

    Goldklang, Monica P.; Tekabe, Yared; Zelonina, Tina; Trischler, Jordis; Xiao, Rui; Stearns, Kyle; Romanov, Alexander; Muzio, Valeria; Shiomi, Takayuki; Johnson, Lynne L.

    2016-01-01

    Evaluation of lung disease is limited by the inability to visualize ongoing pathological processes. Molecular imaging that targets cellular processes related to disease pathogenesis has the potential to assess disease activity over time to allow intervention before lung destruction. Because apoptosis is a critical component of lung damage in emphysema, a functional imaging approach was taken to determine if targeting apoptosis in a smoke exposure model would allow the quantification of early lung damage in vivo. Rabbits were exposed to cigarette smoke for 4 or 16 weeks and underwent single-photon emission computed tomography/computed tomography scanning using technetium-99m–rhAnnexin V-128. Imaging results were correlated with ex vivo tissue analysis to validate the presence of lung destruction and apoptosis. Lung computed tomography scans of long-term smoke–exposed rabbits exhibit anatomical similarities to human emphysema, with increased lung volumes compared with controls. Morphometry on lung tissue confirmed increased mean linear intercept and destructive index at 16 weeks of smoke exposure and compliance measurements documented physiological changes of emphysema. Tissue and lavage analysis displayed the hallmarks of smoke exposure, including increased tissue cellularity and protease activity. Technetium-99m–rhAnnexin V-128 single-photon emission computed tomography signal was increased after smoke exposure at 4 and 16 weeks, with confirmation of increased apoptosis through terminal deoxynucleotidyl transferase dUTP nick end labeling staining and increased tissue neutral sphingomyelinase activity in the tissue. These studies not only describe a novel emphysema model for use with future therapeutic applications, but, most importantly, also characterize a promising imaging modality that identifies ongoing destructive cellular processes within the lung. PMID:27483341

  18. Evaluating the diagnostic sensitivity of computed diffusion-weighted MR imaging in the detection of breast cancer.

    PubMed

    O'Flynn, Elizabeth A M; Blackledge, Matthew; Collins, David; Downey, Katherine; Doran, Simon; Patel, Hardik; Dumonteil, Sam; Mok, Wing; Leach, Martin O; Koh, Dow-Mu

    2016-07-01

    To evaluate the diagnostic sensitivity of computed diffusion-weighted (DW)-MR imaging for the detection of breast cancer. Local research ethics approval was obtained. A total of 61 women (median 48 years) underwent dynamic contrast enhanced (DCE)- and DW-MR between January 2011 and March 2012, including 27 with breast cancer on core biopsy and 34 normal cases. Standard ADC maps using all four b values (0, 350, 700, 1150) were used to generate computed DW-MR images at b = 1500 s/mm² and b = 2000 s/mm². Four image sets were read sequentially by two readers: acquired b = 1150 s/mm², computed b = 1500 s/mm² and b = 2000 s/mm², and DCE-MR at an early time point. Cancer detection was rated using a five-point scale; image quality and background suppression were rated using a four-point scale. The diagnostic sensitivity for breast cancer detection was compared using the McNemar test and inter-reader agreement with a Kappa value. Computed DW-MR resulted in higher overall diagnostic sensitivity, with b = 2000 s/mm² having a mean diagnostic sensitivity of 76% (range 49.8-93.7%) and b = 1500 s/mm² having a mean diagnostic sensitivity of 70.3% (range 32-97.7%), compared with 44.4% (range 25.5-64.7%) for acquired b = 1150 s/mm² (both p = 0.0001). Computed DW-MR images produced better image quality and background suppression (mean scores for both readers: 2.55 and 2.9 for b = 1500 s/mm²; 2.55 and 3.15 for b = 2000 s/mm², respectively) than the acquired b = 1150 s/mm² images (mean scores for both readers: 2.4 and 2.45, respectively). Computed DW-MR imaging has the potential to improve the diagnostic sensitivity of breast cancer detection compared to acquired DW-MR. J. Magn. Reson. Imaging 2016;44:130-137. © 2016 Wiley Periodicals, Inc.
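
    Computed DWI rests on the monoexponential diffusion relation, so a high-b-value image can be synthesized from an acquired b-value image and the ADC map; the sketch below states that generic relation in code (not the authors' specific processing pipeline):

        import numpy as np

        def computed_dwi(acquired_dwi, adc_map, b_acquired=1150.0, b_target=2000.0):
            """Extrapolate a high-b-value DW image from an acquired image and ADC map.

            Uses S(b) = S(b_acq) * exp(-(b - b_acq) * ADC), with ADC in mm^2/s and
            b-values in s/mm^2.
            """
            return acquired_dwi * np.exp(-(b_target - b_acquired) * adc_map)

        # Toy usage: ADC around 1.0e-3 mm^2/s for normal tissue, lower in a lesion
        acq = np.full((4, 4), 100.0)
        adc = np.full((4, 4), 1.0e-3)
        adc[1, 1] = 0.7e-3                      # restricted-diffusion voxel
        b2000 = computed_dwi(acq, adc)          # the restricted voxel stays brighter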

  19. The real deal: Willingness-to-pay and satiety expectations are greater for real foods versus their images.

    PubMed

    Romero, Carissa A; Compton, Michael T; Yang, Yueran; Snow, Jacqueline C

    2017-11-23

    Laboratory studies of human dietary choice have relied on computerized two-dimensional (2D) images as stimuli, whereas in everyday life, consumers make decisions in the context of real foods that have actual caloric content and afford grasping and consumption. Surprisingly, few studies have compared whether real foods are valued more than 2D images of foods, and in the studies that have, differences in the stimuli and testing conditions could have resulted in inflated bids for the real foods. Moreover, although the caloric content of food images has been shown to influence valuation, no studies to date have investigated whether 'real food exposure effects' on valuation reflect greater sensitivity to the caloric content of real foods versus images. Here, we compared willingness-to-pay (WTP) for, and expectations about satiety after consuming, everyday snack foods that were displayed as real foods versus 2D images. Critically, our 2D images were matched closely to the real foods for size, background, illumination, and apparent distance, and trial presentation and stimulus timing were identical across conditions. We used linear mixed effects modeling to determine whether effects of display format were modulated by food preference and the caloric content of the foods. Compared to food images, observers were willing to pay 6.62% more for (Experiment 1) and believed that they would feel more satiated after consuming (Experiment 2), foods displayed as real objects. Moreover, these effects appeared to be consistent across food preference, caloric content, as well as observers' estimates of the caloric content of the foods. Together, our results confirm that consumers' perception and valuation of everyday foods is influenced by the format in which they are displayed. Our findings raise important new insights into the factors that shape dietary choice in real-world contexts and highlight potential avenues for improving public health approaches to diet and obesity. Copyright

  20. A Set of Image Processing Algorithms for Computer-Aided Diagnosis in Nuclear Medicine Whole Body Bone Scan Images

    NASA Astrophysics Data System (ADS)

    Huang, Jia-Yann; Kao, Pan-Fu; Chen, Yung-Sheng

    2007-06-01

    Adjustment of brightness and contrast in nuclear medicine whole-body bone scan images may confuse nuclear medicine physicians when identifying small bone lesions, and makes the identification of subtle bone lesion changes in sequential studies difficult. In this study, we developed a computer-aided diagnosis system, based on a fuzzy-sets histogram thresholding method and an anatomical knowledge-based image segmentation method, that is able to analyze and quantify raw image data and identify the possible location of a lesion. To locate anatomical reference points, the fuzzy-sets histogram thresholding method was adopted as a first processing stage to suppress the soft tissue in the bone images. The anatomical knowledge-based image segmentation method was then applied to segment the skeletal frame into different regions of homogeneous bones. For the different segmented bone regions, the lesion thresholds were set at different cut-offs. To obtain lesion thresholds in the different segmented regions, the ranges and standard deviations of the image's gray-level distribution were obtained from 100 normal patients' whole-body bone images, and then another 62 patients' images were used for testing. The two groups of images were independent. The sensitivity and the mean number of false lesions detected were used as performance indices to evaluate the proposed system. The overall sensitivity of the system is 92.1% (222 of 241), with 7.58 false detections per patient scan image. With a high sensitivity and an acceptable false-lesion detection rate, this computer-aided automatic lesion detection system is demonstrated to be useful and may in the future help nuclear medicine physicians to identify possible bone lesions.

  1. A Method for Identifying Contours in Processing Digital Images from Computer Tomograph

    NASA Astrophysics Data System (ADS)

    Roşu, Şerban; Pater, Flavius; Costea, Dan; Munteanu, Mihnea; Roşu, Doina; Fratila, Mihaela

    2011-09-01

    The first step in the digital processing of two-dimensional computed tomography images is to identify the contours of the component elements. This paper presents the joint work of specialists in medicine, applied mathematics, and computer science on developing new algorithms and methods for medical 2D and 3D imaging.

  2. Computer image processing - The Viking experience. [digital enhancement techniques

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1977-01-01

    Computer processing of digital imagery from the Viking mission to Mars is discussed, with attention given to subjective enhancement and quantitative processing. Contrast stretching and high-pass filtering techniques of subjective enhancement are described; algorithms developed to determine optimal stretch and filtering parameters are also mentioned. In addition, geometric transformations to rectify the distortion of shapes in the field of view and to alter the apparent viewpoint of the image are considered. Perhaps the most difficult problem in quantitative processing of Viking imagery was the production of accurate color representations of Orbiter and Lander camera images.

  3. Many-core computing for space-based stereoscopic imaging

    NASA Astrophysics Data System (ADS)

    McCall, Paul; Torres, Gildo; LeGrand, Keith; Adjouadi, Malek; Liu, Chen; Darling, Jacob; Pernicka, Henry

    The potential benefits of using parallel computing in real-time visual-based satellite proximity operations missions are investigated. Improvements in performance and relative navigation solutions over single thread systems can be achieved through multi- and many-core computing. Stochastic relative orbit determination methods benefit from the higher measurement frequencies, allowing them to more accurately determine the associated statistical properties of the relative orbital elements. More accurate orbit determination can lead to reduced fuel consumption and extended mission capabilities and duration. Inherent to the process of stereoscopic image processing is the difficulty of loading, managing, parsing, and evaluating large amounts of data efficiently, which may result in delays or highly time consuming processes for single (or few) processor systems or platforms. In this research we utilize the Single-Chip Cloud Computer (SCC), a fully programmable 48-core experimental processor, created by Intel Labs as a platform for many-core software research, provided with a high-speed on-chip network for sharing information along with advanced power management technologies and support for message-passing. The results from utilizing the SCC platform for the stereoscopic image processing application are presented in the form of Performance, Power, Energy, and Energy-Delay-Product (EDP) metrics. Also, a comparison between the SCC results and those obtained from executing the same application on a commercial PC are presented, showing the potential benefits of utilizing the SCC in particular, and any many-core platforms in general for real-time processing of visual-based satellite proximity operations missions.

  4. Imperceptible watermarking for security of fundus images in tele-ophthalmology applications and computer-aided diagnosis of retina diseases.

    PubMed

    Singh, Anushikha; Dutta, Malay Kishore

    2017-12-01

    The authentication and integrity verification of medical images is a critical and growing issue for patients in e-health services. Accurate identification of medical images and patient verification is an essential requirement to prevent errors in medical diagnosis. The proposed work presents an imperceptible watermarking system to address the security of medical fundus images for tele-ophthalmology applications and computer-aided automated diagnosis of retinal diseases. In the proposed work, the patient identity is embedded in the fundus image in the singular value decomposition domain with an adaptive quantization parameter to maintain perceptual transparency for a variety of fundus images, whether healthy or disease-affected. In the proposed method, insertion of the watermark in the fundus image does not affect the automatic image-processing-based diagnosis of retinal objects and pathologies, which ensures uncompromised computer-based diagnosis from the fundus image. The patient ID is correctly recovered from the watermarked fundus image for integrity verification at the diagnosis centre. The proposed watermarking system was tested on a comprehensive database of fundus images and the results are convincing. The results indicate that the proposed watermarking method is imperceptible and does not affect computer-vision-based automated diagnosis of retinal diseases. Correct recovery of the patient ID from the watermarked fundus image makes the proposed watermarking system applicable for authentication of fundus images in computer-aided diagnosis and tele-ophthalmology applications. Copyright © 2017 Elsevier B.V. All rights reserved.
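
    To make the SVD-domain embedding concrete, here is a heavily simplified sketch that hides one bit of the patient ID per image block by quantizing the block's dominant singular value; the fixed quantization step and block size are illustrative assumptions (the paper adapts the quantization parameter per image):

        import numpy as np

        def embed_id_svd(image, bits, block=8, q=12.0):
            """Embed a binary patient-ID string in the SVD domain (simplified sketch)."""
            out = image.astype(float).copy()
            idx = 0
            for r in range(0, image.shape[0] - block + 1, block):
                for c in range(0, image.shape[1] - block + 1, block):
                    if idx >= len(bits):
                        return out
                    tile = out[r:r + block, c:c + block]
                    u, s, vt = np.linalg.svd(tile, full_matrices=False)
                    # Quantization index modulation on the dominant singular value
                    s[0] = np.floor(s[0] / q) * q + (0.75 * q if bits[idx] else 0.25 * q)
                    out[r:r + block, c:c + block] = u @ np.diag(s) @ vt
                    idx += 1
            return out

        # Usage with an illustrative numeric patient ID and a synthetic fundus image
        patient_id_bits = [int(b) for b in format(20173529, "032b")]
        fundus = np.random.rand(512, 512) * 255
        watermarked = embed_id_svd(fundus, patient_id_bits)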

  5. Prediction of Blood-Brain Barrier Disruption and Intracerebral Hemorrhagic Infarction Using Arterial Spin-Labeling Magnetic Resonance Imaging.

    PubMed

    Niibo, Takeya; Ohta, Hajime; Miyata, Shirou; Ikushima, Ichiro; Yonenaga, Kazuchika; Takeshima, Hideo

    2017-01-01

    Arterial spin-labeling magnetic resonance imaging is sensitive for detecting hyperemic lesions (HLs) in patients with acute ischemic stroke. We evaluated whether HLs could predict blood-brain barrier (BBB) disruption and hemorrhagic transformation (HT) in acute ischemic stroke patients. In a retrospective study, arterial spin-labeling was performed within 6 hours of symptom onset, before revascularization treatment, in 25 patients with anterior circulation large vessel occlusion on baseline magnetic resonance angiography. All patients underwent angiographic procedures intended for endovascular therapy and a noncontrast computed tomography scan immediately after treatment. BBB disruption was defined as a hyperdense lesion present on the posttreatment computed tomography scan. A magnetic resonance imaging or computed tomography scan was performed during the subacute phase to assess HTs. The relationship between HLs and BBB disruption and HT was examined using the Alberta Stroke Program Early Computed Tomography Score locations in the symptomatic hemispheres. An HL was defined as a region where relative CBF ≥ 1.4 (CBF_relative = CBF_HL / CBF_contralateral). HLs, BBB disruption, and HT were found in 9, 15, and 15 patients, respectively. Compared with the patients without HLs, the patients with HLs had a higher incidence of both BBB disruption (100% versus 37.5%; P=0.003) and HT (100% versus 37.5%; P=0.003). Based on the Alberta Stroke Program Early Computed Tomography Score locations, 21 regions of interest displayed HLs. Compared with the regions of interest without HLs, the regions of interest with HLs had a higher incidence of both BBB disruption (42.8% versus 3.9%; P<0.001) and HT (85.7% versus 7.8%; P<0.001). HLs detected on pretreatment arterial spin-labeling maps may enable the prediction and localization of subsequent BBB disruption and HT. © 2016 American Heart Association, Inc.
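
    The lesion definition above is a simple ratio threshold; a short sketch (with illustrative masks and names, not the study's software) shows how an HL mask could be derived from a CBF map:

        import numpy as np

        def hyperemic_lesion_mask(cbf_map, mask_symptomatic, mask_contralateral, ratio=1.4):
            """Flag hyperemic lesions on an ASL CBF map using
            CBF_relative = CBF_HL / CBF_contralateral >= ratio."""
            cbf_contra = cbf_map[mask_contralateral].mean()
            relative = np.zeros_like(cbf_map, dtype=float)
            relative[mask_symptomatic] = cbf_map[mask_symptomatic] / cbf_contra
            return relative >= ratio

        # Toy usage: a synthetic 2-D CBF map split into two hemispheres
        cbf = np.full((64, 64), 40.0)
        cbf[20:30, 40:50] = 70.0                        # hyperemic patch
        symptomatic = np.zeros_like(cbf, dtype=bool)
        symptomatic[:, 32:] = True                      # symptomatic hemisphere
        contralateral = ~symptomatic
        hl_mask = hyperemic_lesion_mask(cbf, symptomatic, contralateral)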

  6. An Efficient Computational Framework for the Analysis of Whole Slide Images: Application to Follicular Lymphoma Immunohistochemistry

    PubMed Central

    Samsi, Siddharth; Krishnamurthy, Ashok K.; Gurcan, Metin N.

    2012-01-01

    Follicular lymphoma (FL) is one of the most common non-Hodgkin lymphomas in the United States. Diagnosis and grading of FL are based on the review of histopathological tissue sections under a microscope and are influenced by human factors such as fatigue and reader bias. Computer-aided image analysis tools can help improve the accuracy of diagnosis and grading and act as another tool at the pathologist's disposal. Our group has been developing algorithms for identifying follicles in immunohistochemical images. These algorithms have been tested and validated on small images extracted from whole slide images. However, the use of these algorithms for analyzing the entire whole slide image requires significant changes to the processing methodology, since the images are relatively large (on the order of 100k × 100k pixels). In this paper we discuss the challenges involved in analyzing whole slide images and propose potential computational methodologies for addressing these challenges. We discuss the use of parallel computing tools on commodity clusters and compare the performance of the serial and parallel implementations of our approach. PMID:22962572

  7. A head movement image (HMI)-controlled computer mouse for people with disabilities.

    PubMed

    Chen, Yu-Luen; Chen, Weoi-Luen; Kuo, Te-Son; Lai, Jin-Shin

    2003-02-04

    This study proposes image processing and microprocessor technology for use in developing a head movement image (HMI)-controlled computer mouse system for people with spinal cord injury (SCI). The system controls the movement and direction of the mouse cursor by capturing head movement images using a marker installed on the user's headset. In a clinical trial, the new mouse system was compared with an infrared-controlled mouse system on various tasks with nine subjects with SCI. The results were favourable to the new mouse system: the differences between the new mouse system and the infrared-controlled mouse reached statistical significance in each of the test situations (p<0.05). The HMI-controlled computer mouse improves input speed. People with disabilities need only wear the headset and move their heads to freely control the movement of the mouse cursor.

  8. Hardware architecture design of image restoration based on time-frequency domain computation

    NASA Astrophysics Data System (ADS)

    Wen, Bo; Zhang, Jing; Jiao, Zipeng

    2013-10-01

    Image restoration algorithms based on time-frequency domain computation (TFDC) are mature and widely applied in engineering. To enable high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. First, the main module is designed by analyzing the common processing steps and numerical calculations. Then, to improve generality, an iteration control module is planned for iterative algorithms. In addition, to reduce the computational cost and memory requirements, optimizations are suggested for the most time-consuming modules, which include the two-dimensional FFT/IFFT and complex-number calculations. Finally, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The results show that the TFDC hardware architecture and its optimizations can be applied to TFDC-based image restoration algorithms with good algorithm commonality, hardware realizability, and high efficiency.
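
    The computation that such an architecture accelerates is frequency-domain restoration built around the 2-D FFT/IFFT. As a software-level illustration only (not the proposed hardware design), the sketch below performs a Wiener-style deconvolution with numpy's FFT; the regularization constant k, the box point spread function, and the test image are arbitrary assumptions.

        import numpy as np

        def wiener_deconvolve(blurred, psf, k=0.01):
            """Frequency-domain restoration: F_hat = conj(H) / (|H|^2 + k) * B."""
            H = np.fft.fft2(psf, s=blurred.shape)
            B = np.fft.fft2(blurred)
            G = np.conj(H) / (np.abs(H) ** 2 + k)
            return np.real(np.fft.ifft2(G * B))

        # Toy example: blur an image with a small box PSF (circular convolution), then restore it.
        rng = np.random.default_rng(0)
        img = rng.random((64, 64))
        psf = np.zeros((64, 64)); psf[:3, :3] = 1.0 / 9.0
        blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
        restored = wiener_deconvolve(blurred, psf)
        print("mean absolute restoration error:", np.abs(restored - img).mean())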

  9. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    NASA Astrophysics Data System (ADS)

    Stoitsis, John; Valavanis, Ioannis; Mougiakakou, Stavroula G.; Golemati, Spyretta; Nikita, Alexandra; Nikita, Konstantina S.

    2006-12-01

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, feature extraction and selection, and classification. In this paper, the principles of CAD system design and development are demonstrated by means of two examples. The first focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, or hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm-based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75%, and 90.63% in the training, validation, and testing sets, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis.
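
    The first CAD example clusters plaque feature vectors into two classes with fuzzy c-means. As a rough sketch of that clustering step only (the paper's features, ANOVA selection, and data are not reproduced), a minimal numpy fuzzy c-means is shown below on synthetic feature vectors; the function name, parameters, and data are assumptions.

        import numpy as np

        def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
            """Minimal fuzzy c-means: returns (cluster centers, membership matrix U of shape (n, c))."""
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c)); U /= U.sum(axis=1, keepdims=True)
            for _ in range(n_iter):
                Um = U ** m
                centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                U = 1.0 / (d ** (2.0 / (m - 1.0)))      # membership ~ inverse distance
                U /= U.sum(axis=1, keepdims=True)
            return centers, U

        # Toy example: two synthetic feature clusters standing in for plaque texture/motion features.
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 1, (40, 3)), rng.normal(4, 1, (40, 3))])
        centers, U = fuzzy_c_means(X, c=2)
        labels = U.argmax(axis=1)
        print("cluster sizes:", np.bincount(labels))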

  10. Influence of the Pixel Sizes of Reference Computed Tomography on Single-photon Emission Computed Tomography Image Reconstruction Using Conjugate-gradient Algorithm.

    PubMed

    Okuda, Kyohei; Sakimoto, Shota; Fujii, Susumu; Ida, Tomonobu; Moriyama, Shigeru

    The frame-of-reference using the computed tomography (CT) coordinate system in single-photon emission computed tomography (SPECT) reconstruction is one of the advanced characteristics of the xSPECT reconstruction system. The aim of this study was to reveal the influence of this high-resolution frame-of-reference on xSPECT reconstruction. A 99mTc line-source phantom and a National Electrical Manufacturers Association (NEMA) image quality phantom were scanned using a SPECT/CT system. xSPECT reconstructions were performed with reference CT images at different sizes of the display field-of-view (DFOV) and pixel. The pixel sizes of the reconstructed xSPECT images were close to 2.4 mm, the size at which the projection data were originally acquired, even when the reference CT resolution was varied. The full width at half maximum (FWHM) of the line source, the absolute recovery coefficient, and the background variability of the image quality phantom were independent of the size of the DFOV in the reference CT images. The results of this study revealed that the image quality of reconstructed xSPECT images is not influenced by the resolution of the frame-of-reference used in SPECT reconstruction.

  11. Diagnosing acute pulmonary embolism with computed tomography: imaging update.

    PubMed

    Devaraj, Anand; Sayer, Charlie; Sheard, Sarah; Grubnic, Sisa; Nair, Arjun; Vlahos, Ioannis

    2015-05-01

    Acute pulmonary embolism is recognized as a difficult diagnosis to make. It is potentially fatal if undiagnosed, yet increasing referral rates for imaging and falling diagnostic yields are topics which have attracted much attention. For patients in the emergency department with suspected pulmonary embolism, computed tomography pulmonary angiography (CTPA) is the test of choice for most physicians, and hence radiology has a key role to play in the patient pathway. This review will outline key aspects of the recent literature regarding the following issues: patient selection for imaging, the optimization of CTPA image quality and dose, preferred pathways for pregnant patients and other subgroups, and the role of CTPA beyond diagnosis. The role of newer techniques such as dual-energy CT and single-photon emission-CT will also be discussed.

  12. Multiphase computer-generated holograms for full-color image generation

    NASA Astrophysics Data System (ADS)

    Choi, Kyong S.; Choi, Byong S.; Choi, Yoon S.; Kim, Sun I.; Kim, Jong Man; Kim, Nam; Gil, Sang K.

    2002-06-01

    Multi-phase and binary-phase computer-generated holograms were designed and demonstrated for full-color image generation. To optimize the phase profile of the hologram for each color image, we employed a simulated annealing method. The designed binary-phase hologram had a diffraction efficiency of 33.23 percent and a reconstruction error of 0.367 × 10^-2, and the eight-phase hologram had a diffraction efficiency of 67.92 percent and a reconstruction error of 0.273 × 10^-2. The designed BPH was fabricated by a micro-photolithographic technique with a minimum pixel width of 5 micrometers and was reconstructed using two Ar-ion lasers and a He-Ne laser. In addition, the color dispersion characteristics of the fabricated grating and the scaling problem of the reconstructed image are discussed.

  13. Correlation of the clinical and physical image quality in chest radiography for average adults with a computed radiography imaging system.

    PubMed

    Moore, C S; Wood, T J; Beavis, A W; Saunderson, J R

    2013-07-01

    The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson's correlation coefficient. Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R=0.87, p<0.033) and eDE (R=0.77, p<0.008) were observed. Medical physics experts may use the physical image quality metrics described here in quality assurance programmes and optimisation studies with a degree of confidence that they reflect the clinical image quality in chest CR images acquired without an antiscatter grid. A statistically significant correlation has been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography.
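
    The physical metric in this study is the contrast-to-noise ratio measured in phantom regions of interest, correlated against visual grading scores with Pearson's coefficient. The sketch below illustrates both computations with numpy and scipy; the ROI arrays and the per-tube-voltage values are made-up stand-ins, not the study's data.

        import numpy as np
        from scipy.stats import pearsonr

        def cnr(signal_roi, background_roi):
            """Contrast-to-noise ratio from two phantom regions of interest."""
            return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

        # Hypothetical paired measurements across a range of tube voltages (illustrative only).
        vgas = np.array([3.1, 2.9, 2.6, 2.3, 2.0])        # visual grading analysis scores
        cnr_values = np.array([1.9, 1.7, 1.5, 1.3, 1.1])  # matching phantom CNR values
        r, p = pearsonr(vgas, cnr_values)
        print(f"Pearson r = {r:.2f}, p = {p:.3f}")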

  14. Correlation of the clinical and physical image quality in chest radiography for average adults with a computed radiography imaging system

    PubMed Central

    Wood, T J; Beavis, A W; Saunderson, J R

    2013-01-01

    Objective: The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. Methods: The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson’s correlation coefficient. Results: Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R=0.87, p<0.033) and eDE (R=0.77, p<0.008) were observed. Conclusion: Medical physics experts may use the physical image quality metrics described here in quality assurance programmes and optimisation studies with a degree of confidence that they reflect the clinical image quality in chest CR images acquired without an antiscatter grid. Advances in knowledge: A statistically significant correlation has been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography. PMID:23568362

  15. Can Axial-Based Nodal Size Criteria Be Used in Other Imaging Planes to Accurately Determine “Enlarged” Head and Neck Lymph Nodes?

    PubMed Central

    Bartlett, Eric S.; Walters, Thomas D.; Yu, Eugene

    2013-01-01

    Objective. We evaluate if axial-based lymph node size criteria can be applied to coronal and sagittal planes. Methods. Fifty pretreatment computed tomographic (CT) neck exams were evaluated in patients with head and neck squamous cell carcinoma (SCCa) and neck lymphadenopathy. Axial-based size criteria were applied to all 3 imaging planes, measured, and classified as “enlarged” if equal to or exceeding size criteria. Results. 222 lymph nodes were “enlarged” in one imaging plane; however, 53.2% (118/222) of these were “enlarged” in all 3 planes. Classification concordance between axial versus coronal/sagittal planes was poor (kappa = −0.09 and −0.07, resp., P < 0.05). The McNemar test showed systematic misclassification when comparing axial versus coronal (P < 0.001) and axial versus sagittal (P < 0.001) planes. Conclusion. Classification of “enlarged” lymph nodes differs between axial versus coronal/sagittal imaging planes when axial-based nodal size criteria are applied independently to all three imaging planes, and exclusively used without other morphologic nodal data. PMID:23984099
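
    Concordance here is summarized with Cohen's kappa over per-node "enlarged"/"not enlarged" calls in two planes. A minimal illustration with scikit-learn is given below; the binary labels are hypothetical and do not come from the study.

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical per-node classifications (1 = "enlarged") in two imaging planes.
        axial   = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
        coronal = [0, 1, 1, 0, 0, 1, 1, 0, 0, 1]
        print("Cohen's kappa:", cohen_kappa_score(axial, coronal))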

  16. Computed radiography imaging plates and associated methods of manufacture

    DOEpatents

    Henry, Nathaniel F.; Moses, Alex K.

    2015-08-18

    Computed radiography imaging plates incorporating an intensifying material that is coupled to or intermixed with the phosphor layer, allowing electrons and/or low energy x-rays to impart their energy on the phosphor layer, while decreasing internal scattering and increasing resolution. The radiation needed to perform radiography can also be reduced as a result.

  17. The Effects of Computer Animated Dissection versus Preserved Animal Dissection on the Student Achievement in a High School Biology Class.

    ERIC Educational Resources Information Center

    Kariuki, Patrick; Paulson, Ronda

    The purpose of this study was to examine the effectiveness of computer-animated dissection techniques versus the effectiveness of traditional dissection techniques as related to student achievement. The sample used was 104 general biology students from a small, rural high school in Northeast Tennessee. Random selection was used to separate the…

  18. Newspaper Coverage of the 1992 Presidential Campaign: A Content Analysis of Character/Competence/Image Issues versus Platform/Political Issues.

    ERIC Educational Resources Information Center

    Sims, Judy R.; Giordano, Joseph

    A research study assessed the amount of front page newspaper coverage allotted to "character/competence/image" issues versus "platform/political" issues in the 1992 presidential campaign. Using textual analysis, methodology of content analysis, researchers coded the front page of the following 5 newspapers between August 1 and…

  19. A COMPUTER MODEL OF LUNG MORPHOLOGY TO ANALYZE SPECT IMAGES

    EPA Science Inventory

    Measurement of the three-dimensional (3-D) spatial distribution of aerosol deposition can be performed using Single Photon Emission Computed Tomography (SPECT). The advantage of using 3-D techniques over planar gamma imaging is that deposition patterns can be related to real lun...

  20. Geological mapping potential of computer-enhanced images from the Shuttle Imaging Radar - Lisbon Valley Anticline, Utah

    NASA Technical Reports Server (NTRS)

    Curlis, J. D.; Frost, V. S.; Dellwig, L. F.

    1986-01-01

    Computer-enhancement techniques applied to the SIR-A data from the Lisbon Valley area in the northern portion of the Paradox basin increased the value of the imagery in the development of geologically useful maps. The enhancement techniques include filtering to remove image speckle from the SIR-A data and combining these data with Landsat multispectral scanner data. A method well-suited for the combination of the data sets utilized a three-dimensional domain defined by intensity-hue-saturation (IHS) coordinates. Such a system allows the Landsat data to modulate image intensity, while the SIR-A data control image hue and saturation. Whereas the addition of Landsat data to the SIR-A image by means of a pixel-by-pixel ratio accentuated textural variations within the image, the addition of color to the combined images enabled isolation of areas in which gray-tone contrast was minimal. This isolation resulted in a more precise definition of stratigraphic units.

  1. Clinical applications of cone beam computed tomography in endodontics: A comprehensive review.

    PubMed

    Cohenca, Nestor; Shemesh, Hagay

    2015-06-01

    Cone beam computed tomography (CBCT) is a new technology that produces three-dimensional (3D) digital imaging at reduced cost and less radiation for the patient than traditional CT scans. It also delivers faster and easier image acquisition. By providing a 3D representation of the maxillofacial tissues in a cost- and dose-efficient manner, a better preoperative assessment can be obtained for diagnosis and treatment. This comprehensive review presents current applications of CBCT in endodontics. Specific case examples illustrate the difference in treatment planning with traditional periapical radiography versus CBCT technology.

  2. Correlation of computed tomography, magnetic resonance imaging and clinical outcome in acute carbon monoxide poisoning.

    PubMed

    Ozcan, Namik; Ozcam, Giray; Kosar, Pinar; Ozcan, Ayse; Basar, Hulya; Kaymak, Cetin

    2016-01-01

    Carbon monoxide is a toxic gas for humans and is still a silent killer in both developed and developing countries. The aim of this case series was to evaluate early radiological images as a predictor of subsequent neuropsychological sequelae following carbon monoxide poisoning. After carbon monoxide exposure, early computed tomography scans and magnetic resonance imaging findings of a 52-year-old woman showed bilateral lesions in the globus pallidus. This patient was discharged and followed for 90 days. The patient recovered without any neurological sequela. In a 58-year-old woman exposed to carbon monoxide, computed tomography showed lesions in the bilateral globus pallidus and periventricular white matter. Early magnetic resonance imaging revealed changes similar to those seen on the early tomography images. The patient recovered and was discharged from hospital. On the 27th day after exposure, the patient developed disorientation and memory impairment. Late magnetic resonance imaging showed diffuse hyperintensity in the cerebral white matter. White matter lesions that progress to demyelination and end in neuropsychological sequelae cannot always be diagnosed by early computed tomography and magnetic resonance imaging in carbon monoxide poisoning. Copyright © 2014 Sociedade Brasileira de Anestesiologia. Published by Elsevier Editora Ltda. All rights reserved.

  3. Development of a screening tool for staging of diabetic retinopathy in fundus images

    NASA Astrophysics Data System (ADS)

    Dhara, Ashis Kumar; Mukhopadhyay, Sudipta; Bency, Mayur Joseph; Rangayyan, Rangaraj M.; Bansal, Reema; Gupta, Amod

    2015-03-01

    Diabetic retinopathy is a condition of the eye of diabetic patients in which the retina is damaged because of long-term diabetes. The condition deteriorates towards irreversible blindness in extreme cases of diabetic retinopathy. Hence, early detection of diabetic retinopathy is important to prevent blindness. Regular screening of fundus images of diabetic patients could be helpful in preventing blindness caused by diabetic retinopathy. In this paper, we propose techniques for staging of diabetic retinopathy in fundus images using several shape and texture features computed from detected microaneurysms, exudates, and hemorrhages. The classification accuracy is reported in terms of the area (Az) under the receiver operating characteristic curve using 200 fundus images from the MESSIDOR database. The value of Az for classifying normal images versus mild, moderate, and severe nonproliferative diabetic retinopathy (NPDR) is 0.9106. The value of Az for classification of mild NPDR versus moderate and severe NPDR is 0.8372. The Az value for classification of moderate NPDR versus severe NPDR is 0.9750.

  4. Computational Modeling for Enhancing Soft Tissue Image Guided Surgery: An Application in Neurosurgery.

    PubMed

    Miga, Michael I

    2016-01-01

    With the recent advances in computing, the opportunities to translate computational models to more integrated roles in patient treatment are expanding at an exciting rate. One area of considerable development has been directed towards correcting soft tissue deformation within image guided neurosurgery applications. This review captures the efforts that have been undertaken towards enhancing neuronavigation by the integration of soft tissue biomechanical models, imaging and sensing technologies, and algorithmic developments. In addition, the review speaks to the evolving role of modeling frameworks within surgery and concludes with some future directions beyond neurosurgical applications.

  5. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    NASA Astrophysics Data System (ADS)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

    Despite the current availability in resource-rich regions of advanced scanning and 3-D imaging technologies in current ophthalmology practice, world-wide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup-to-disc diameter ratio) and CAR (cup-to-disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences could result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
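
    Once cup and disc boundaries are segmented, CDR and CAR reduce to simple ratios of the two regions. The sketch below computes both from binary masks with numpy; it uses the horizontal extent as a simplified diameter measure and circular toy masks, which are illustrative assumptions rather than the paper's segmentation output.

        import numpy as np

        def cup_disc_ratios(cup_mask, disc_mask):
            """Cup-to-disc diameter ratio (CDR, horizontal extent) and area ratio (CAR)
            from binary segmentation masks."""
            def horizontal_diameter(mask):
                cols = np.where(mask.any(axis=0))[0]
                return cols.max() - cols.min() + 1
            cdr = horizontal_diameter(cup_mask) / horizontal_diameter(disc_mask)
            car = cup_mask.sum() / disc_mask.sum()
            return cdr, car

        # Toy circular disc and cup masks.
        yy, xx = np.mgrid[:200, :200]
        disc = (xx - 100) ** 2 + (yy - 100) ** 2 <= 80 ** 2
        cup = (xx - 100) ** 2 + (yy - 100) ** 2 <= 40 ** 2
        print("CDR=%.2f CAR=%.2f" % cup_disc_ratios(cup, disc))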

  6. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    PubMed Central

    Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.

    2015-01-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211

  7. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    PubMed

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
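
    The papers' contribution is the row-parallel hardware decomposition and the memory-reduction strategy; the serial reference computation they build on is the standard summed-area table, in which any rectangular sum costs four lookups. A minimal numpy sketch of that reference computation is shown below; the array contents and function names are illustrative.

        import numpy as np

        def integral_image(img):
            """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
            return img.cumsum(axis=0).cumsum(axis=1)

        def rect_sum(ii, top, left, bottom, right):
            """Sum of img[top:bottom+1, left:right+1] via four table lookups."""
            total = ii[bottom, right]
            if top > 0:
                total -= ii[top - 1, right]
            if left > 0:
                total -= ii[bottom, left - 1]
            if top > 0 and left > 0:
                total += ii[top - 1, left - 1]
            return total

        img = np.arange(25, dtype=np.int64).reshape(5, 5)
        ii = integral_image(img)
        assert rect_sum(ii, 1, 1, 3, 3) == img[1:4, 1:4].sum()
        print("rectangle sum:", rect_sum(ii, 1, 1, 3, 3))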

  8. Computational medical imaging and hemodynamics framework for functional analysis and assessment of cardiovascular structures.

    PubMed

    Wong, Kelvin K L; Wang, Defeng; Ko, Jacky K L; Mazumdar, Jagannath; Le, Thu-Thao; Ghista, Dhanjoo

    2017-03-21

    Cardiac dysfunction constitutes a common cardiovascular health issue in society, and has been an investigation topic of strong focus by researchers in the medical imaging community. Diagnostic modalities based on echocardiography, magnetic resonance imaging, chest radiography and computed tomography are common techniques that provide cardiovascular structural information to diagnose heart defects. However, functional information of cardiovascular flow, which can in fact be used to support the diagnosis of many cardiovascular diseases with a myriad of hemodynamics performance indicators, remains unexplored to its full potential. Some of these indicators constitute important cardiac functional parameters affecting the cardiovascular abnormalities. With the advancement of computer technology that facilitates high speed computational fluid dynamics, the realization of a support diagnostic platform of hemodynamics quantification and analysis can be achieved. This article reviews the state-of-the-art medical imaging and high fidelity multi-physics computational analyses that together enable reconstruction of cardiovascular structures and hemodynamic flow patterns within them, such as of the left ventricle (LV) and carotid bifurcations. The combined medical imaging and hemodynamic analysis enables us to study the mechanisms of cardiovascular disease-causing dysfunctions, such as how (1) cardiomyopathy causes left ventricular remodeling and loss of contractility leading to heart failure, and (2) modeling of LV construction and simulation of intra-LV hemodynamics can enable us to determine the optimum procedure of surgical ventriculation to restore its contractility and health. This combined medical imaging and hemodynamics framework can potentially extend medical knowledge of cardiovascular defects and associated hemodynamic behavior and their surgical restoration, by means of an integrated medical image diagnostics and hemodynamic performance analysis framework.

  9. Clinical outcomes of pediatric patients with acute abdominal pain and incidental findings of free intraperitoneal fluid on diagnostic imaging.

    PubMed

    Matz, Samantha; Connell, Mary; Sinha, Madhumita; Goettl, Christopher S; Patel, Palak C; Drachman, David

    2013-09-01

    The presence of free intraperitoneal fluid on diagnostic imaging (sonography or computed tomography [CT]) may indicate an acute inflammatory process in children with abdominal pain in a nontraumatic setting. Although clinical outcomes of pediatric trauma patients with free fluid on diagnostic examinations without evidence of solid-organ injury have been studied, similar studies in the absence of trauma are rare. Our objective was to study clinical outcomes of children with acute abdominal pain of nontraumatic etiology and free intraperitoneal fluid on diagnostic imaging (abdominal/pelvic sonography, CT, or both). We conducted a retrospective review of medical records of children aged 0 to 18 years presenting to a pediatric emergency department with acute abdominal pain (nontraumatic) between April 2008 and March 2009. Patients with intraperitoneal free fluid on imaging were divided into 2 groups: group I, imaging suggestive of an intra-abdominal surgical condition such as appendicitis; and group II, no evidence of an acute surgical condition on imaging, including patients with equivocal studies. Computed tomograms and sonograms were reviewed by a board-certified radiologist, and the free fluid volume was quantitated. Of 1613 patients who underwent diagnostic imaging, 407 were eligible for the study; 134 (33%) had free fluid detected on diagnostic imaging. In patients with both sonography and CT, there was a significant correlation in the free fluid volume (r = 0.79; P < .0005). A significantly greater number of male patients with free fluid had a surgical condition identified on imaging (57.4% versus 25%; P < .001). Children with free fluid and an associated condition on imaging were more likely to have surgery (94.4% versus 6.3%; P < .001). We found clinical outcomes (surgical versus nonsurgical) to be most correlated with a surgical diagnosis on diagnostic imaging and not with the amount of fluid present.

  10. Computer Image Analysis of Histochemically-Labeled Acetylcholinesterase.

    DTIC Science & Technology

    1984-11-30

    Computer image analysis was used in conjunction with histochemical techniques to describe the distribution of acetylcholinesterase (AChE) activity in nervous and muscular tissue in rats treated with organophosphates (OPs). We began by adopting a version of the AChE staining method as modified by Hanker that is consistent with the optical properties of our video system, and wrote computer programs to provide a numeric quantity representing the degree of staining in a tissue section. The staining was calibrated by

  11. Film versus digital cinema: the evolution of moving images

    NASA Astrophysics Data System (ADS)

    Tinker, Michael

    2003-05-01

    Film has been used for movies for over 100 years. D-cinema will soon produce images that equal or surpass film. When that happens, movies will take their basis from computer technology, leading to higher quality, lower cost, and greater flexibility for all aspects of the industry. For example, d-cinema will allow higher frame rates, flexible subtitles, alternative content, and resolutions and color spaces beyond film. Given these opportunities, we must not simply emulate the mechanical past. Insisting, for instance, on a single type of compression or security scheme is misguided. Both these areas are evolving, and we should take advantage of that evolution, while promoting standards that allow that evolution to happen in an orderly way. Mechanical projectors can play back only one kind of film; computer servers can play back any number of formats. It would be wrong to select a single format at this time, only to have the technology become more capable in the near future. We must allow the continuously evolving technology that characterizes computers, not the frozen technology that has characterized mechanical systems. We must work toward an environment that allows interoperability of d-cinema technologies-not systems that limit us to a single technology.

  12. Clinical and mathematical introduction to computer processing of scintigraphic images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goris, M.L.; Briandet, P.A.

    The authors state in their preface: "...we believe that there is no book yet available in which computing in nuclear medicine has been approached in a reasonable manner. This book is our attempt to correct the situation." The book is divided into four sections: (1) Clinical Applications of Quantitative Scintigraphic Analysis; (2) Mathematical Derivations; (3) Processing Methods of Scintigraphic Images; and (4) The (Computer) System. Section 1 has chapters on quantitative approaches to congenital and acquired heart diseases, nephrology and urology, and pulmonary medicine.

  13. Plasma cell quantification in bone marrow by computer-assisted image analysis.

    PubMed

    Went, P; Mayer, S; Oberholzer, M; Dirnhofer, S

    2006-09-01

    Minor and major criteria for the diagnosis of multiple myeloma according to the definition of the WHO classification include different categories of the bone marrow plasma cell count: a shift from the 10-30% group to the > 30% group equals a shift from a minor to a major criterion, while the < 10% group does not contribute to the diagnosis. The plasma cell fraction in the bone marrow is therefore critical for the classification and optimal clinical management of patients with plasma cell dyscrasias. The aim of this study was (i) to establish a digital image analysis system able to quantify bone marrow plasma cells and (ii) to evaluate two quantification techniques in bone marrow trephines, i.e. computer-assisted digital image analysis and conventional light-microscopic evaluation. The results were compared regarding inter-observer variation. Eighty-seven patients, 28 with multiple myeloma, 29 with monoclonal gammopathy of undetermined significance, and 30 with reactive plasmocytosis, were included in the study. Plasma cells in H&E- and CD138-stained slides were quantified by two investigators using light-microscopic estimation and computer-assisted digital analysis. The sets of results were correlated with rank correlation coefficients. Patients were categorized according to WHO criteria addressing the plasma cell content of the bone marrow (group 1: 0-10%, group 2: 11-30%, group 3: > 30%), and the results compared by kappa statistics. The degree of agreement in CD138-stained slides was higher for results obtained using the computer-assisted image analysis system compared to light-microscopic evaluation (corr.coeff. = 0.782), as was seen in the intra- (corr.coeff. = 0.960) and inter-individual result correlations (corr.coeff. = 0.899). Inter-observer agreement for categorized results (SM/PW: kappa 0.833) was in a high range. Computer-assisted image analysis demonstrated a higher reproducibility of bone marrow plasma cell quantification. This might

  14. Estimation of noise properties for TV-regularized image reconstruction in computed tomography.

    PubMed

    Sánchez, Adrian A

    2015-09-21

    A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR.
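
    The empirical benchmark mentioned here, the sample covariance (or per-pixel variance map) over repeated noise realizations, is straightforward to construct. The sketch below builds a pixel variance map from synthetic realizations with numpy; it is not the paper's analytic covariance approximation, and the noise model and image size are arbitrary assumptions.

        import numpy as np

        def pixel_variance_map(realizations):
            """Per-pixel sample variance across repeated noisy reconstructions."""
            stack = np.stack(realizations)          # shape: (n_realizations, H, W)
            return stack.var(axis=0, ddof=1)

        # Toy stand-in: 200 'reconstructions' of a 128x128 image with spatially varying noise.
        rng = np.random.default_rng(0)
        truth = np.zeros((128, 128))
        sigma = np.linspace(0.5, 2.0, 128)[None, :]     # noise level grows left to right
        recons = [truth + rng.normal(0, 1, truth.shape) * sigma for _ in range(200)]
        var_map = pixel_variance_map(recons)
        print("mean variance, left vs. right half:",
              var_map[:, :64].mean(), var_map[:, 64:].mean())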

  15. Estimation of noise properties for TV-regularized image reconstruction in computed tomography

    NASA Astrophysics Data System (ADS)

    Sánchez, Adrian A.

    2015-09-01

    A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR.

  16. Photoacoustic Imaging for Differential Diagnosis of Benign Polyps versus Malignant Polyps of the Gallbladder: A Preliminary Study.

    PubMed

    Chae, Hee-Dong; Lee, Jae Young; Jang, Jin-Young; Chang, Jin Ho; Kang, Jeeun; Kang, Mee Joo; Han, Joon Koo

    2017-01-01

    To investigate the feasibility of ex vivo multispectral photoacoustic (PA) imaging in differentiating cholesterol versus neoplastic polyps, and benign versus malignant polyps, of the gallbladder. A total of 38 surgically confirmed gallbladder polyps (24 cholesterol polyps, 4 adenomas, and 10 adenocarcinomas) from 38 patients were prospectively included in this study. The surgical specimens were set on a gel pad immersed in a saline-filled container. The PA intensities of polyps were then measured, using two separate wavelength intervals (421-647 nm and 692-917 nm). Mann-Whitney U test was performed for the comparison of normalized PA intensities between the cholesterol and neoplastic polyps, and between the benign and malignant polyps. Kruskal-Wallis test was conducted for the comparison of normalized PA intensities among the cholesterol polyps, adenomas, and adenocarcinomas. A significant difference was observed in the normalized PA intensities between the cholesterol and neoplastic polyps at 459 nm (median, 1.00 vs. 0.73; p = 0.032). Comparing the benign and malignant polyps, there were significant differences in the normalized PA intensities at 765 nm (median, 0.67 vs. 0.78; p = 0.013), 787 nm (median, 0.65 vs. 0.77; p = 0.034), and 853 nm (median, 0.59 vs. 0.85; p = 0.028). The comparison of the normalized PA intensities among cholesterol polyps, adenomas, and adenocarcinomas demonstrated marginally significant differences at 765 nm (median, 0.67 vs. 0.66 vs. 0.78, respectively; p = 0.049). These preliminary results indicate that benign versus malignant gallbladder polyps might exhibit different spectral patterns on multispectral PA imaging.

  17. Ultrasonography versus computed tomography for suspected nephrolithiasis.

    PubMed

    Smith-Bindman, Rebecca; Aubin, Chandra; Bailitz, John; Bengiamin, Rimon N; Camargo, Carlos A; Corbo, Jill; Dean, Anthony J; Goldstein, Ruth B; Griffey, Richard T; Jay, Gregory D; Kang, Tarina L; Kriesel, Dana R; Ma, O John; Mallin, Michael; Manson, William; Melnikow, Joy; Miglioretti, Diana L; Miller, Sara K; Mills, Lisa D; Miner, James R; Moghadassi, Michelle; Noble, Vicki E; Press, Gregory M; Stoller, Marshall L; Valencia, Victoria E; Wang, Jessica; Wang, Ralph C; Cummings, Steven R

    2014-09-18

    There is a lack of consensus about whether the initial imaging method for patients with suspected nephrolithiasis should be computed tomography (CT) or ultrasonography. In this multicenter, pragmatic, comparative effectiveness trial, we randomly assigned patients 18 to 76 years of age who presented to the emergency department with suspected nephrolithiasis to undergo initial diagnostic ultrasonography performed by an emergency physician (point-of-care ultrasonography), ultrasonography performed by a radiologist (radiology ultrasonography), or abdominal CT. Subsequent management, including additional imaging, was at the discretion of the physician. We compared the three groups with respect to the 30-day incidence of high-risk diagnoses with complications that could be related to missed or delayed diagnosis and the 6-month cumulative radiation exposure. Secondary outcomes were serious adverse events, related serious adverse events (deemed attributable to study participation), pain (assessed on an 11-point visual-analogue scale, with higher scores indicating more severe pain), return emergency department visits, hospitalizations, and diagnostic accuracy. A total of 2759 patients underwent randomization: 908 to point-of-care ultrasonography, 893 to radiology ultrasonography, and 958 to CT. The incidence of high-risk diagnoses with complications in the first 30 days was low (0.4%) and did not vary according to imaging method. The mean 6-month cumulative radiation exposure was significantly lower in the ultrasonography groups than in the CT group (P<0.001). Serious adverse events occurred in 12.4% of the patients assigned to point-of-care ultrasonography, 10.8% of those assigned to radiology ultrasonography, and 11.2% of those assigned to CT (P=0.50). Related adverse events were infrequent (incidence, 0.4%) and similar across groups. By 7 days, the average pain score was 2.0 in each group (P=0.84). Return emergency department visits, hospitalizations, and diagnostic

  18. Computed Tomography Perfusion Imaging for the Diagnosis of Hepatic Alveolar Echinococcosis

    PubMed Central

    Sade, Recep; Kantarci, Mecit; Genc, Berhan; Ogul, Hayri; Gundogdu, Betul; Yilmaz, Omer

    2018-01-01

    Objective: Alveolar echinococcosis (AE) is a rare life-threatening parasitic infection. Computed tomography perfusion (CTP) imaging has the potential to provide both quantitative and qualitative information about the tissue perfusion characteristics. The purpose of this study was the examination of the characteristic features and feasibility of CTP in AE liver lesions. Material and Methods: CTP scanning was performed in 25 patients who had a total of 35 lesions identified as AE of the liver. Blood flow (BF), blood volume (BV), portal venous perfusion (PVP), arterial liver perfusion (ALP), and hepatic perfusion indexes (HPI) were computed for background liver parenchyma and each AE lesion. Results: Significant differences were detected between perfusion values of the AE lesions and background liver tissue. The BV, BF, ALP, and PVP values for all components of the AE liver lesions were significantly lower than the normal liver parenchyma (p<0.01). Conclusions: We suggest that perfusion imaging can be used in AE of the liver. Thus, the quantitative knowledge of perfusion parameters are obtained via CT perfusion imaging. PMID:29531482

  19. Image-guided drainage versus antibiotic-only treatment of pelvic abscesses: short-term and long-term outcomes.

    PubMed

    To, Justin; Aldape, Diana; Frost, Andrei; Goldberg, Gary L; Levie, Mark; Chudnoff, Scott

    2014-10-01

    To determine the efficacy of image-guided drainage versus antibiotic-only treatment of pelvic abscesses. Retrospective cohort analysis. An academic, inner-city medical center. Women ages 11-49, admitted between 1998 and 2008 with ICD9 code 614.x (inflammatory diseases of ovary, fallopian tube, pelvic cellular tissue, and peritoneum). Medical records search, chart review, and phone survey. Surgical intervention. We identified 6,151 initial patients, of whom 240 patients met inclusion criteria. Of the included patients, 199 women received antibiotic-only treatment, and 41 received additional image-guided drainage. There was no statistically significant difference between the two groups in terms of age, body mass index, parity, incidence of diabetes, obesity, endometriosis, or history of sexually transmitted infection excluding human immunodeficiency virus (HIV). Abscesses in the drainage cohort were noted to be larger in dimension (5.9 cm vs. 8.5 cm); 16.1% of patients who received antibiotics alone required surgical intervention versus only 2.4% of the drainage cohort. Patients who received drainage had longer hospital stays, but the time from treatment to discharge was similar in both groups (7.4 days vs. 6.7 days). We successfully contacted 150 patients, and the differences in long-term pregnancy outcomes, pain, or infertility were not statistically significant. Patients who received antibiotics alone were more likely to require further surgical intervention when compared with patients who additionally received image-guided drainage. There were no observable long-term differences. Copyright © 2014 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  20. Single-pixel computational ghost imaging with helicity-dependent metasurface hologram.

    PubMed

    Liu, Hong-Chao; Yang, Biao; Guo, Qinghua; Shi, Jinhui; Guan, Chunying; Zheng, Guoxing; Mühlenbernd, Holger; Li, Guixin; Zentgraf, Thomas; Zhang, Shuang

    2017-09-01

    Different optical imaging techniques are based on different characteristics of light. By controlling the abrupt phase discontinuities with different polarized incident light, a metasurface can host a phase-only and helicity-dependent hologram. In contrast, ghost imaging (GI) is an indirect imaging modality to retrieve the object information from the correlation of the light intensity fluctuations. We report single-pixel computational GI with a high-efficiency reflective metasurface in both simulations and experiments. Playing a fascinating role in switching the GI target with different polarized light, the metasurface hologram generates helicity-dependent reconstructed ghost images and successfully introduces an additional security lock in a proposed optical encryption scheme based on the GI. The robustness of our encryption scheme is further verified with the vulnerability test. Building the first bridge between the metasurface hologram and the GI, our work paves the way to integrate their applications in the fields of optical communications, imaging technology, and security.
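
    Computational ghost imaging reconstructs the object by correlating known illumination patterns with the single-pixel (bucket) signal they produce. The sketch below illustrates that correlation reconstruction with numpy on a synthetic object; it does not model the metasurface hologram or the helicity-dependent switching, and the object, pattern count, and image size are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        H = W = 32
        n_patterns = 4000

        # Hidden object (a simple cross), observed only through single-pixel bucket signals.
        obj = np.zeros((H, W)); obj[14:18, :] = 1.0; obj[:, 14:18] = 1.0

        patterns = rng.random((n_patterns, H, W))       # known random illumination patterns
        bucket = (patterns * obj).sum(axis=(1, 2))      # single-pixel detector readings

        # Correlation reconstruction: G = <S_i * P_i> - <S> <P>
        ghost = (bucket[:, None, None] * patterns).mean(axis=0) \
                - bucket.mean() * patterns.mean(axis=0)
        print("correlation with the true object:",
              np.corrcoef(ghost.ravel(), obj.ravel())[0, 1])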

  1. Single-pixel computational ghost imaging with helicity-dependent metasurface hologram

    PubMed Central

    Liu, Hong-Chao; Yang, Biao; Guo, Qinghua; Shi, Jinhui; Guan, Chunying; Zheng, Guoxing; Mühlenbernd, Holger; Li, Guixin; Zentgraf, Thomas; Zhang, Shuang

    2017-01-01

    Different optical imaging techniques are based on different characteristics of light. By controlling the abrupt phase discontinuities with different polarized incident light, a metasurface can host a phase-only and helicity-dependent hologram. In contrast, ghost imaging (GI) is an indirect imaging modality to retrieve the object information from the correlation of the light intensity fluctuations. We report single-pixel computational GI with a high-efficiency reflective metasurface in both simulations and experiments. Playing a fascinating role in switching the GI target with different polarized light, the metasurface hologram generates helicity-dependent reconstructed ghost images and successfully introduces an additional security lock in a proposed optical encryption scheme based on the GI. The robustness of our encryption scheme is further verified with the vulnerability test. Building the first bridge between the metasurface hologram and the GI, our work paves the way to integrate their applications in the fields of optical communications, imaging technology, and security. PMID:28913433

  2. Computer-Based Image Analysis for Plus Disease Diagnosis in Retinopathy of Prematurity

    PubMed Central

    Wittenberg, Leah A.; Jonsson, Nina J.; Chan, RV Paul; Chiang, Michael F.

    2014-01-01

    Presence of plus disease in retinopathy of prematurity (ROP) is an important criterion for identifying treatment-requiring ROP. Plus disease is defined by a standard published photograph selected over 20 years ago by expert consensus. However, diagnosis of plus disease has been shown to be subjective and qualitative. Computer-based image analysis, using quantitative methods, has potential to improve the objectivity of plus disease diagnosis. The objective was to review the published literature involving computer-based image analysis for ROP diagnosis. The PubMed and Cochrane library databases were searched for the keywords “retinopathy of prematurity” AND “image analysis” AND/OR “plus disease.” Reference lists of retrieved articles were searched to identify additional relevant studies. All relevant English-language studies were reviewed. There are four main computer-based systems, ROPtool (AU ROC curve, plus tortuosity 0.95, plus dilation 0.87), RISA (AU ROC curve, arteriolar TI 0.71, venular diameter 0.82), Vessel Map (AU ROC curve, arteriolar dilation 0.75, venular dilation 0.96), and CAIAR (AU ROC curve, arteriole tortuosity 0.92, venular dilation 0.91), attempting to objectively analyze vessel tortuosity and dilation in plus disease in ROP. Some of them show promise for identification of plus disease using quantitative methods. This has potential to improve the diagnosis of plus disease, and may contribute to the management of ROP using both traditional binocular indirect ophthalmoscopy and image-based telemedicine approaches. PMID:21366159

  3. Quantum dots versus organic fluorophores in fluorescent deep-tissue imaging--merits and demerits.

    PubMed

    Bakalova, Rumiana; Zhelev, Zhivko; Gadjeva, Veselina

    2008-12-01

    The use of fluorescence in deep-tissue imaging has expanded rapidly in the last several years. Progress in fluorescent molecular probes and fluorescent imaging techniques gives an opportunity to detect single cells and even molecular targets in live organisms. Highly sensitive, high-speed fluorescent molecular sensors and detection devices allow the application of fluorescence in functional imaging. With the development of novel bright fluorophores based on nanotechnologies and 3D fluorescence scanners with high spatial and temporal resolution, fluorescent imaging has the potential to become an alternative to other non-invasive imaging techniques such as magnetic resonance imaging, positron emission tomography, X-ray, and computed tomography. Fluorescent imaging also has the potential to give a real map of human anatomy and physiology. The current review outlines the advantages of fluorescent nanoparticles over conventional organic dyes in deep-tissue imaging in vivo and defines the major requirements for the "perfect fluorophore". The analysis proceeds from the basic principles of fluorescence and the major characteristics of fluorophores, light-tissue interactions, and the major limitations of fluorescent deep-tissue imaging. The article is addressed to a broad readership, from specialists in this field to university students.

  4. Use of Computer Imaging in Rhinoplasty: A Survey of the Practices of Facial Plastic Surgeons.

    PubMed

    Singh, Prabhjyot; Pearlman, Steven

    2017-08-01

    The objective of this study was to quantify the use of computer imaging by facial plastic surgeons. AAFPRS facial plastic surgeons were surveyed about their use of computer imaging during rhinoplasty consultations. The survey collected information about surgeon demographics, practice settings, practice patterns, and rates of computer imaging (CI) for primary and revision rhinoplasty. For those surgeons who used CI, additional information was also collected, including who performed the imaging and whether the patient was given the morphed images after the consultation. A total of 238 out of 1200 (19.8%) facial plastic surgeons responded to the survey. Of those who responded, 195 surgeons (83%) were board certified by the American Board of Facial Plastic and Reconstructive Surgery (ABFPRS). The majority of respondents (150 surgeons, 63%) used CI during rhinoplasty consultation. Of the surgeons who use CI, 92% performed the image morphing themselves. Approximately two-thirds of surgeons who use CI gave their patient a printout of the morphed images after the consultation. CI is a frequently utilized tool for facial plastic surgeons during cosmetic consultations with patients. Based on the results of this study, it can be suggested that the majority of facial plastic surgeons who use CI do so for both primary and revision rhinoplasty. As more sophisticated systems become available, it is possible that utilization of CI modalities will increase. This provides the surgeon with further tools to use at his or her disposal during discussion of aesthetic surgery. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  5. Motion artifact detection in four-dimensional computed tomography images

    NASA Astrophysics Data System (ADS)

    Bouilhol, G.; Ayadi, M.; Pinho, R.; Rit, S.; Sarrut, D.

    2014-03-01

    Motion artifacts appear in four-dimensional computed tomography (4DCT) images because of suboptimal acquisition parameters or patient breathing irregularities. Frequency of motion artifacts is high and they may introduce errors in radiation therapy treatment planning. Motion artifact detection can be useful for image quality assessment and 4D reconstruction improvement but manual detection in many images is a tedious process. We propose a novel method to evaluate the quality of 4DCT images by automatic detection of motion artifacts. The method was used to evaluate the impact of the optimization of acquisition parameters on image quality at our institute. 4DCT images of 114 lung cancer patients were analyzed. Acquisitions were performed with a rotation period of 0.5 seconds and a pitch of 0.1 (74 patients) or 0.081 (40 patients). A sensitivity of 0.70 and a specificity of 0.97 were observed. End-exhale phases were less prone to motion artifacts. In phases where motion speed is high, the number of detected artifacts was systematically reduced with a pitch of 0.081 instead of 0.1 and the mean reduction was 0.79. The increase of the number of patients with no artifact detected was statistically significant for the 10%, 70% and 80% respiratory phases, indicating a substantial image quality improvement.

  6. Image calibration and registration in cone-beam computed tomogram for measuring the accuracy of computer-aided implant surgery

    NASA Astrophysics Data System (ADS)

    Lam, Walter Y. H.; Ngan, Henry Y. T.; Wat, Peter Y. P.; Luk, Henry W. K.; Goto, Tazuko K.; Pow, Edmond H. N.

    2015-02-01

    Medical radiography is the use of radiation to "see through" a human body without breaching its integrity (surface). With computed tomography (CT)/cone beam computed tomography (CBCT), three-dimensional (3D) imaging can be produced. These images not only facilitate disease diagnosis but also enable computer-aided surgical planning/navigation. In dentistry, the common method for transferring the virtual surgical plan to the patient (reality) is the use of a surgical stent, either with a preloaded plan (static), such as a channel, or with real-time surgical navigation (dynamic) after registration with fiducial markers (RF). This paper describes using the corner of a cube as a radiopaque fiducial marker on an acrylic (plastic) stent; this RF allows robust calibration and registration of Cartesian (x, y, z) coordinates for linking the patient (reality) and the imaging (virtuality), and hence the surgical plan can be transferred in either a static or a dynamic way. The accuracy of computer-aided implant surgery was measured with reference to these coordinates. In our preliminary model surgery, a dental implant was planned virtually and placed with a preloaded surgical guide. The deviation of the placed implant apex from the plan was x=+0.56 mm [more right], y=-0.05 mm [deeper], z=-0.26 mm [more lingual], which was within the clinically accepted 2 mm safety range. For comparison with the virtual plan, the physically placed implant was CT/CBCT scanned, and errors may be introduced in this step. The difference of the actual implant apex from the virtual apex was x=0.00 mm, y=+0.21 mm [shallower], z=-1.35 mm [more lingual], and this should be borne in mind when interpreting the results.
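
    The reported per-axis deviations can be summarized as a single 3D (Euclidean) distance between the planned and placed apex. A minimal numpy illustration follows; the planned apex is placed at an arbitrary origin, and only the deviation components come from the abstract.

        import numpy as np

        planned_apex = np.array([0.0, 0.0, 0.0])                        # arbitrary origin for the virtual plan (mm)
        placed_apex = planned_apex + np.array([+0.56, -0.05, -0.26])    # per-axis deviation reported in the abstract

        deviation = placed_apex - planned_apex
        print("per-axis deviation (mm):", deviation)
        print("3D deviation (mm): %.2f" % np.linalg.norm(deviation))    # ~0.62 mm, within the 2 mm margin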

  7. Angle-corrected imaging transcranial doppler sonography versus imaging and nonimaging transcranial doppler sonography in children with sickle cell disease.

    PubMed

    Krejza, J; Rudzinski, W; Pawlak, M A; Tomaszewski, M; Ichord, R; Kwiatkowski, J; Gor, D; Melhem, E R

    2007-09-01

    Nonimaging transcranial Doppler sonography (TCD) and imaging TCD (TCDI) are used for determination of the risk of stroke in children with sickle cell disease (SCD). The purpose was to compare angle-corrected TCDI, uncorrected TCDI, and TCD blood flow velocities in children with SCD. A total of 37 children (mean age, 7.8 ± 3.0 years) without intracranial arterial narrowing determined with MR angiography were studied with use of TCD and TCDI at the same session. Depth of insonation and TCDI mean velocities with and without correction for the angle of insonation in the terminal internal carotid artery (ICA) and middle (MCA), anterior (ACA), and posterior (PCA) cerebral arteries were compared with TCD velocities with use of a paired t test. Two arteries were not found on TCDI compared with 15 not found on TCD. Average angle of insonation in the MCA, ACA, ICA, and PCA was 31°, 44°, 25°, and 29°, respectively. TCDI and TCD mean depth of insonation for all arteries did not differ significantly; however, individual differences varied substantially. TCDI velocities were significantly lower than TCD velocities, respectively, for the right and left sides (mean ± SD): MCA, 106 ± 22 cm/s and 111 ± 33 cm/s versus 130 ± 19 cm/s and 134 ± 26 cm/s; ICA, 90 ± 14 cm/s and 98 ± 27 cm/s versus 117 ± 18 cm/s and 119 ± 23 cm/s; ACA, 74 ± 24 cm/s and 88 ± 25 cm/s versus 105 ± 23 cm/s and 105 ± 31 cm/s; and PCA, 84 ± 27 cm/s and 82 ± 21 cm/s versus 95 ± 23 cm/s and 94 ± 20 cm/s. TCD and angle-corrected TCDI velocities were not statistically different except for higher angle-corrected TCDI values in the left ACA and right PCA. TCD velocities are significantly higher than TCDI velocities but are not different from the angle-corrected TCDI velocities. TCDI identifies the major intracranial arteries more effectively than TCD.
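
    The angle correction referred to above follows the standard Doppler relation v_corrected = v_measured / cos(theta). A minimal sketch, using the mean uncorrected velocities and insonation angles quoted in the abstract as illustrative inputs:

    ```python
    import math

    def angle_correct(v_measured_cm_s, angle_deg):
        """Correct a measured Doppler velocity for the insonation angle."""
        return v_measured_cm_s / math.cos(math.radians(angle_deg))

    # Right-side mean uncorrected TCDI velocities and mean insonation angles from the abstract.
    for artery, v, angle in [("MCA", 106, 31), ("ACA", 74, 44), ("ICA", 90, 25), ("PCA", 84, 29)]:
        print(f"{artery}: uncorrected {v} cm/s -> angle-corrected {angle_correct(v, angle):.0f} cm/s")
    ```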

  8. Comparative use of the computer-aided angiography and rapid prototyping technology versus conventional imaging in the management of the Tile C pelvic fractures.

    PubMed

    Li, Baofeng; Chen, Bei; Zhang, Ying; Wang, Xinyu; Wang, Fei; Xia, Hong; Yin, Qingshui

    2016-01-01

    Computed tomography (CT) scan with three-dimensional (3D) reconstruction has been used to evaluate complex fractures in pre-operative planning. In this study, rapid prototyping of a life-size model based on 3D reconstructions including bone and vessel was applied to evaluate the feasibility and prospects of these new technologies in the surgical therapy of Tile C pelvic fractures by observing intra- and perioperative outcomes. The authors conducted a retrospective study on a group of 157 consecutive patients with Tile C pelvic fractures. Seventy-six patients were treated with conventional pre-operative preparation (group A) and 81 patients were treated with the help of computer-aided angiography and rapid prototyping technology (group B). Assessment of the two groups considered the following perioperative parameters: length of surgical procedure, intra-operative complications, intra- and postoperative blood loss, postoperative pain, postoperative nausea and vomiting (PONV), length of stay, and type of discharge. The two groups were homogeneous when compared in relation to mean age, sex, body weight, injury severity score, associated injuries and pelvic fracture severity score. Surgery in group B was completed in less time (105 ± 19 minutes vs. 122 ± 23 minutes) and with less blood loss (31.0 ± 8.2 g/L vs. 36.2 ± 7.4 g/L) compared with group A. Patients in group B experienced less pain (2.5 ± 2.3 NRS score vs. 2.8 ± 2.0 NRS score), and PONV affected only 8% versus 10% of cases. Time to discharge was shorter (7.8 ± 2.0 days vs. 10.2 ± 3.1 days) in group B, and most patients were discharged to home. In our study, patients with Tile C pelvic fractures treated with computer-aided angiography and rapid prototyping technology had a better perioperative outcome than patients treated with conventional pre-operative preparation. Further studies are necessary to investigate the advantages in terms of clinical results in the short and long term.

  9. Exploitation of realistic computational anthropomorphic phantoms for the optimization of nuclear imaging acquisition and processing protocols.

    PubMed

    Loudos, George K; Papadimitroulas, Panagiotis G; Kagadis, George C

    2014-01-01

    Monte Carlo (MC) simulations play a crucial role in nuclear medical imaging since they can provide the ground truth for clinical acquisitions by integrating and quantifying all physical parameters that affect image quality. Over the last decade, a number of realistic computational anthropomorphic models have been developed to serve imaging as well as other biomedical engineering applications. The combination of MC techniques with realistic computational phantoms can provide a powerful tool for pre- and post-processing in imaging, data analysis and dosimetry. This work aims to create a global database of simulated Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) exams; the methodology, as well as its first elements, is presented. Simulations are performed using the well-validated GATE open-source toolkit, standard anthropomorphic phantoms and activity distributions of various radiopharmaceuticals derived from the literature. The resulting images, projections and sinograms of each study are provided in the database and can be further exploited to evaluate processing and reconstruction algorithms. Patient studies with different characteristics are included in the database, and different computational phantoms were tested for the same acquisitions. These include the XCAT, Zubal and Virtual Family phantoms, some of which are used for the first time in nuclear imaging. The created database will be freely available, and our current work is directed toward its extension by simulating additional clinical pathologies.

  10. Flat holographic stereograms synthesized from computer-generated images by using LiNbO3 crystal

    NASA Astrophysics Data System (ADS)

    Qu, Zhi-Min; Liu, Jinsheng; Xu, Liangying

    1991-02-01

    In this paper we use a novel method for synthesizing computer-generated images in which, by means of a series of intermediate holograms recorded on Fe-doped LiNbO3 crystals, a high-quality flat stereogram with a wide viewing angle and considerable 3D image depth has been obtained. As is well known, conventional holography is very limited. With a continuous-wave laser, only stationary objects can be recorded owing to its insufficient power. Although some moving objects can be recorded with a pulsed laser, the dimensions and kinds of objects are restricted. If we wish to see an imaginary object or a three-dimensional image designed by computer, this is very difficult by means of conventional holography. Of course, if we have a two-dimensional image on a computer screen we can rotate it to give a three-dimensional perspective, but we can never really see it as a solid. Flat holographic stereograms synthesized from computer-generated images, however, allow one to see the computed results directly in the form of a 3D image. This will clearly have wide applications in design, architecture, medicine, education and the arts.

  11. Syringeless power injector versus dual-syringe power injector: economic evaluation of user performance, the impact on contrast enhanced computed tomography (CECT) workflow exams, and hospital costs.

    PubMed

    Colombo, Giorgio L; Andreis, Ivo A Bergamo; Di Matteo, Sergio; Bruno, Giacomo M; Mondellini, Claudio

    2013-01-01

    The utilization of diagnostic imaging has substantially increased over the past decade in Europe and North America and continues to grow worldwide. The purpose of this study was to develop an economic evaluation of a syringeless power injector (PI) versus a dual-syringe PI for contrast enhanced computed tomography (CECT) in a hospital setting. Patients (n=2379) were enrolled at the Legnano Hospital between November 2012 and January 2013. They had been referred to the hospital for a CECT analysis and were randomized into two groups. The first group was examined with a 256-MDCT (MultiDetector Computed Tomography) scanner using a syringeless power injector, while the other group was examined with a 64-MDCT scanner using a dual-syringe PI. Data on the operators' time required in the patient analysis steps as well as on the quantity of consumable materials used were collected. The radiologic technologists' satisfaction with the use of the PIs was rated on a 10-point scale. A budget impact analysis and sensitivity analysis were performed under the base-case scenario. A total of 1,040 patients were examined using the syringeless system, and 1,339 with the dual-syringe system; the CECT examination quality was comparable for both PI systems. Equipment preparation time and releasing time per examination for syringeless PIs versus dual-syringe PIs were 100±30 versus 180±30 seconds and 90±30 versus 140±20 seconds, respectively. On average, 10±3 mL of contrast media (CM) wastage per examination was observed with the dual-syringe PI and 0±1 mL with the syringeless PI. Technologists had higher satisfaction with the syringeless PI than with the dual-syringe system (8.8 versus 8.0). The syringeless PI allows a saving of about €6.18 per patient, both due to the lower cost of the devices and to the better performance of the syringeless system. The univariate sensitivity analysis carried out on the base-case results within the standard deviation range confirmed the saving generated

  12. Estimation of Noise Properties for TV-regularized Image Reconstruction in Computed Tomography

    PubMed Central

    Sánchez, Adrian A.

    2016-01-01

    A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR. PMID:26308968
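
    A hedged sketch of the empirical reference used for validation: a pixel variance map estimated from repeated noise realizations of a reconstructed image. The random stack below merely stands in for actual TV-penalized reconstructions, which the paper's method would approximate without brute-force repetition.

    ```python
    import numpy as np

    def variance_map(realizations):
        """realizations: (n_realizations, ny, nx) stack of reconstructed images."""
        return realizations.var(axis=0, ddof=1)   # per-pixel sample variance

    rng = np.random.default_rng(1)
    # Stand-in for 200 reconstructions of 128 x 128 images from independent noise realizations.
    recons = rng.normal(loc=100.0, scale=5.0, size=(200, 128, 128))
    var_map = variance_map(recons)
    print(var_map.shape, var_map.mean())
    ```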

  13. Automated egg grading system using computer vision: Investigation on weight measure versus shape parameters

    NASA Astrophysics Data System (ADS)

    Nasir, Ahmad Fakhri Ab; Suhaila Sabarudin, Siti; Majeed, Anwar P. P. Abdul; Ghani, Ahmad Shahrizan Abdul

    2018-04-01

    Chicken eggs are a food in high demand by humans. Human operators cannot work perfectly and continuously when conducting egg grading. Instead of grading eggs by weight measurement, an automatic grading system using computer vision (based on egg shape parameters) can be used to improve the productivity of egg grading. However, an early hypothesis indicated that more egg classes would change when using shape parameters than when using weight measurement. This paper presents a comparison of egg classification by the two above-mentioned methods. Firstly, 120 images of chicken eggs of various grades (A–D) produced in Malaysia are captured. Then, the egg images are processed using image pre-processing techniques, such as image cropping, smoothing and segmentation. Thereafter, eight egg shape features, including area, major axis length, minor axis length, volume, diameter and perimeter, are extracted. Lastly, feature selection (information gain ratio) and feature extraction (principal component analysis) are performed, and a k-nearest neighbour classifier is used in the classification process. Two labelling methods, namely supervised learning (using weight measurement as graded by the egg supplier) and unsupervised learning (using egg shape parameters as graded by ourselves), are used in the experiment. Clustering results reveal many changes in egg classes after performing shape-based grading. On average, the best recognition result using shape-based grading labels is 94.16%, versus 44.17% using weight-based labels. In conclusion, an automated egg grading system using computer vision performs better with shape-based features, since it works from images, whereas the weight parameter is more suitable for a weight-based grading system.
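
    A minimal sketch of the classification step described above, assuming scikit-learn is available: a k-nearest neighbour classifier cross-validated on extracted shape features. The feature matrix and grade labels are synthetic placeholders, not the study's data.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    X = rng.normal(size=(120, 8))        # 120 eggs x 8 shape features (area, axis lengths, volume, ...)
    y = rng.integers(0, 4, size=120)     # grades A-D encoded as 0-3 (placeholder labels)

    knn = KNeighborsClassifier(n_neighbors=3)
    scores = cross_val_score(knn, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"mean cross-validated accuracy: {scores.mean():.2%}")
    ```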

  14. Iliac screw fixation using computer-assisted computer tomographic image guidance: technical note.

    PubMed

    Shin, John H; Hoh, Daniel J; Kalfas, Iain H

    2012-03-01

    Iliac screw fixation is a powerful tool used by spine surgeons to achieve fusion across the lumbosacral junction for a number of indications, including deformity, tumor, and pseudarthrosis. Complications associated with screw placement are related to blind trajectory selection and excessive soft tissue dissection. To describe the technique of iliac screw fixation using computed tomographic (CT)-based image guidance. Intraoperative registration and verification of anatomic landmarks are performed with the use of a preoperatively acquired CT of the lumbosacral spine. With the navigation probe, the ideal starting point for screw placement is selected while visualizing the intended trajectory and target on a computer screen. Once the starting point is selected and marked with a burr, a drill guide is docked within this point and the navigation probe re-inserted, confirming the trajectory. The probe is then removed and the high-speed drill reinserted within the drill guide. Drilling is performed to a depth measured on the computer screen and a screw is placed. Confirmation of accurate placement of iliac screws can be performed with standard radiographs. CT-guided navigation allows for 3-dimensional visualization of the pelvis and minimizes complications associated with soft-tissue dissection and breach of the ilium during screw placement.

  15. Radio Synthesis Imaging - A High Performance Computing and Communications Project

    NASA Astrophysics Data System (ADS)

    Crutcher, Richard M.

    The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long

  16. Biomechanical Model for Computing Deformations for Whole-Body Image Registration: A Meshless Approach

    PubMed Central

    Li, Mao; Miller, Karol; Joldes, Grand Roman; Kikinis, Ron; Wittek, Adam

    2016-01-01

    Patient-specific biomechanical models have been advocated as a tool for predicting deformations of soft body organs/tissue for medical image registration (aligning two sets of images) when differences between the images are large. However, complex and irregular geometry of the body organs makes generation of patient-specific biomechanical models very time consuming. Meshless discretisation has been proposed to solve this challenge. However, applications so far have been limited to 2-D models and computing single organ deformations. In this study, 3-D comprehensive patient-specific non-linear biomechanical models implemented using Meshless Total Lagrangian Explicit Dynamics (MTLED) algorithms are applied to predict a 3-D deformation field for whole-body image registration. Unlike a conventional approach which requires dividing (segmenting) the image into non-overlapping constituents representing different organs/tissues, the mechanical properties are assigned using the Fuzzy C-Means (FCM) algorithm without the image segmentation. Verification indicates that the deformations predicted using the proposed meshless approach are for practical purposes the same as those obtained using the previously validated finite element models. To quantitatively evaluate the accuracy of the predicted deformations, we determined the spatial misalignment between the registered (i.e. source images warped using the predicted deformations) and target images by computing the edge-based Hausdorff distance. The Hausdorff distance-based evaluation determines that our meshless models led to successful registration of the vast majority of the image features. PMID:26791945
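
    A small sketch of the evaluation step described above, assuming SciPy is available: the symmetric Hausdorff distance between edge point sets extracted from the registered (warped source) and target images. The random point clouds below stand in for actual edge maps.

    ```python
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def hausdorff(points_a, points_b):
        """Symmetric Hausdorff distance between two (n, d) point sets."""
        return max(directed_hausdorff(points_a, points_b)[0],
                   directed_hausdorff(points_b, points_a)[0])

    rng = np.random.default_rng(3)
    registered_edges = rng.uniform(0, 100, size=(500, 3))   # (x, y, z) edge points from warped source
    target_edges     = rng.uniform(0, 100, size=(500, 3))   # edge points from target image
    print(f"Hausdorff distance: {hausdorff(registered_edges, target_edges):.2f} (coordinate units)")
    ```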

  17. Biomechanical model for computing deformations for whole-body image registration: A meshless approach.

    PubMed

    Li, Mao; Miller, Karol; Joldes, Grand Roman; Kikinis, Ron; Wittek, Adam

    2016-12-01

    Patient-specific biomechanical models have been advocated as a tool for predicting deformations of soft body organs/tissue for medical image registration (aligning two sets of images) when differences between the images are large. However, complex and irregular geometry of the body organs makes generation of patient-specific biomechanical models very time-consuming. Meshless discretisation has been proposed to solve this challenge. However, applications so far have been limited to 2D models and computing single organ deformations. In this study, 3D comprehensive patient-specific nonlinear biomechanical models implemented using meshless Total Lagrangian explicit dynamics algorithms are applied to predict a 3D deformation field for whole-body image registration. Unlike a conventional approach that requires dividing (segmenting) the image into non-overlapping constituents representing different organs/tissues, the mechanical properties are assigned using the fuzzy c-means algorithm without the image segmentation. Verification indicates that the deformations predicted using the proposed meshless approach are for practical purposes the same as those obtained using the previously validated finite element models. To quantitatively evaluate the accuracy of the predicted deformations, we determined the spatial misalignment between the registered (i.e. source images warped using the predicted deformations) and target images by computing the edge-based Hausdorff distance. The Hausdorff distance-based evaluation determines that our meshless models led to successful registration of the vast majority of the image features. Copyright © 2016 John Wiley & Sons, Ltd.

  18. Comparison of SeaWinds Backscatter Imaging Algorithms

    PubMed Central

    Long, David G.

    2017-01-01

    This paper compares the performance and tradeoffs of various backscatter imaging algorithms for the SeaWinds scatterometer when multiple passes over a target are available. Reconstruction methods are compared with conventional gridding algorithms. In particular, the performance and tradeoffs of conventional 'drop in the bucket' (DIB) gridding at the intrinsic sensor resolution are compared to high-spatial-resolution imaging algorithms such as fine-resolution DIB (fDIB) and the scatterometer image reconstruction (SIR) that generate enhanced-resolution backscatter images. Various options for each algorithm are explored, including both linear and dB computation. The effects of sampling density and reconstruction quality versus time are explored. Both simulated and actual data results are considered. The results demonstrate the effectiveness of high-resolution reconstruction using SIR as well as its limitations and the limitations of DIB and fDIB. PMID:28828143
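
    A minimal sketch of the 'drop in the bucket' (DIB) gridding baseline discussed above: every backscatter measurement is averaged into the grid cell containing its location. The grid spacing, measurement locations and sigma-0 values are illustrative assumptions, and the averaging is done in dB here purely for simplicity.

    ```python
    import numpy as np

    def dib_grid(lats, lons, sigma0, lat_edges, lon_edges):
        """Average measurements into the cells of a lat/lon grid (NaN where a cell is empty)."""
        total = np.zeros((len(lat_edges) - 1, len(lon_edges) - 1))
        count = np.zeros_like(total)
        i = np.digitize(lats, lat_edges) - 1
        j = np.digitize(lons, lon_edges) - 1
        valid = (i >= 0) & (i < total.shape[0]) & (j >= 0) & (j < total.shape[1])
        np.add.at(total, (i[valid], j[valid]), sigma0[valid])
        np.add.at(count, (i[valid], j[valid]), 1)
        with np.errstate(invalid="ignore"):
            return total / count

    rng = np.random.default_rng(4)
    lats, lons = rng.uniform(-5, 5, 10000), rng.uniform(-5, 5, 10000)
    sigma0 = rng.normal(-15, 2, 10000)   # backscatter in dB (illustrative)
    image = dib_grid(lats, lons, sigma0, np.linspace(-5, 5, 41), np.linspace(-5, 5, 41))
    print(image.shape)
    ```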

  19. Computer-aided diagnosis of liver tumors on computed tomography images.

    PubMed

    Chang, Chin-Chen; Chen, Hong-Hao; Chang, Yeun-Chung; Yang, Ming-Yang; Lo, Chung-Ming; Ko, Wei-Chun; Lee, Yee-Fan; Liu, Kao-Lang; Chang, Ruey-Feng

    2017-07-01

    Liver cancer is the tenth most common cancer in the USA, and its incidence has been increasing for several decades. Early detection, diagnosis, and treatment of the disease are very important. Computed tomography (CT) is one of the most common and robust imaging techniques for the detection of liver cancer. CT scanners can provide multiple-phase sequential scans of the whole liver. In this study, we proposed a computer-aided diagnosis (CAD) system to diagnose liver cancer using the features of tumors obtained from multiphase CT images. A total of 71 histologically-proven liver tumors including 49 benign and 22 malignant lesions were evaluated with the proposed CAD system to evaluate its performance. Tumors were identified by the user and then segmented using a region growing algorithm. After tumor segmentation, three kinds of features were obtained for each tumor, including texture, shape, and kinetic curve. The texture was quantified using three-dimensional (3-D) texture data of the tumor based on the grey level co-occurrence matrix (GLCM). Compactness, margin, and an elliptic model were used to describe the 3-D shape of the tumor. The kinetic curve was established from each phase of the tumor and represented as variations in density between phases. Backward elimination was used to select the best combination of features, and binary logistic regression analysis was used to classify the tumors with leave-one-out cross validation. The accuracy and sensitivity for the texture were 71.82% and 68.18%, respectively, which were better than for the shape and kinetic curve under closed specificity. Combining all of the features achieved the highest accuracy (58/71, 81.69%), sensitivity (18/22, 81.82%), and specificity (40/49, 81.63%). The Az value of combining all features was 0.8713. Combining texture, shape, and kinetic curve features may be able to differentiate benign from malignant tumors in the liver using our proposed CAD system. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Advanced Computer Image Generation Techniques Exploiting Perceptual Characteristics. Final Report.

    ERIC Educational Resources Information Center

    Stenger, Anthony J.; And Others

    This study suggests and identifies computer image generation (CIG) algorithms for visual simulation that improve the training effectiveness of CIG simulators and identifies areas of basic research in visual perception that are significant for improving CIG technology. The first phase of the project entailed observing three existing CIG simulators.…

  1. Technical Note: Image filtering to make computer-aided detection robust to image reconstruction kernel choice in lung cancer CT screening

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohkubo, Masaki, E-mail: mook@clg.niigata-u.ac.jp

    Purpose: In lung cancer computed tomography (CT) screening, the performance of a computer-aided detection (CAD) system depends on the selection of the image reconstruction kernel. To reduce this dependence on reconstruction kernels, the authors propose a novel application of an image filtering method previously proposed by their group. Methods: The proposed filtering process uses the ratio of modulation transfer functions (MTFs) of two reconstruction kernels as a filtering function in the spatial-frequency domain. This method is referred to as MTF_ratio filtering. Test image data were obtained from CT screening scans of 67 subjects who each had one nodule. Images were reconstructed using two kernels: f_STD (for standard lung imaging) and f_SHARP (for sharp edge-enhancement lung imaging). The MTF_ratio filtering was implemented using the MTFs measured for those kernels and was applied to the reconstructed f_SHARP images to obtain images that were similar to the f_STD images. A mean filter and a median filter were applied (separately) for comparison. All reconstructed and filtered images were processed using their prototype CAD system. Results: The MTF_ratio filtered images showed excellent agreement with the f_STD images. The standard deviation for the difference between these images was very small, ~6.0 Hounsfield units (HU). However, the mean and median filtered images showed larger differences of ~48.1 and ~57.9 HU from the f_STD images, respectively. The free-response receiver operating characteristic (FROC) curve for the f_SHARP images indicated poorer performance compared with the FROC curve for the f_STD images. The FROC curve for the MTF_ratio filtered images was equivalent to the curve for the f_STD images. However, this similarity was not achieved by using the mean filter or median filter. Conclusions: The accuracy of MTF_ratio image filtering was verified and the method
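
    A hedged sketch of MTF-ratio filtering as described above: the 2D spectrum of the sharp-kernel image is multiplied by MTF_STD(f)/MTF_SHARP(f) so that the filtered image approximates the standard-kernel image. The Gaussian MTF models and the random input image are illustrative stand-ins for measured MTFs and real CT slices.

    ```python
    import numpy as np

    def mtf_ratio_filter(img_sharp, mtf_std, mtf_sharp, eps=1e-6):
        """Filter a sharp-kernel image with the MTF ratio in the spatial-frequency domain."""
        spectrum = np.fft.fft2(img_sharp)
        ratio = mtf_std / np.maximum(mtf_sharp, eps)            # MTF_STD / MTF_SHARP on a centred grid
        return np.real(np.fft.ifft2(spectrum * np.fft.ifftshift(ratio)))

    n = 256
    fx = np.fft.fftshift(np.fft.fftfreq(n))
    FX, FY = np.meshgrid(fx, fx)
    freq = np.hypot(FX, FY)
    mtf_std, mtf_sharp = np.exp(-(freq / 0.15) ** 2), np.exp(-(freq / 0.30) ** 2)   # model MTFs

    rng = np.random.default_rng(5)
    img_sharp = rng.normal(0, 50, size=(n, n)) - 700.0   # stand-in for a lung CT slice (HU)
    img_std_like = mtf_ratio_filter(img_sharp, mtf_std, mtf_sharp)
    print(img_std_like.shape)
    ```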

  2. Computer-Based Image Analysis for Plus Disease Diagnosis in Retinopathy of Prematurity: Performance of the "i-ROP" System and Image Features Associated With Expert Diagnosis.

    PubMed

    Ataer-Cansizoglu, Esra; Bolon-Canedo, Veronica; Campbell, J Peter; Bozkurt, Alican; Erdogmus, Deniz; Kalpathy-Cramer, Jayashree; Patel, Samir; Jonas, Karyn; Chan, R V Paul; Ostmo, Susan; Chiang, Michael F

    2015-11-01

    We developed and evaluated the performance of a novel computer-based image analysis system for grading plus disease in retinopathy of prematurity (ROP), and identified the image features, shapes, and sizes that best correlate with expert diagnosis. A dataset of 77 wide-angle retinal images from infants screened for ROP was collected. A reference standard diagnosis was determined for each image by combining image grading from 3 experts with the clinical diagnosis from ophthalmoscopic examination. Manually segmented images were cropped into a range of shapes and sizes, and a computer algorithm was developed to extract tortuosity and dilation features from arteries and veins. Each feature was fed into our system to identify the set of characteristics that yielded the highest-performing system compared to the reference standard, which we refer to as the "i-ROP" system. Among the tested crop shapes, sizes, and measured features, point-based measurements of arterial and venous tortuosity (combined), and a large circular cropped image (with radius 6 times the disc diameter), provided the highest diagnostic accuracy. The i-ROP system achieved 95% accuracy for classifying preplus and plus disease compared to the reference standard. This was comparable to the performance of the 3 individual experts (96%, 94%, 92%), and significantly higher than the mean performance of 31 nonexperts (81%). This comprehensive analysis of computer-based plus disease suggests that it may be feasible to develop a fully-automated system based on wide-angle retinal images that performs comparably to expert graders at three-level plus disease discrimination. Computer-based image analysis, using objective and quantitative retinal vascular features, has potential to complement clinical ROP diagnosis by ophthalmologists.

  3. Computer-assisted detection of epileptiform focuses on SPECT images

    NASA Astrophysics Data System (ADS)

    Grzegorczyk, Dawid; Dunin-Wąsowicz, Dorota; Mulawka, Jan J.

    2010-09-01

    Epilepsy is a common nervous system disease, often related to consciousness disturbances and muscular spasm, which affects about 1% of the human population. Despite major technological advances in medicine in recent years, there has been insufficient progress toward overcoming it. The application of advanced statistical methods and computer image analysis offers hope for accurate detection, and later removal, of the epileptiform focuses that are the cause of some types of epilepsy. The aim of this work was to create a computer system that would help to find and diagnose disorders of blood circulation in the brain. This may be helpful for the diagnosis of the onset of epileptic seizures in the brain.

  4. Greater anterior insula activation during anticipation of food images in women recovered from anorexia nervosa versus controls

    PubMed Central

    Oberndorfer, Tyson; Simmons, Alan; McCurdy, Danyale; Strigo, Irina; Matthews, Scott; Yang, Tony; Irvine, Zoe; Kaye, Walter

    2013-01-01

    Individuals with anorexia nervosa (AN) restrict food consumption and become severely emaciated. Eating food, even thinking of eating food, is often associated with heightened anxiety. However, food cue anticipation in AN is poorly understood. Fourteen women recovered from AN and 12 matched healthy control women performed an anticipation task, viewing food and object images during functional magnetic resonance imaging. Comparing anticipation of food versus object images between control women and recovered AN groups showed a significant interaction only in the right ventral anterior insula, with greater activation in recovered AN anticipating food images. These data support the hypothesis of a disconnect between anticipating and experiencing food stimuli in recovered AN. Insula activation positively correlated with pleasantness ratings of palatable foods in control women, while no such relationship existed in recovered AN, which is further evidence of altered interoceptive function. Finally, these findings raise the possibility that enhanced anterior insula anticipatory response to food cues in recovered AN could contribute to exaggerated sensitivity and anxiety related to food and eating. PMID:23993362

  5. Local reconstruction in computed tomography of diffraction enhanced imaging

    NASA Astrophysics Data System (ADS)

    Huang, Zhi-Feng; Zhang, Li; Kang, Ke-Jun; Chen, Zhi-Qiang; Zhu, Pei-Ping; Yuan, Qing-Xi; Huang, Wan-Xia

    2007-07-01

    Computed tomography of diffraction enhanced imaging (DEI-CT) based on a synchrotron radiation source has extremely high sensitivity for weakly absorbing low-Z samples in medical and biological fields. The authors propose a modified backprojection filtration (BPF)-type algorithm based on PI-line segments to reconstruct a region of interest from truncated refraction-angle projection data in DEI-CT. The distribution of the refractive index decrement in the sample can be directly estimated from its reconstruction images, which has been proved by experiments at the Beijing Synchrotron Radiation Facility. The algorithm paves the way for local reconstruction of large-size samples by the use of DEI-CT with a small field of view based on a synchrotron radiation source.

  6. Empiric versus imaging guided left ventricular lead placement in cardiac resynchronization therapy (ImagingCRT): study protocol for a randomized controlled trial.

    PubMed

    Sommer, Anders; Kronborg, Mads Brix; Poulsen, Steen Hvitfeldt; Böttcher, Morten; Nørgaard, Bjarne Linde; Bouchelouche, Kirsten; Mortensen, Peter Thomas; Gerdes, Christian; Nielsen, Jens Cosedis

    2013-04-26

    Cardiac resynchronization therapy (CRT) is an established treatment in heart failure patients. However, a large proportion of patients remain nonresponsive to this pacing strategy. Left ventricular (LV) lead position is one of the main determinants of response to CRT. This study aims to clarify whether multimodality imaging guided LV lead placement improves clinical outcome after CRT. The ImagingCRT study is a prospective, randomized, patient- and assessor-blinded, two-armed trial. The study is designed to investigate the effect of imaging guided left ventricular lead positioning on a clinical composite primary endpoint comprising all-cause mortality, hospitalization for heart failure, or unchanged or worsened functional capacity (no improvement in New York Heart Association class and <10% improvement in six-minute-walk test). Imaging guided LV lead positioning is targeted to the latest activated non-scarred myocardial region by speckle tracking echocardiography, single-photon emission computed tomography, and cardiac computed tomography. Secondary endpoints include changes in LV dimensions, ejection fraction and dyssynchrony. A total of 192 patients are included in the study. Despite tremendous advances in knowledge with CRT, the proportion of patients not responding to this treatment has remained stable since the introduction of CRT. ImagingCRT is a prospective, randomized study assessing the clinical and echocardiographic effect of multimodality imaging guided LV lead placement in CRT. The results are expected to make an important contribution in the pursuit of increasing response rate to CRT. Clinicaltrials.gov identifier NCT01323686. The trial was registered March 25, 2011 and the first study subject was randomized April 11, 2011.

  7. Accelerating statistical image reconstruction algorithms for fan-beam x-ray CT using cloud computing

    NASA Astrophysics Data System (ADS)

    Srivastava, Somesh; Rao, A. Ravishankar; Sheinin, Vadim

    2011-03-01

    Statistical image reconstruction algorithms potentially offer many advantages to x-ray computed tomography (CT), e.g. lower radiation dose. But, their adoption in practical CT scanners requires extra computation power, which is traditionally provided by incorporating additional computing hardware (e.g. CPU-clusters, GPUs, FPGAs etc.) into a scanner. An alternative solution is to access the required computation power over the internet from a cloud computing service, which is orders-of-magnitude more cost-effective. This is because users only pay a small pay-as-you-go fee for the computation resources used (i.e. CPU time, storage etc.), and completely avoid purchase, maintenance and upgrade costs. In this paper, we investigate the benefits and shortcomings of using cloud computing for statistical image reconstruction. We parallelized the most time-consuming parts of our application, the forward and back projectors, using MapReduce, the standard parallelization library on clouds. From preliminary investigations, we found that a large speedup is possible at a very low cost. But, communication overheads inside MapReduce can limit the maximum speedup, and a better MapReduce implementation might become necessary in the future. All the experiments for this paper, including development and testing, were completed on the Amazon Elastic Compute Cloud (EC2) for less than $20.

  8. Image quality of low-dose CCTA in obese patients: impact of high-definition computed tomography and adaptive statistical iterative reconstruction.

    PubMed

    Gebhard, Cathérine; Fuchs, Tobias A; Fiechter, Michael; Stehli, Julia; Stähli, Barbara E; Gaemperli, Oliver; Kaufmann, Philipp A

    2013-10-01

    The accuracy of coronary computed tomography angiography (CCTA) in obese persons is compromised by increased image noise. We investigated CCTA image quality acquired on a high-definition 64-slice CT scanner using modern adaptive statistical iterative reconstruction (ASIR). Seventy overweight and obese patients (24 males; mean age 57 years, mean body mass index 33 kg/m(2)) were studied with clinically-indicated contrast enhanced CCTA. Thirty-five patients underwent a standard definition protocol with filtered backprojection reconstruction (SD-FBP) while 35 patients matched for gender, age, body mass index and coronary artery calcifications underwent a novel high definition protocol with ASIR (HD-ASIR). Segment by segment image quality was assessed using a four-point scale (1 = excellent, 2 = good, 3 = moderate, 4 = non-diagnostic) and revealed better scores for HD-ASIR compared to SD-FBP (1.5 ± 0.43 vs. 1.8 ± 0.48; p < 0.05). The smallest detectable vessel diameter was also improved, 1.0 ± 0.5 mm for HD-ASIR as compared to 1.4 ± 0.4 mm for SD-FBP (p < 0.001). Average vessel attenuation was higher for HD-ASIR (388.3 ± 109.6 versus 350.6 ± 90.3 Hounsfield Units, HU; p < 0.05), while image noise, signal-to-noise ratio and contrast-to-noise ratio did not differ significantly between reconstruction protocols (p = NS). The estimated effective radiation doses were similar, 2.3 ± 0.1 and 2.5 ± 0.1 mSv (HD-ASIR vs. SD-FBP, respectively). Compared to a standard definition backprojection protocol (SD-FBP), a newer high definition scan protocol in combination with ASIR (HD-ASIR) incrementally improved image quality and visualization of distal coronary artery segments in overweight and obese individuals, without increasing image noise and radiation dose.

  9. Fast polyenergetic forward projection for image formation using OpenCL on a heterogeneous parallel computing platform.

    PubMed

    Zhou, Lili; Clifford Chao, K S; Chang, Jenghwa

    2012-11-01

    Simulated projection images of digital phantoms constructed from CT scans have been widely used for clinical and research applications but their quality and computation speed are not optimal for real-time comparison with the radiography acquired with an x-ray source of different energies. In this paper, the authors performed polyenergetic forward projections using open computing language (OpenCL) in a parallel computing ecosystem consisting of CPU and general purpose graphics processing unit (GPGPU) for fast and realistic image formation. The proposed polyenergetic forward projection uses a lookup table containing the NIST published mass attenuation coefficients (μ/ρ) for different tissue types and photon energies ranging from 1 keV to 20 MeV. The CT images of interested sites are first segmented into different tissue types based on the CT numbers and converted to a three-dimensional attenuation phantom by linking each voxel to the corresponding tissue type in the lookup table. The x-ray source can be a radioisotope or an x-ray generator with a known spectrum described as weight w(n) for energy bin E(n). The Siddon method is used to compute the x-ray transmission line integral for E(n) and the x-ray fluence is the weighted sum of the exponential of line integral for all energy bins with added Poisson noise. To validate this method, a digital head and neck phantom constructed from the CT scan of a Rando head phantom was segmented into three (air, gray/white matter, and bone) regions for calculating the polyenergetic projection images for the Mohan 4 MV energy spectrum. To accelerate the calculation, the authors partitioned the workloads using the task parallelism and data parallelism and scheduled them in a parallel computing ecosystem consisting of CPU and GPGPU (NVIDIA Tesla C2050) using OpenCL only. The authors explored the task overlapping strategy and the sequential method for generating the first and subsequent DRRs. A dispatcher was designed to drive
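
    A minimal sketch of the polyenergetic fluence model described above: for each detector ray, the fluence is the spectrum-weighted sum of exp(-line integral) over energy bins, with Poisson noise added. The line integrals (computed in the paper with the Siddon method), spectrum weights and photon count below are illustrative assumptions.

    ```python
    import numpy as np

    def polyenergetic_projection(line_integrals, weights, photons_per_ray=1e5, rng=None):
        """line_integrals: (n_rays, n_energy_bins); weights: (n_energy_bins,) spectrum weights w(n)."""
        rng = rng or np.random.default_rng()
        fluence = (weights[None, :] * np.exp(-line_integrals)).sum(axis=1)   # weighted Beer-Lambert sum
        return rng.poisson(photons_per_ray * fluence)                        # Poisson-noised photon counts

    rng = np.random.default_rng(6)
    line_integrals = rng.uniform(0.0, 4.0, size=(1000, 20))   # placeholder per-ray, per-bin integrals
    weights = np.ones(20) / 20                                 # flat 20-bin spectrum (placeholder)
    counts = polyenergetic_projection(line_integrals, weights, rng=rng)
    print(counts[:5])
    ```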

  10. Securing SIFT: Privacy-preserving Outsourcing Computation of Feature Extractions Over Encrypted Image Data.

    PubMed

    Hu, Shengshan; Wang, Qian; Wang, Jingjun; Qin, Zhan; Ren, Kui

    2016-05-13

    Advances in cloud computing have greatly motivated data owners to outsource their huge amount of personal multimedia data and/or computationally expensive tasks onto the cloud by leveraging its abundant resources for cost saving and flexibility. Despite the tremendous benefits, the outsourced multimedia data and its originated applications may reveal the data owner's private information, such as the personal identity, locations or even financial profiles. This observation has recently aroused new research interest on privacy-preserving computations over outsourced multimedia data. In this paper, we propose an effective and practical privacy-preserving computation outsourcing protocol for the prevailing scale-invariant feature transform (SIFT) over massive encrypted image data. We first show that previous solutions to this problem have either efficiency/security or practicality issues, and none can well preserve the important characteristics of the original SIFT in terms of distinctiveness and robustness. We then present a new scheme design that achieves efficiency and security requirements simultaneously with the preservation of its key characteristics, by randomly splitting the original image data, designing two novel efficient protocols for secure multiplication and comparison, and carefully distributing the feature extraction computations onto two independent cloud servers. We both carefully analyze and extensively evaluate the security and effectiveness of our design. The results show that our solution is practically secure, outperforms the state-of-the-art, and performs comparably to the original SIFT in terms of various characteristics, including rotation invariance, image scale invariance, robust matching across affine distortion, addition of noise and change in 3D viewpoint and illumination.

  11. SecSIFT: Privacy-preserving Outsourcing Computation of Feature Extractions Over Encrypted Image Data.

    PubMed

    Hu, Shengshan; Wang, Qian; Wang, Jingjun; Qin, Zhan; Ren, Kui

    2016-05-13

    Advances in cloud computing have greatly motivated data owners to outsource their huge amount of personal multimedia data and/or computationally expensive tasks onto the cloud by leveraging its abundant resources for cost saving and flexibility. Despite the tremendous benefits, the outsourced multimedia data and its originated applications may reveal the data owner's private information, such as the personal identity, locations or even financial profiles. This observation has recently aroused new research interest on privacy-preserving computations over outsourced multimedia data. In this paper, we propose an effective and practical privacy-preserving computation outsourcing protocol for the prevailing scale-invariant feature transform (SIFT) over massive encrypted image data. We first show that previous solutions to this problem have either efficiency/security or practicality issues, and none can well preserve the important characteristics of the original SIFT in terms of distinctiveness and robustness. We then present a new scheme design that achieves efficiency and security requirements simultaneously with the preservation of its key characteristics, by randomly splitting the original image data, designing two novel efficient protocols for secure multiplication and comparison, and carefully distributing the feature extraction computations onto two independent cloud servers. We both carefully analyze and extensively evaluate the security and effectiveness of our design. The results show that our solution is practically secure, outperforms the state-of-the-art, and performs comparably to the original SIFT in terms of various characteristics, including rotation invariance, image scale invariance, robust matching across affine distortion, addition of noise and change in 3D viewpoint and illumination.

  12. Proposal for new diagnostic criteria for low skeletal muscle mass based on computed tomography imaging in Asian adults.

    PubMed

    Hamaguchi, Yuhei; Kaido, Toshimi; Okumura, Shinya; Kobayashi, Atsushi; Hammad, Ahmed; Tamai, Yumiko; Inagaki, Nobuya; Uemoto, Shinji

    2016-01-01

    Low skeletal muscle, referred to as sarcopenia, has been shown to be an independent predictor of lower overall survival in various kinds of diseases. Several studies have evaluated the low skeletal muscle mass using computed tomography (CT) imaging. However, the cutoff values based on CT imaging remain undetermined in Asian populations. Preoperative plain CT imaging at the third lumbar vertebrae level was used to measure the psoas muscle mass index (PMI, cm(2)/m(2)) in 541 adult donors for living donor liver transplantation (LDLT). We analyzed PMI distribution according to sex or donor age, and determined the sex-specific cutoff values of PMI to define low skeletal muscle mass. PMI in men was significantly higher than observed in women (8.85 ± 1.61 cm(2)/m(2) versus 5.77 ± 1.21 cm(2)/m(2); P < 0.001). PMI was significantly lower in individuals ≥50 y than in younger donors in both men and women (P < 0.001 and P < 0.001, respectively). On the basis of the younger donor data, we determined the sex-specific cutoff values for the low skeletal muscle mass were 6.36 cm(2)/m(2) for men and 3.92 cm(2)/m(2) for women (mean - 2 SD). Data from healthy young Asian adults were used to establish new criteria for low skeletal muscle mass that would be applicable for defining sarcopenia in Asian populations. Copyright © 2016 Elsevier Inc. All rights reserved.
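
    A minimal sketch of the cutoff definition used above: low skeletal muscle mass is defined as a PMI below (mean - 2 SD) of the young donors, computed per sex. The synthetic PMI samples below are illustrative only (drawn from the overall means quoted in the abstract, not the young-donor subset), so the printed cutoffs will not reproduce the published values.

    ```python
    import numpy as np

    def pmi_cutoff(pmi_young_donors):
        """Low skeletal muscle mass cutoff: mean - 2 SD of young-donor PMI values."""
        return float(np.mean(pmi_young_donors) - 2 * np.std(pmi_young_donors, ddof=1))

    rng = np.random.default_rng(7)
    young_men   = rng.normal(8.85, 1.61, 300)    # placeholder PMI values (cm^2/m^2)
    young_women = rng.normal(5.77, 1.21, 300)
    print(f"male cutoff ~{pmi_cutoff(young_men):.2f}, female cutoff ~{pmi_cutoff(young_women):.2f} cm^2/m^2")
    ```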

  13. Photoacoustic Imaging for Differential Diagnosis of Benign Polyps versus Malignant Polyps of the Gallbladder: A Preliminary Study

    PubMed Central

    Chae, Hee-Dong; Jang, Jin-Young; Chang, Jin Ho; Kang, Jeeun; Kang, Mee Joo; Han, Joon Koo

    2017-01-01

    Objective To investigate the feasibility of ex vivo multispectral photoacoustic (PA) imaging in differentiating cholesterol versus neoplastic polyps, and benign versus malignant polyps, of the gallbladder. Materials and Methods A total of 38 surgically confirmed gallbladder polyps (24 cholesterol polyps, 4 adenomas, and 10 adenocarcinomas) from 38 patients were prospectively included in this study. The surgical specimens were set on a gel pad immersed in a saline-filled container. The PA intensities of polyps were then measured, using two separate wavelength intervals (421–647 nm and 692–917 nm). Mann-Whitney U test was performed for the comparison of normalized PA intensities between the cholesterol and neoplastic polyps, and between the benign and malignant polyps. Kruskal-Wallis test was conducted for the comparison of normalized PA intensities among the cholesterol polyps, adenomas, and adenocarcinomas. Results A significant difference was observed in the normalized PA intensities between the cholesterol and neoplastic polyps at 459 nm (median, 1.00 vs. 0.73; p = 0.032). Comparing the benign and malignant polyps, there were significant differences in the normalized PA intensities at 765 nm (median, 0.67 vs. 0.78; p = 0.013), 787 nm (median, 0.65 vs. 0.77; p = 0.034), and 853 nm (median, 0.59 vs. 0.85; p = 0.028). The comparison of the normalized PA intensities among cholesterol polyps, adenomas, and adenocarcinomas demonstrated marginally significant differences at 765 nm (median, 0.67 vs. 0.66 vs. 0.78, respectively; p = 0.049). Conclusion These preliminary results indicate that benign versus malignant gallbladder polyps might exhibit different spectral patterns on multispectral PA imaging. PMID:28860899
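
    A minimal sketch of the statistical comparison described above, assuming SciPy is available: a Mann-Whitney U test on normalized PA intensities of cholesterol versus neoplastic polyps at a single wavelength. The intensity values are synthetic placeholders, not the study's measurements.

    ```python
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(8)
    cholesterol_459nm = rng.normal(1.00, 0.15, 24)    # 24 cholesterol polyps (placeholder intensities)
    neoplastic_459nm  = rng.normal(0.73, 0.15, 14)    # 4 adenomas + 10 adenocarcinomas (placeholder)

    stat, p = mannwhitneyu(cholesterol_459nm, neoplastic_459nm, alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p:.3f}")
    ```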

  14. Compensation for Transport Delays Produced by Computer Image Generation Systems. Cooperative Training Series.

    ERIC Educational Resources Information Center

    Ricard, G. L.; And Others

    The cooperative Navy/Air Force project described is aimed at the problem of image-flutter encountered when visual displays that present computer generated images are used for the simulation of certain flying situations. Two experiments are described which extend laboratory work on delay compensation schemes to the simulation of formation flight in…

  15. Computer-Controlled Image Anaysis of Solid Propellant Combustion Holograms Using a Quantimet 720 and a PDP-11.

    DTIC Science & Technology

    1985-09-01

    Naval Postgraduate School master's thesis by Marvin Philip Shook (Monterey, California, September 1985): Computer-Controlled Image Analysis of Solid Propellant Combustion Holograms Using a Quantimet 720 and a PDP-11.

  16. Efficacy of 67 gallium ECT imaging in lymphoma, infection, and lung carcinoma: A comparison with planar imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harwood, S.J.; Anderson, M.W.; Klein, R.C.

    1984-01-01

    Emission computed tomography (ECT) studies were performed on a GE 400 A/T camera and ADAC computers (system 3 and system 3300). Thirty-three sets of ECT and planar images were obtained in 20 patients over a six month period. Imaging was performed 48 hours after the intravenous administration of 5 mCi of gallium-67 citrate. No bowel preparation was employed. Comparison is made of the initial nuclear medicine report derived from planar and ECT imaging aided by clinical knowledge versus the consensus opinion of two nuclear medicine physicians reading the planar images along with minimal clinical information. The lymphoma series consists of 18 scans in 10 patients. There were 5 scans in which a false negative planar interpretation was changed to a true positive ECT interpretation. Sensitivity of planar imaging for lymphoma was 58% which rose to 100% with addition of ECT information. There were no false positives by either technique. There were 5 sets of scans in 5 lung carcinoma patients. Sensitivity of the planar images was 60% because of 2 false negative results. Sensitivity of the ECT technique was 100%. There were no false positives. The infection series consists of 10 scans in 5 patients. Sensitivity of ECT was 100%, sensitivity of planar was 66%. There was 1 false positive planar interpretation. For the total series the accuracy of planar imaging was 69% and the predictive value of a negative planar interpretation was 44%. Corresponding values for ECT imaging were 100%. The authors' experience demonstrates a significant increase in sensitivity without loss of specificity resulting from the use of emission computed tomography in both chest and abdomen in patients with lymphoma, infection, and lung cancer.
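
    A small sketch of how the quoted figures relate to a 2x2 tabulation: sensitivity, specificity, accuracy and negative predictive value from true/false positive and negative counts. The example counts are illustrative, not the study's exact tabulation.

    ```python
    def diagnostic_metrics(tp, fp, tn, fn):
        """Standard diagnostic performance measures from a 2x2 contingency table."""
        return {
            "sensitivity": tp / (tp + fn) if tp + fn else None,
            "specificity": tn / (tn + fp) if tn + fp else None,
            "accuracy": (tp + tn) / (tp + fp + tn + fn),
            "negative_predictive_value": tn / (tn + fn) if tn + fn else None,
        }

    # Illustrative counts for a planar-imaging reading (placeholders).
    print(diagnostic_metrics(tp=14, fp=1, tn=8, fn=10))
    ```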

  17. Using stroboscopic flow imaging to validate large-scale computational fluid dynamics simulations

    NASA Astrophysics Data System (ADS)

    Laurence, Ted A.; Ly, Sonny; Fong, Erika; Shusteff, Maxim; Randles, Amanda; Gounley, John; Draeger, Erik

    2017-02-01

    The utility and accuracy of computational modeling often require direct validation against experimental measurements. The work presented here is motivated by taking a combined experimental and computational approach to determine the ability of large-scale computational fluid dynamics (CFD) simulations to understand and predict the dynamics of circulating tumor cells in clinically relevant environments. We use stroboscopic light sheet fluorescence imaging to track the paths and measure the velocities of fluorescent microspheres throughout a human aorta model. Performed over complex physiologically realistic 3D geometries, large data sets are acquired with microscopic resolution over macroscopic distances.

  18. Free and open-source software application for the evaluation of coronary computed tomography angiography images.

    PubMed

    Hadlich, Marcelo Souza; Oliveira, Gláucia Maria Moraes; Feijóo, Raúl A; Azevedo, Clerio F; Tura, Bernardo Rangel; Ziemer, Paulo Gustavo Portela; Blanco, Pablo Javier; Pina, Gustavo; Meira, Márcio; Souza e Silva, Nelson Albuquerque de

    2012-10-01

    Images used in medicine were standardized in 1993 through the DICOM (Digital Imaging and Communications in Medicine) standard. Many types of exams use this standard, and it is increasingly necessary to design software applications capable of handling this type of image; however, these software applications are not usually free and open-source, and this fact hinders their adjustment to the most diverse interests. To develop and validate a free and open-source software application capable of handling DICOM coronary computed tomography angiography images. We developed and tested the ImageLab software in the evaluation of 100 tests randomly selected from a database. We carried out 600 evaluations divided between two observers using ImageLab and another software application sold with Philips Brilliance computed tomography scanners, in the evaluation of coronary lesions and plaques around the left main coronary artery (LMCA) and the anterior descending artery (ADA). To evaluate intraobserver, interobserver and intersoftware agreement, we used simple agreement and kappa statistics. The agreement observed between software applications was generally classified as substantial or almost perfect in most comparisons. The ImageLab software agreed with the Philips software in the evaluation of coronary computed tomography angiography tests, especially in patients without lesions, with lesions < 50% in the LMCA and < 70% in the ADA. The agreement for lesions > 70% in the ADA was lower, but this is also observed when the anatomical reference standard is used.
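
    A minimal sketch of the agreement statistic used above, assuming scikit-learn is available: Cohen's kappa between two categorical readings of the same exams (for example, ImageLab versus the vendor software). The gradings below are invented placeholders.

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Placeholder lesion gradings of the same eight exams by two software readings.
    reading_imagelab = ["none", "<50%", "none", ">70%", "<50%", "none", ">70%", "none"]
    reading_vendor   = ["none", "<50%", "<50%", ">70%", "<50%", "none", ">70%", "none"]

    kappa = cohen_kappa_score(reading_imagelab, reading_vendor)
    print(f"kappa = {kappa:.2f}")
    ```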

  19. Aerosolized intranasal midazolam for safe and effective sedation for quality computed tomography imaging in infants and children.

    PubMed

    Mekitarian Filho, Eduardo; de Carvalho, Werther Brunow; Gilio, Alfredo Elias; Robinson, Fay; Mason, Keira P

    2013-10-01

    This pilot study introduces the aerosolized route for midazolam as an option for infant and pediatric sedation for computed tomography imaging. This technique produced predictable and effective sedation for quality computed tomography imaging studies with minimal artifact and no significant adverse events. Copyright © 2013 Mosby, Inc. All rights reserved.

  20. A comparison of chemoembolization endpoints using angiographic versus transcatheter intraarterial perfusion/MR imaging monitoring.

    PubMed

    Lewandowski, Robert J; Wang, Dingxin; Gehl, James; Atassi, Bassel; Ryu, Robert K; Sato, Kent; Nemcek, Albert A; Miller, Frank H; Mulcahy, Mary F; Kulik, Laura; Larson, Andrew C; Salem, Riad; Omary, Reed A

    2007-10-01

    Transcatheter arterial chemoembolization (TACE) is an established treatment for unresectable liver cancer. This study was conducted to test the hypothesis that angiographic endpoints during TACE are measurable and reproducible by comparing subjective angiographic versus objective magnetic resonance (MR) endpoints of TACE. The study included 12 consecutive patients who presented for TACE for surgically unresectable HCC or progressive hepatic metastases despite chemotherapy. All procedures were performed with a dedicated imaging system. Angiographic series before and after TACE were reviewed independently by three board-certified interventional radiologists. A subjective angiographic chemoembolization endpoint (SACE) classification scheme, modified from an established angiographic grading system in the cardiology literature, was designed to assist in reproducibly classifying angiographic endpoints. Reproducibility in SACE classification level was compared among operators, and MR imaging perfusion reduction was compared with SACE levels for each observer. Twelve patients successfully underwent 15 separate TACE sessions. SACE levels ranged from I through IV. There was moderate agreement in SACE classification (kappa = 0.46 +/- 0.12). There was no correlation between SACE level and MR perfusion reduction (r = 0.16 for one operator and 0.02 for the other two). Angiographic endpoints during TACE vary widely, have moderate reproducibility among operators, and do not correlate with functional MR imaging perfusion endpoints. Future research should aim to determine ideal angiographic and functional MR imaging endpoints for TACE according to outcome measures such as imaging response, pathologic response, and survival.

  1. Male body image in Taiwan versus the West: Yanggang Zhiqi meets the Adonis complex.

    PubMed

    Yang, Chi-Fu Jeffrey; Gray, Peter; Pope, Harrison G

    2005-02-01

    Body image disorders appear to be more prevalent in Western than non-Western men. Previous studies by the authors have shown that young Western men display unrealistic body ideals and that Western advertising seems to place an increasing value on the male body. The authors hypothesized that Taiwanese men would exhibit less dissatisfaction with their bodies than Western men and that Taiwanese advertising would place less value on the male body than Western media. The authors administered a computerized test of body image to 55 heterosexual men in Taiwan and compared the results to those previously obtained in an identical study in the United States and Europe. Second, they counted the number of undressed male and female models in American versus Taiwanese women's magazine advertisements. In the body image study, the Taiwanese men exhibited significantly less body dissatisfaction than their Western counterparts. In the magazine study, American magazine advertisements portrayed undressed Western men frequently, but Taiwanese magazines portrayed undressed Asian men rarely. Taiwan appears less preoccupied with male body image than Western societies. This difference may reflect 1) Western traditions emphasizing muscularity and fitness as a measure of masculinity, 2) increasing exposure of Western men to muscular male bodies in media images, and 3) greater decline in traditional male roles in the West, leading to greater emphasis on the body as a measure of masculinity. These factors may explain why body dysmorphic disorder and anabolic steroid abuse are more serious problems in the West than in Taiwan.

  2. Using Computer Vision Techniques to Locate Objects in an Image

    DTIC Science & Technology

    1988-09-01

    Technical report: "Using Computer Vision Techniques to Locate Objects in an Image," by Sujata Kakarla, J. Wakeley, and A. S. Maida, Applied Research Laboratory, The Pennsylvania State University, P.O. Box 30, State College, PA 16804.

  3. Advanced imaging in acute stroke management-Part I: Computed tomographic.

    PubMed

    Saini, Monica; Butcher, Ken

    2009-01-01

    Neuroimaging is fundamental to stroke diagnosis and management. Non-contrast computed tomography (NCCT) has been the primary imaging modality utilized for this purpose for almost four decades. Although NCCT does permit identification of intracranial hemorrhage and parenchymal ischemic changes, insights into blood vessel patency and cerebral perfusion are limited. Advances in reperfusion strategies have made identification of potentially salvageable brain tissue a more practical concern. Advances in CT technology now permit identification of acute and chronic arterial lesions, as well as cerebral blood flow deficits. This review outlines principles of advanced CT image acquisition and its utility in acute stroke management.

  4. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    PubMed

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the
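
    The region-of-interest idea above can be illustrated with a generic iterative restoration (Richardson-Lucy deconvolution from scikit-image) run only on a cropped window rather than the full frame; this is a minimal sketch, not the authors' set-theoretic algorithm, and the frame, blur kernel and ROI coordinates are assumed.

```python
# Sketch of the region-of-interest idea: run an iterative restoration (here a
# generic Richardson-Lucy deconvolution, not the authors' set-theoretic method)
# only on a cropped ROI instead of the full frame, cutting per-frame cost.
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import richardson_lucy

rng = np.random.default_rng(0)
frame = rng.random((1024, 1024))          # stand-in for a large sensor frame
psf = np.ones((5, 5)) / 25.0              # assumed blur kernel
blurred = convolve2d(frame, psf, mode="same", boundary="symm")

# Restore only a 128x128 ROI around a detected target (coordinates assumed).
r0, c0, size = 400, 520, 128
roi = blurred[r0:r0 + size, c0:c0 + size]
restored_roi = richardson_lucy(roi, psf, 20)   # 20 iterations on the crop only
print(restored_roi.shape)  # (128, 128); ~64x fewer pixels than the full frame
```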

  5. Chest Computed Tomographic Image Screening for Cystic Lung Diseases in Patients with Spontaneous Pneumothorax Is Cost Effective.

    PubMed

    Gupta, Nishant; Langenderfer, Dale; McCormack, Francis X; Schauer, Daniel P; Eckman, Mark H

    2017-01-01

    Patients without a known history of lung disease presenting with a spontaneous pneumothorax are generally diagnosed as having primary spontaneous pneumothorax. However, occult diffuse cystic lung diseases such as Birt-Hogg-Dubé syndrome (BHD), lymphangioleiomyomatosis (LAM), and pulmonary Langerhans cell histiocytosis (PLCH) can also first present with a spontaneous pneumothorax, and their early identification by high-resolution computed tomographic (HRCT) chest imaging has implications for subsequent management. The objective of our study was to evaluate the cost-effectiveness of HRCT chest imaging to facilitate early diagnosis of LAM, BHD, and PLCH. We constructed a Markov state-transition model to assess the cost-effectiveness of screening HRCT to facilitate early diagnosis of diffuse cystic lung diseases in patients presenting with an apparent primary spontaneous pneumothorax. Baseline data for prevalence of BHD, LAM, and PLCH and rates of recurrent pneumothoraces in each of these diseases were derived from the literature. Costs were extracted from 2014 Medicare data. We compared a strategy of HRCT screening followed by pleurodesis in patients with LAM, BHD, or PLCH versus conventional management with no HRCT screening. In our base case analysis, screening for the presence of BHD, LAM, or PLCH in patients presenting with a spontaneous pneumothorax was cost effective, with a marginal cost-effectiveness ratio of $1,427 per quality-adjusted life-year gained. Sensitivity analysis showed that screening HRCT remained cost effective for diffuse cystic lung diseases prevalence as low as 0.01%. HRCT image screening for BHD, LAM, and PLCH in patients with apparent primary spontaneous pneumothorax is cost effective. Clinicians should consider performing a screening HRCT in patients presenting with apparent primary spontaneous pneumothorax.
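
    The headline metric of such an analysis, the incremental cost-effectiveness ratio, can be sketched in a few lines; the numbers below are hypothetical placeholders and the full Markov state-transition model is not reproduced.

```python
# Minimal sketch of the headline metric only (not the authors' Markov model):
# the incremental cost-effectiveness ratio (ICER) of screening HRCT versus
# no screening. The cost/QALY numbers below are hypothetical placeholders.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost per quality-adjusted life-year gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

cost_screen, qaly_screen = 12_500.0, 20.10   # hypothetical lifetime values
cost_usual,  qaly_usual  = 12_300.0, 19.96

ratio = icer(cost_screen, qaly_screen, cost_usual, qaly_usual)
print(f"ICER: ${ratio:,.0f} per QALY gained")
willingness_to_pay = 50_000.0                # commonly assumed threshold
print("cost effective" if ratio < willingness_to_pay else "not cost effective")
```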

  6. A comparative study of 2 computer-assisted methods of quantifying brightfield microscopy images.

    PubMed

    Tse, George H; Marson, Lorna P

    2013-10-01

    Immunohistochemistry continues to be a powerful tool for the detection of antigens. There are several commercially available software packages that allow image analysis; however, these can be complex, require relatively high level of computer skills, and can be expensive. We compared 2 commonly available software packages, Adobe Photoshop CS6 and ImageJ, in their ability to quantify percentage positive area after picrosirius red (PSR) staining and 3,3'-diaminobenzidine (DAB) staining. On analysis of DAB-stained B cells in the mouse spleen, with a biotinylated primary rat anti-mouse-B220 antibody, there was no significant difference on converting images from brightfield microscopy to binary images to measure black and white pixels using ImageJ compared with measuring a range of brown pixels with Photoshop (Student t test, P=0.243, correlation r=0.985). When analyzing mouse kidney allografts stained with PSR, Photoshop achieved a greater interquartile range while maintaining a lower 10th percentile value compared with analysis with ImageJ. A lower 10% percentile reflects that Photoshop analysis is better at analyzing tissues with low levels of positive pixels; particularly relevant for control tissues or negative controls, whereas after ImageJ analysis the same images would result in spuriously high levels of positivity. Furthermore comparing the 2 methods by Bland-Altman plot revealed that these 2 methodologies did not agree when measuring images with a higher percentage of positive staining and correlation was poor (r=0.804). We conclude that for computer-assisted analysis of images of DAB-stained tissue there is no difference between using Photoshop or ImageJ. However, for analysis of color images where differentiation into a binary pattern is not easy, such as with PSR, Photoshop is superior at identifying higher levels of positivity while maintaining differentiation of low levels of positive staining.
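
    The underlying measurement made by both packages, percent positive area after binarizing a stained image, can be sketched with scikit-image; the synthetic image and the choice of an Otsu threshold are assumptions, not the settings used in the study.

```python
# Neither Photoshop nor ImageJ: a scikit-image sketch of the measurement both
# tools perform, i.e. percent positive area after binarizing a stained image.
# The synthetic image and the Otsu threshold choice are assumptions.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(1)
# Synthetic "DAB-like" intensity image: mostly background with a stained patch.
img = rng.normal(0.2, 0.05, size=(512, 512))
img[100:220, 150:300] += 0.5                     # stained region

thresh = threshold_otsu(img)                     # global threshold (Otsu)
positive = img > thresh                          # binary mask of positive pixels
percent_positive = 100.0 * positive.mean()
print(f"Positive area: {percent_positive:.1f}% of the field")
```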

  7. A high performance parallel computing architecture for robust image features

    NASA Astrophysics Data System (ADS)

    Zhou, Renyan; Liu, Leibo; Wei, Shaojun

    2014-03-01

    A parallel architecture for image feature detection and description is proposed in this article. The major component of this architecture is a 2D cellular network composed of simple reprogrammable processors, implementing the Hessian blob detector and Haar response calculation, which are the most computation-intensive stages of the Speeded Up Robust Features (SURF) algorithm. Combining this 2D cellular network with dedicated hardware for SURF descriptors, the architecture achieves real-time image feature detection with minimal software on the host processor. A prototype FPGA implementation achieves 1318.9 GOPS of general pixel processing at a 100 MHz clock and up to 118 fps for VGA (640 × 480) image feature detection. The proposed architecture is stand-alone and scalable, so it can easily be migrated to a VLSI implementation.
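
    A software stand-in for the Hessian blob stage that the proposed hardware accelerates is determinant-of-Hessian blob detection, available in scikit-image; the parameters and the synthetic VGA frame below are illustrative only.

```python
# Software stand-in for the Hessian blob stage that the proposed hardware
# accelerates: determinant-of-Hessian blob detection on a VGA-sized image
# using scikit-image. Parameters below are illustrative, not from the paper.
import numpy as np
from skimage.draw import disk
from skimage.feature import blob_doh

image = np.zeros((480, 640), dtype=float)        # VGA frame
for r, c, rad in [(120, 100, 12), (300, 420, 20), (240, 550, 8)]:
    rr, cc = disk((r, c), rad)
    image[rr, cc] = 1.0                          # synthetic bright blobs

blobs = blob_doh(image, min_sigma=4, max_sigma=30, threshold=0.005)
for y, x, sigma in blobs:
    # for blob_doh, the detected sigma is approximately the blob radius
    print(f"blob at ({x:.0f}, {y:.0f}), approx radius {sigma:.1f}")
```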

  8. Evaluation of 3D airway imaging of obstructive sleep apnea with cone-beam computed tomography.

    PubMed

    Ogawa, Takumi; Enciso, Reyes; Memon, Ahmed; Mah, James K; Clark, Glenn T

    2005-01-01

    This study evaluates the use of cone-beam computed tomography (CT) for imaging the upper airway structure of obstructive sleep apnea (OSA) patients. Total airway volume and the anteroposterior dimension of the oropharyngeal airway showed significant group differences between OSA patients and gender-matched controls, suggesting that with a larger sample size these measurements may distinguish the two groups. We demonstrate the utility of 3D airway imaging with cone-beam CT for anatomical diagnosis.

  9. Optimizing Cone Beam Computed Tomography (CBCT) System for Image Guided Radiation Therapy

    NASA Astrophysics Data System (ADS)

    Park, Chun Joo

    The cone beam computed tomography (CBCT) system is the most widely used imaging device in image guided radiation therapy (IGRT), where a set of 3D volumetric images of the patient can be reconstructed to identify and correct position setup errors prior to radiation treatment. The CBCT system can significantly improve the precision of on-line correction of patient position setup errors and tumor target localization prior to treatment. However, a number of issues with the CBCT system still need to be investigated, such as 1) progressively increasing numbers of defective pixels in imaging detectors with frequent usage, 2) hazardous radiation exposure to patients during CBCT imaging, 3) degradation of image quality due to patients' respiratory motion during CBCT acquisition, and 4) poor visualization of certain anatomical features, such as the liver, due to the lack of soft-tissue contrast, which makes tumor motion verification challenging. In this dissertation, we explore optimizing the use of the CBCT system under such circumstances. We begin by introducing the general concept of IGRT. We then present the development of an automated defective pixel detection algorithm, based on wavelet analysis, for the X-ray imagers used in CBCT imaging. We next investigate fast and efficient low-dose volumetric reconstruction techniques, including 1) fast digital tomosynthesis reconstruction using general-purpose graphics processing unit (GPGPU) programming and 2) fast low-dose CBCT image reconstruction based on the Gradient-Projection-Barzilai-Borwein (GP-BB) formulation. We further developed two efficient approaches to reduce the degradation of CBCT images from respiratory motion. First, we propose reconstructing four dimensional (4D) CBCT and DTS using a respiratory signal extracted from fiducial markers implanted in the liver. Second, a novel motion-map constrained image reconstruction (MCIR) is proposed that allows reconstruction of high quality and high phase
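
    The defective-pixel step can be illustrated with a much simpler rule than the wavelet analysis used in the dissertation: flag pixels that deviate strongly from their local median in a flat-field frame. The frame and the deviation threshold below are assumptions.

```python
# Simpler stand-in for the defective-pixel step (the dissertation uses wavelet
# analysis): flag detector pixels that deviate strongly from their local
# neighborhood in a flat-field frame. The deviation threshold is an assumption.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(2)
flat_field = rng.normal(1000.0, 10.0, size=(256, 256))   # uniform exposure frame
flat_field[40, 60] = 0.0                                  # dead pixel
flat_field[200, 123] = 4000.0                             # hot pixel

local_median = median_filter(flat_field, size=5)
deviation = np.abs(flat_field - local_median)
defective = deviation > 6 * np.std(deviation)             # simple outlier rule
print("defective pixels:", np.argwhere(defective))
```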

  10. In person versus Computer Screening for Intimate Partner Violence Among Pregnant Patients

    PubMed Central

    Dado, Diane; Schussler, Sara; Hawker, Lynn; Holland, Cynthia L.; Burke, Jessica G.; Cluss, Patricia A.

    2012-01-01

    Objective: To compare in person versus computerized screening for intimate partner violence (IPV) in a hospital-based prenatal clinic and explore women’s assessment of the screening methods. Methods: We compared patient IPV disclosures on a computerized questionnaire to audio-taped first obstetric visits with an obstetric care provider and performed semi-structured interviews with patient participants who reported experiencing IPV. Results: Two-hundred and fifty patient participants and 52 provider participants were in the study. Ninety-one (36%) patients disclosed IPV either via computer or in person. Of those who disclosed IPV, 60 (66%) disclosed via both methods, but 31 (34%) disclosed IPV via only one of the two methods. Twenty-three women returned for interviews. They recommended using both types together. While computerized screening was felt to be non-judgmental and more anonymous, in person screening allowed for tailored questioning and more emotional connection with the provider. Conclusion: Computerized screening allowed disclosure without fear of immediate judgment. In person screening allows more flexibility in wording of questions regarding IPV and opportunity for interpersonal rapport. Practice Implications: Both computerized or self-completed screening and in person screening are recommended. Providers should address IPV using non-judgmental, descriptive language, include assessments for psychological IPV, and repeat screening in person, even if no patient disclosure occurs via computer. PMID:22770815

  11. In person versus computer screening for intimate partner violence among pregnant patients.

    PubMed

    Chang, Judy C; Dado, Diane; Schussler, Sara; Hawker, Lynn; Holland, Cynthia L; Burke, Jessica G; Cluss, Patricia A

    2012-09-01

    To compare in person versus computerized screening for intimate partner violence (IPV) in a hospital-based prenatal clinic and explore women's assessment of the screening methods. We compared patient IPV disclosures on a computerized questionnaire to audio-taped first obstetric visits with an obstetric care provider and performed semi-structured interviews with patient participants who reported experiencing IPV. Two-hundred and fifty patient participants and 52 provider participants were in the study. Ninety-one (36%) patients disclosed IPV either via computer or in person. Of those who disclosed IPV, 60 (66%) disclosed via both methods, but 31 (34%) disclosed IPV via only one of the two methods. Twenty-three women returned for interviews. They recommended using both types together. While computerized screening was felt to be non-judgmental and more anonymous, in person screening allowed for tailored questioning and more emotional connection with the provider. Computerized screening allowed disclosure without fear of immediate judgment. In person screening allows more flexibility in wording of questions regarding IPV and opportunity for interpersonal rapport. Both computerized or self-completed screening and in person screening are recommended. Providers should address IPV using non-judgmental, descriptive language, include assessments for psychological IPV, and repeat screening in person, even if no patient disclosure occurs via computer. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  12. Ultrafast Ultrasound Imaging With Cascaded Dual-Polarity Waves.

    PubMed

    Zhang, Yang; Guo, Yuexin; Lee, Wei-Ning

    2018-04-01

    Ultrafast ultrasound imaging using plane or diverging waves, instead of focused beams, has advanced greatly the development of novel ultrasound imaging methods for evaluating tissue functions beyond anatomical information. However, the sonographic signal-to-noise ratio (SNR) of ultrafast imaging remains limited due to the lack of transmission focusing, and thus insufficient acoustic energy delivery. We hereby propose a new ultrafast ultrasound imaging methodology with cascaded dual-polarity waves (CDWs), which consists of a pulse train with positive and negative polarities. A new coding scheme and a corresponding linear decoding process were thereby designed to obtain the recovered signals with increased amplitude, thus increasing the SNR without sacrificing the frame rate. The newly designed CDW ultrafast ultrasound imaging technique achieved higher quality B-mode images than coherent plane-wave compounding (CPWC) and multiplane wave (MW) imaging in a calibration phantom, ex vivo pork belly, and in vivo human back muscle. CDW imaging shows a significant improvement in the SNR (10.71 dB versus CPWC and 7.62 dB versus MW), penetration depth (36.94% versus CPWC and 35.14% versus MW), and contrast ratio in deep regions (5.97 dB versus CPWC and 5.05 dB versus MW) without compromising other image quality metrics, such as spatial resolution and frame rate. The enhanced image qualities and ultrafast frame rates offered by CDW imaging beget great potential for various novel imaging applications.

  13. Optimized computational imaging methods for small-target sensing in lens-free holographic microscopy

    NASA Astrophysics Data System (ADS)

    Xiong, Zhen; Engle, Isaiah; Garan, Jacob; Melzer, Jeffrey E.; McLeod, Euan

    2018-02-01

    Lens-free holographic microscopy is a promising diagnostic approach because it is cost-effective, compact, and suitable for point-of-care applications, while providing high resolution together with an ultra-large field-of-view. It has been applied to biomedical sensing, where larger targets like eukaryotic cells, bacteria, or viruses can be directly imaged without labels, and smaller targets like proteins or DNA strands can be detected via scattering labels like micro- or nano-spheres. Automated image processing routines can count objects and infer target concentrations. In these sensing applications, sensitivity and specificity are critically affected by image resolution and signal-to-noise ratio (SNR). Pixel super-resolution approaches have been shown to boost resolution and SNR by synthesizing a high-resolution image from multiple, partially redundant, low-resolution images. However, there are several computational methods that can be used to synthesize the high-resolution image, and previously, it has been unclear which methods work best for the particular case of small-particle sensing. Here, we quantify the SNR achieved in small-particle sensing using regularized gradient-descent optimization method, where the regularization is based on cardinal-neighbor differences, Bayer-pattern noise reduction, or sparsity in the image. In particular, we find that gradient-descent with sparsity-based regularization works best for small-particle sensing. These computational approaches were evaluated on images acquired using a lens-free microscope that we assembled from an off-the-shelf LED array and color image sensor. Compared to other lens-free imaging systems, our hardware integration, calibration, and sample preparation are particularly simple. We believe our results will help to enable the best performance in lens-free holographic sensing.
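
    The flavor of reconstruction the study favors, gradient descent with a sparsity penalty, can be sketched with ISTA-style proximal updates on a toy linear forward model; this is not the lens-free reconstruction pipeline, and the step size and regularization weight are assumptions.

```python
# Generic sketch of sparsity-regularized gradient descent (ISTA-style proximal
# updates), the flavor of regularization the study found best for small-particle
# sensing. Toy linear forward model; step size and lambda are assumptions.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(60, 200))            # forward model (e.g. low-res sampling)
x_true = np.zeros(200)
x_true[[10, 75, 140]] = [1.0, -0.8, 0.6]  # sparse scene (few small particles)
y = A @ x_true + 0.01 * rng.normal(size=60)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2    # safe gradient step size
lam = 0.05                                # sparsity weight (assumed)
x = np.zeros(200)
for _ in range(300):
    grad = A.T @ (A @ x - y)              # gradient of 0.5*||Ax - y||^2
    x = soft_threshold(x - step * grad, step * lam)

print("recovered support:", np.flatnonzero(np.abs(x) > 0.1))
```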

  14. [Acquiring skills in malignant hyperthermia crisis management: comparison of high-fidelity simulation versus computer-based case study].

    PubMed

    MejĂ­a, Vilma; Gonzalez, Carlos; Delfino, Alejandro E; Altermatt, Fernando R; Corvetto, Marcia A

    The primary purpose of this study was to compare the effect of high-fidelity simulation versus a computer-based case-solving self-study on skills acquisition for malignant hyperthermia among first-year anesthesiology residents. After institutional ethics committee approval, 31 first-year anesthesiology residents were enrolled in this prospective randomized single-blinded study. Participants were randomized to either a high-fidelity simulation scenario or a computer-based case study about malignant hyperthermia. After the intervention, all subjects' performance was assessed through a high-fidelity simulation scenario using a previously validated assessment rubric. Additionally, knowledge tests and a satisfaction survey were applied. Finally, a semi-structured interview was conducted to assess self-perception of the reasoning process and decision-making. Twenty-eight first-year residents successfully completed the study. Residents' management skill scores were globally higher with high-fidelity simulation versus case study; however, differences were significant in 4 of the 8 performance rubric elements: recognition of signs and symptoms (p = 0.025), prioritization of initial management actions (p = 0.003), recognition of complications (p = 0.025), and communication (p = 0.025). Average scores from pre- and post-test knowledge questionnaires improved from 74% to 85% in the high-fidelity simulation group and decreased from 78% to 75% in the case study group (p = 0.032). Regarding the qualitative analysis, there was no difference in the factors influencing the students' reasoning and decision-making between the two teaching strategies. Simulation-based training with a malignant hyperthermia high-fidelity scenario was superior to a computer-based case study, improving knowledge and skills in malignant hyperthermia crisis management, with a very good satisfaction level among anesthesia residents. Copyright © 2018 Sociedade Brasileira de Anestesiologia. Published by Elsevier Editora Ltda. All rights

  15. Automatic Solitary Lung Nodule Detection in Computed Tomography Images Slices

    NASA Astrophysics Data System (ADS)

    Sentana, I. W. B.; Jawas, N.; Asri, S. A.

    2018-01-01

    A lung nodule is an early indicator of some lung diseases, including lung cancer. In computed tomography (CT) images, a nodule appears as a shape brighter than the surrounding lung. This research aims to develop an application that automatically detects lung nodules in CT images. The algorithm comprises several steps: image acquisition and conversion, image binarization, lung segmentation, blob detection, and classification. Data acquisition takes the images slice by slice from the original *.dicom format, and each image slice is then converted to the *.tif image format. Binarization, using the Otsu algorithm, separates the background and foreground of each image slice. After removing the background, the next step is to segment only the lung region so that nodules can be localized more easily. The Otsu algorithm is applied again to detect nodule blobs in the localized lung area. The final step applies a Support Vector Machine (SVM) to classify the nodules. The application succeeded in detecting nearly round nodules above a certain size threshold. The detection results show drawbacks with respect to the size threshold and nodule shape, which need to be addressed in the next part of the research. The algorithm also cannot detect nodules attached to the wall or lung channels, since the search relies only on intensity differences.
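
    A condensed sketch of this pipeline on a single synthetic slice is given below: one Otsu pass to separate the lung region from background, a second Otsu pass inside the lung to isolate bright blobs, and a size/roundness rule standing in for the SVM classifier (no training data is available here); all values are assumed.

```python
# Condensed sketch of the described pipeline on one synthetic slice. A simple
# size/roundness rule stands in for the SVM step, since no training data is
# available here. All intensity values and thresholds are assumptions.
import numpy as np
from skimage.draw import disk
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

slice_img = np.full((256, 256), 0.05)           # air outside the body
rr, cc = disk((128, 128), 90)
slice_img[rr, cc] = 0.30                        # lung field
rr, cc = disk((110, 150), 6)
slice_img[rr, cc] = 0.90                        # bright, nearly round nodule

# Step 1: first Otsu pass separates the lung region from the background air.
lung_mask = slice_img > threshold_otsu(slice_img)
# Step 2: second Otsu pass, restricted to the lung, isolates bright nodule blobs.
nodule_thresh = threshold_otsu(slice_img[lung_mask])
nodule_mask = (slice_img > nodule_thresh) & lung_mask
# Step 3: size/roundness rule standing in for the SVM classification step.
for region in regionprops(label(nodule_mask)):
    roundness = 4 * np.pi * region.area / region.perimeter ** 2
    if 20 < region.area < 500 and roundness > 0.8:
        print("nodule candidate at", tuple(np.round(region.centroid).astype(int)))
```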

  16. Three-dimensional image acquisition and reconstruction system on a mobile device based on computer-generated integral imaging.

    PubMed

    Erdenebat, Munkh-Uchral; Kim, Byeong-Jun; Piao, Yan-Ling; Park, Seo-Yeon; Kwon, Ki-Chul; Piao, Mei-Lan; Yoo, Kwan-Hee; Kim, Nam

    2017-10-01

    A mobile three-dimensional image acquisition and reconstruction system using a computer-generated integral imaging technique is proposed. A depth camera connected to the mobile device acquires the color and depth data of a real object simultaneously, and an elemental image array is generated based on the original three-dimensional information for the object, with lens array specifications input into the mobile device. The three-dimensional visualization of the real object is reconstructed on the mobile display through optical or digital reconstruction methods. The proposed system is implemented successfully and the experimental results certify that the system is an effective and interesting method of displaying real three-dimensional content on a mobile device.

  17. Computed Tomography Perfusion, Magnetic Resonance Imaging, and Histopathological Findings After Laparoscopic Renal Cryoablation: An In Vivo Pig Model.

    PubMed

    Nielsen, Tommy Kjærgaard; Østraat, Øyvind; Graumann, Ole; Pedersen, Bodil Ginnerup; Andersen, Gratien; Høyer, Søren; Borre, Michael

    2017-08-01

    The present study investigates how computed tomography perfusion scans and magnetic resonance imaging correlate with the histopathological alterations in renal tissue after cryoablation. A total of 15 pigs were subjected to laparoscopic-assisted cryoablation on both kidneys. After intervention, each animal was randomized to a postoperative follow-up period of 1, 2, or 4 weeks, after which computed tomography perfusion and magnetic resonance imaging scans were performed. Immediately after imaging, open bilateral nephrectomy was performed allowing for histopathological examination of the cryolesions. On computed tomography perfusion and magnetic resonance imaging examinations, rim enhancement was observed in the transition zone of the cryolesion 1 week after laparoscopic-assisted cryoablation. This rim enhancement was found to subside after 2 and 4 weeks of follow-up, which was consistent with the microscopic examinations revealing fibrotic scar tissue formation in the peripheral zone of the cryolesion. On T2 magnetic resonance imaging sequences, a thin hypointense rim surrounded the cryolesion, separating it from the adjacent renal parenchyma. Microscopic examinations revealed hemorrhage and later hemosiderin located in the peripheral zone. No nodular or diffuse contrast enhancement was found in the central zone of the cryolesions at any follow-up stage on either computed tomography perfusion or magnetic resonance imaging. On microscopic examinations, the central zone was found to consist of coagulative necrosis 1 week after laparoscopic-assisted cryoablation, which was partially replaced by fibrotic scar tissue 4 weeks following laparoscopic-assisted cryoablation. Both computed tomography perfusion and magnetic resonance imaging found the renal collecting system to be involved at all 3 stages of follow-up, but on microscopic examination, the urothelium was found to be intact in all cases. In conclusion, cryoablation effectively destroyed renal parenchyma

  18. Associations between Chinese/Asian versus Western mass media influences and body image disturbances of young Chinese women.

    PubMed

    Jackson, Todd; Jiang, Chengcheng; Chen, Hong

    2016-06-01

    In this study, we evaluated associations of experiences with mass media imported from Western nations such as the United States versus mass media from China and other Asian countries with eating and body image disturbances of young Chinese women. Participating women (N=456) completed self-report measures of disordered eating, specific sources of appearance dissatisfaction (fatness, facial features, stature), and Western versus Chinese/Asian mass media influences. The sample was significantly more likely to report perceived pressure from, comparisons with, and preferences for physical appearance depictions in Chinese/Asian mass media than Western media. Chinese/Asian media influences also combined for more unique variance in prediction models for all disturbances except stature concerns. While experiences with Western media were related to disturbances as well, the overall impact of Chinese/Asian media influences was more prominent. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. A review of image quality assessment methods with application to computational photography

    NASA Astrophysics Data System (ADS)

    Maître, Henri

    2015-12-01

    Image quality assessment has been of major importance for several domains of the imaging industry, for instance restoration, communication and coding. New application fields are opening today with the increase of embedded computing power in cameras and the emergence of computational photography: automatic tuning, image selection, image fusion, image database building, etc. We review the literature on image quality evaluation, paying attention to the very different underlying hypotheses and results of the existing methods. We explain why they differ and for which applications they may be beneficial. We also underline their limits, especially for a possible use in the novel domain of computational photography. Being developed to address different objectives, they propose answers on different aspects, which sometimes makes them complementary. However, they all remain limited in their capability to challenge the human expert, the stated or unstated ultimate goal. We consider the methods that are based on retrieving the parameters of a signal, mostly in spectral analysis; we then explore the more global methods that qualify image quality in terms of noticeable defects or degradation, as is popular in the compression domain; in a third field, the image acquisition process is considered as a channel between the source and the receiver, allowing the tools of information theory to be used and the system to be qualified in terms of entropy and information capacity. However, these different approaches hardly attack the most difficult part of the task, which is to measure the quality of a photograph in terms of aesthetic properties. To help address this problem, situated between philosophy, biology and psychology, we propose a brief review of the literature on the problem of qualifying beauty, present attempts to adapt these concepts to visual patterns, and initiate a reflection on what could be done in the field of photography.

  20. Comparison of competing segmentation standards for X-ray computed tomographic imaging using Lattice Boltzmann techniques

    NASA Astrophysics Data System (ADS)

    Larsen, J. D.; Schaap, M. G.

    2013-12-01

    Recent advances in computing technology and experimental techniques have made it possible to observe and characterize fluid dynamics at the micro-scale. Many computational methods exist that can adequately simulate fluid flow in porous media. Lattice Boltzmann methods provide the distinct advantage of tracking particles at the microscopic level and returning macroscopic observations. While experimental methods can accurately measure macroscopic fluid dynamics, computational efforts can be used to predict and gain insight into fluid dynamics by utilizing thin sections or computed micro-tomography (CMT) images of core sections. Although substantial efforts have been made to advance non-invasive imaging methods such as CMT, fluid dynamics simulations, and microscale analysis, a true three dimensional image segmentation technique has not been developed until recently. Many competing segmentation techniques are utilized in industry and research settings with varying results. In this study, the lattice Boltzmann method is used to simulate Stokes flow in a macroporous soil column. Two dimensional CMT images were used to reconstruct a three dimensional representation of the original sample. Six competing segmentation standards were used to binarize the CMT volumes, providing a distinction between the solid phase and pore space. The permeability of the reconstructed samples was calculated, with Darcy's Law, from lattice Boltzmann simulations of fluid flow in the samples. We compare simulated permeability from differing segmentation algorithms to experimental findings.
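
    The final step, backing permeability out of the simulated flux with Darcy's law, is sketched below; the flux, viscosity, sample length and pressure drop are hypothetical placeholders, and the lattice Boltzmann solver itself is not reproduced.

```python
# Not the lattice Boltzmann solver itself: a sketch of the final step, backing
# out permeability from a simulated mean flux with Darcy's law,
#   k = q * mu * L / dP.
# All numbers below are hypothetical placeholders for a simulated core.
def darcy_permeability(flux, viscosity, length, pressure_drop):
    """Permeability in m^2 from Darcy flux (m/s), viscosity (Pa.s), sample
    length (m) and pressure drop (Pa)."""
    return flux * viscosity * length / pressure_drop

q = 2.0e-5          # mean Darcy flux from the LB simulation, m/s (assumed)
mu = 1.0e-3         # water viscosity, Pa.s
L = 0.01            # sample length, m
dP = 100.0          # applied pressure drop, Pa

k = darcy_permeability(q, mu, L, dP)
print(f"permeability: {k:.2e} m^2  ({k / 9.87e-13:.2f} darcy)")
```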

  1. Acceptance test of a commercially available software for automatic image registration of computed tomography (CT), magnetic resonance imaging (MRI) and 99mTc-methoxyisobutylisonitrile (MIBI) single-photon emission computed tomography (SPECT) brain images.

    PubMed

    Loi, Gianfranco; Dominietto, Marco; Manfredda, Irene; Mones, Eleonora; Carriero, Alessandro; Inglese, Eugenio; Krengli, Marco; Brambilla, Marco

    2008-09-01

    This note describes a method to characterize the performance of image fusion software (Syntegra) with respect to accuracy and robustness. Computed tomography (CT), magnetic resonance imaging (MRI), and single-photon emission computed tomography (SPECT) studies were acquired from two phantoms and 10 patients. Image registration was performed independently by two teams, each composed of one radiotherapist and one physicist, by means of superposition of anatomic landmarks. Each team performed the registration jointly and saved it. The two solutions were averaged to obtain the gold standard registration. A new set of estimators was defined to identify translation and rotation errors along the coordinate axes, independently of point position in the image field of view (FOV). The algorithms evaluated were local correlation (LC) for CT-MRI registrations and normalized mutual information (MI) for CT-MRI and CT-SPECT registrations. To evaluate accuracy, estimator values were compared to limiting values for the algorithms employed, both in phantoms and in patients. To evaluate robustness, different alignments between images taken from a sample patient were produced and registration errors determined. The LC algorithm proved accurate in CT-MRI registrations in phantoms, but exceeded limiting values in 3 of 10 patients. The MI algorithm proved accurate in CT-MRI and CT-SPECT registrations in phantoms; limiting values were exceeded in one case in CT-MRI and never reached in CT-SPECT registrations. Thus, the evaluation of robustness was restricted to the MI algorithm for both CT-MRI and CT-SPECT registrations. The MI algorithm proved to be robust: limiting values were not exceeded with translation perturbations up to 2.5 cm, rotation perturbations up to 10 degrees and roto-translational perturbations up to 3 cm and 5 degrees.
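
    The similarity measure behind the MI algorithm tested here, normalized mutual information computed from a joint intensity histogram, can be sketched in a few lines of numpy; this is not the Syntegra implementation, and the toy images and bin count are assumptions.

```python
# Small numpy sketch of the similarity measure behind the MI algorithm tested
# here (not the Syntegra implementation): normalized mutual information
# NMI = (H(A) + H(B)) / H(A, B) from a joint intensity histogram. Bin count
# and the toy images are assumptions.
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

rng = np.random.default_rng(4)
ct = rng.random((128, 128))
mr_aligned = 0.5 * ct + 0.1 * rng.random((128, 128))   # related intensities
mr_shifted = np.roll(mr_aligned, 15, axis=1)           # misregistered copy

print("NMI aligned   :", round(normalized_mutual_information(ct, mr_aligned), 3))
print("NMI misaligned:", round(normalized_mutual_information(ct, mr_shifted), 3))
```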

  2. Congruence Between Pulmonary Function and Computed Tomography Imaging Assessment of Cystic Fibrosis Severity.

    PubMed

    Rybacka, Anna; GoĹşdzik-Spychalska, Joanna; Rybacki, Adam; Piorunek, Tomasz; Batura-Gabryel, Halina; Karmelita-Katulska, Katarzyna

    2018-05-04

    In cystic fibrosis, pulmonary function tests (PFTs) and computed tomography are used to assess lung function and structure, respectively. Although both assessment techniques are congruent, there are lingering doubts about which PFT variables show the best congruence with computed tomography scoring. In this study we addressed the issue by reinvestigating the association between PFT variables and the score of changes seen in computed tomography scans in patients with cystic fibrosis with and without pulmonary exacerbation. This retrospective study comprised 40 patients in whom PFTs and computed tomography were performed no longer than 3 weeks apart. Images (inspiratory: 0.625 mm slice thickness, 0.625 mm interval; expiratory: 1.250 mm slice thickness, 10 mm interval) were evaluated with the Bhalla scoring system. The most frequent structural abnormalities found in scans were bronchiectases and peribronchial thickening. The strongest relationship was found between the Bhalla score and forced expiratory volume in 1 s (FEV1). The Bhalla score also was related to forced vital capacity (FVC), FEV1/FVC ratio, residual volume (RV), and RV/total lung capacity (TLC) ratio. We conclude that lung structural data obtained from the computed tomography examination are highly congruent with lung function data. Thus, computed tomography imaging may supersede functional assessment in cases of poor compliance with spirometry procedures in the elderly or children. Computed tomography also seems more sensitive than PFTs in the assessment of cystic fibrosis progression. Moreover, in early phases of cystic fibrosis, computed tomography, due to its excellent resolution, may be irreplaceable in monitoring pulmonary damage.

  3. Localization accuracy of sphere fiducials in computed tomography images

    NASA Astrophysics Data System (ADS)

    Kobler, Jan-Philipp; DĂ­az DĂ­az, Jesus; Fitzpatrick, J. Michael; Lexow, G. Jakob; Majdani, Omid; Ortmaier, Tobias

    2014-03-01

    In recent years, bone-attached robots and microstereotactic frames have attracted increasing interest due to the promising targeting accuracy they provide. Such devices attach to a patient's skull via bone anchors, which are used as landmarks during intervention planning as well. However, as simulation results reveal, the performance of such mechanisms is limited by errors occurring during the localization of their bone anchors in preoperatively acquired computed tomography images. Therefore, it is desirable to identify the most suitable fiducials as well as the most accurate method for fiducial localization. We present experimental results of a study focusing on the fiducial localization error (FLE) of spheres. Two phantoms equipped with fiducials made from ferromagnetic steel and titanium, respectively, are used to compare two clinically available imaging modalities (multi-slice CT (MSCT) and cone-beam CT (CBCT)), three localization algorithms as well as two methods for approximating the FLE. Furthermore, the impact of cubic interpolation applied to the images is investigated. Results reveal that, generally, the achievable localization accuracy in CBCT image data is significantly higher compared to MSCT imaging. The lowest FLEs (approx. 40 µm) are obtained using spheres made from titanium, CBCT imaging, template matching based on cross correlation for localization, and interpolating the images by a factor of sixteen. Nevertheless, the achievable localization accuracy of spheres made from steel is only slightly inferior. The outcomes of the presented study will be valuable considering the optimization of future microstereotactic frame prototypes as well as the operative workflow.
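
    A two-dimensional sketch of the cross-correlation template-matching localizer evaluated in the study is shown below using scikit-image's match_template; the real workflow operates on 3D volumes with sub-voxel refinement, and the image and template sizes here are assumed.

```python
# 2D sketch of the cross-correlation template-matching localizer evaluated in
# the study (the real workflow is 3D and sub-voxel): find a bright sphere
# fiducial in a CT-like slice with scikit-image's match_template. Sizes assumed.
import numpy as np
from skimage.draw import disk
from skimage.feature import match_template

slice_img = np.random.default_rng(5).normal(0.0, 0.02, size=(200, 200))
rr, cc = disk((140, 65), 6)
slice_img[rr, cc] += 1.0                      # bright spherical fiducial

template = np.zeros((15, 15))
rr, cc = disk((7, 7), 6)
template[rr, cc] = 1.0                        # ideal sphere cross-section

score = match_template(slice_img, template, pad_input=True)
row, col = np.unravel_index(np.argmax(score), score.shape)
print(f"fiducial centre estimate: ({row}, {col})")   # expect near (140, 65)
```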

  4. Clinical evaluation of CR versus plain film for neonatal ICU applications

    NASA Astrophysics Data System (ADS)

    Andriole, Katherine P.; Brasch, Robert C.; Gooding, Charles A.; Gould, Robert G.; Huang, H. K.

    1995-05-01

    The clinical utility of computed radiography (CR) versus screen-film for neonatal intensive care unit (ICU) applications is investigated. The latest versions of standard ST-V and high- resolution HR-V CR imaging plates were compared via measurements of image contrast, spatial resolution and signal-to-noise. The ST-V imaging plate was found to have equivalent spatial resolution and object detectability at a lower required dose than the HR-V, and was therefore chosen as the CR plate to use in clinical trials in which a modified film cassette containing the CR imaging plate, a conventional screen and film was utilized. For 50 portable neonatal chest examinations, plain film was subjectively compared to the perfectly matched, simultaneously obtained CR hardcopy and softcopy images. Grading of overall image quality was on a scale of one (poor) to five (excellent). Readers rated the visualization of various structures in the chest (i.e., lung parenchyma, pulmonary vasculature, tubes/lines) as well as the visualization of pathologic findings. Preliminary results indicate that the image quality of both CR soft and hardcopy are comparable to plain film and that CR may be a suitable alternative to screen-film imaging for portable neonatal chest x rays.

  5. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network

    PubMed Central

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

    2012-01-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future

  6. Two- versus three-dimensional imaging in subjects with unerupted maxillary canines.

    PubMed

    Botticelli, Susanna; Verna, Carlalberta; Cattaneo, Paolo M; Heidmann, Jens; Melsen, Birte

    2011-08-01

    The aim of this study was to evaluate whether there is any difference in the diagnostic information provided by conventional two-dimensional (2D) images or by three-dimensional (3D) cone beam computed tomography (CBCT) in subjects with unerupted maxillary canines. Twenty-seven patients (17 females and 10 males, mean age 11.8 years) undergoing orthodontic treatment with 39 impacted or retained maxillary canines were included. For each canine, two different digital image sets were obtained: (1) A 2D image set including a panoramic radiograph, a lateral cephalogram, and the available periapical radiographs with different projections and (2) A 3D image set obtained with CBCT. Both sets of images were submitted, in a single-blind randomized order, to eight dentists. A questionnaire was used to assess the position of the canine, the presence of root resorption, the difficulty of the case, treatment choice options, and the quality of the images. Data analysis was performed using the McNemar-Bowker test for paired data, Kappa statistics, and paired t-tests. The findings demonstrated a difference in the localization of the impacted canines between the two techniques, which can be explained by factors affecting the conventional 2D radiographs such as distortion, magnification, and superimposition of anatomical structures situated in different planes of space. The increased precision in the localization of the canines and the improved estimation of the space conditions in the arch obtained with CBCT resulted in a difference in diagnosis and treatment planning towards a more clinically orientated approach.

  7. Image Quality and Radiation Exposure Comparison of a Double High-Pitch Acquisition for Coronary Computed Tomography Angiography Versus Standard Retrospective Spiral Acquisition in Patients With Atrial Fibrillation.

    PubMed

    Prazeres, Carlos Eduardo Elias Dos; Magalhães, Tiago Augusto; de Castro Carneiro, Adriano Camargo; Cury, Roberto Caldeira; de Melo Moreira, Valéria; Bello, Juliana Hiromi Silva Matsumoto; Rochitte, Carlos Eduardo

    The aim of this study was to compare image quality and radiation dose of coronary computed tomography (CT) angiography performed with a dual-source CT scanner using 2 different protocols in patients with atrial fibrillation. Forty-seven patients with AF underwent 2 different acquisition protocols: double high-pitch (DHP) spiral acquisition and retrospective spiral acquisition. The image quality was ranked according to a qualitative score by 2 experts: 1, no evident motion; 2, minimal motion not influencing coronary artery luminal evaluation; and 3, motion with impaired luminal evaluation. A third expert resolved any disagreement. A total of 732 segments were evaluated. The DHP group (24 patients, 374 segments) showed more segments classified as score 1 than the retrospective spiral acquisition group (71.3% vs 37.4%). Image quality evaluation agreement was high between observers (κ = 0.8). There was significantly lower radiation exposure for the DHP group (3.65 [1.29] vs 23.57 [10.32] mSv). In this original direct comparison, a DHP spiral protocol for coronary CT angiography acquisition in patients with atrial fibrillation resulted in lower radiation exposure and superior image quality compared with conventional spiral retrospective acquisition.

  8. CT triage for lung malignancy: coronal multiplanar reformation versus images in three orthogonal planes.

    PubMed

    Kusk, Martin Weber; Karstoft, Jens; Mussmann, Bo Redder

    2015-11-01

    Generation of multiplanar reformation (MPR) images has become automatic on most modern computed tomography (CT) scanners, potentially increasing the workload of the reporting radiologists. It is not always clear if this increases diagnostic performance in all clinical tasks. To assess detection performance using only coronal multiplanar reformations (MPR) when triaging patients for lung malignancies with CT compared to images in three orthogonal planes, and to evaluate performance comparison of novice and experienced readers. Retrospective study of 63 patients with suspicion of lung cancer, scanned on 64-slice multidetector computed tomography (MDCT) with images reconstructed in three planes. Coronal images were presented to four readers, two novice and two experienced. Readers decided whether the patients were suspicious for malignant disease, and indicated their confidence on a five-point scale. Sensitivity and specificity on per-patient basis was calculated with regards to a reference standard of histological diagnosis, and compared with the original report using McNemar's test. Receiver operating characteristic (ROC) curves were plotted to compare the performance of the four readers, using the area under the curve (AUC) as figure of merit. No statistically significant difference of sensitivity and specificity was found for any of the readers when compared to the original reports. ROC analysis yielded AUCs in the range of 0.92-0.93 for all readers with no significant difference. Inter-rater agreement was substantial (kappa = 0.72). Sensitivity and specificity were comparable to diagnosis using images in three planes. No significant difference was found between experienced and novice readers. © The Foundation Acta Radiologica 2014.

  9. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of an analysis of a cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field defined by a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on the discrete Fourier transform measurement matrix is deduced theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNRs of the images reconstructed by the FGI and PGI algorithms are similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the images reconstructed by the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the image reconstructed by the FGI algorithm decreases slowly, while the PSNR of the images reconstructed by the PGI and CGI algorithms decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and thereby denoise the reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm can improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
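
    The core idea, illuminating the object with patterns drawn from a Fourier measurement matrix, recording bucket values and recovering the image with the pseudo-inverse, can be sketched in one dimension with numpy; the pattern construction and noise level below are assumptions, not the paper's simulation.

```python
# Toy 1D numpy sketch of the core idea (not the paper's simulation): illuminate
# the object with nonnegative cosine/sine patterns derived from a Fourier basis,
# record one bucket value per pattern, and recover the image with the
# pseudo-inverse. A 2D image would simply be vectorized. Values assumed.
import numpy as np

N = 64                                              # object pixels
n = np.arange(N)
k_cos = np.arange(0, N // 2 + 1)[:, None]           # cosine frequencies 0..N/2
k_sin = np.arange(1, N // 2)[:, None]               # sine frequencies 1..N/2-1
patterns = np.vstack([
    0.5 * (1.0 + np.cos(2.0 * np.pi * k_cos * n / N)),
    0.5 * (1.0 + np.sin(2.0 * np.pi * k_sin * n / N)),
])                                                  # N nonnegative light patterns

x = np.zeros(N)
x[20:28] = 1.0                                      # simple 1D "object"

rng = np.random.default_rng(6)
bucket = patterns @ x + 0.01 * rng.normal(size=patterns.shape[0])  # bucket signal

x_hat = np.linalg.pinv(patterns) @ bucket           # pseudo-inverse reconstruction
print("max reconstruction error:", round(float(np.max(np.abs(x_hat - x))), 3))
```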

  10. Computer-aided classification of optical images for diagnosis of osteoarthritis in the finger joints.

    PubMed

    Zhang, Jiang; Wang, James Z; Yuan, Zhen; Sobel, Eric S; Jiang, Huabei

    2011-01-01

    This study presents a computer-aided classification method to distinguish osteoarthritis finger joints from healthy ones based on the functional images captured by x-ray guided diffuse optical tomography. Three imaging features, joint space width, optical absorption, and scattering coefficients, are employed to train a Least Squares Support Vector Machine (LS-SVM) classifier for osteoarthritis classification. The 10-fold validation results show that all osteoarthritis joints are clearly identified and all healthy joints are ruled out by the LS-SVM classifier. The best sensitivity, specificity, and overall accuracy of the classification by experienced technicians based on manual calculation of optical properties and visual examination of optical images are only 85%, 93%, and 90%, respectively. Therefore, our LS-SVM based computer-aided classification is a considerably improved method for osteoarthritis diagnosis.
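
    A stand-in sketch of this classification step is given below using scikit-learn's standard SVC rather than a true least-squares SVM, trained on the three features with 10-fold cross-validation; the synthetic feature values are assumptions for illustration only.

```python
# Stand-in sketch using scikit-learn's standard SVC rather than a true
# least-squares SVM: classify joints from the paper's three features (joint
# space width, absorption, scattering) with 10-fold cross-validation.
# The synthetic feature values below are assumptions for illustration only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n = 100
healthy = np.column_stack([
    rng.normal(2.0, 0.2, n),     # joint space width (mm)
    rng.normal(0.02, 0.005, n),  # optical absorption coefficient (1/mm)
    rng.normal(1.0, 0.1, n),     # optical scattering coefficient (1/mm)
])
oa = np.column_stack([
    rng.normal(1.4, 0.2, n),
    rng.normal(0.035, 0.005, n),
    rng.normal(1.3, 0.1, n),
])
X = np.vstack([healthy, oa])
y = np.array([0] * n + [1] * n)            # 0 = healthy, 1 = osteoarthritis

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=10)
print(f"10-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```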

  11. COMPUTED TOMOGRAPHY VERSUS PLAIN RADIOGRAM IN EVALUATION OF RESIDUAL STONES AFTER PERCUTANEOUS NEPHROLITHOTOMY OR PYELONEPHROLITHOTOMY FOR COMPLEX MULTIPLE AND BRANCHED KIDNEY STONES.

    PubMed

    Wishahi, Mohamed; Elganzoury, Hossam; Elkhouly, Amr; Kamal, Ahmed M; Badawi, Mohamed; Eseaily, Khalid; Kotb, Samir; Morsy, Mohamed

    2015-08-01

    This study compared the efficacy of computed tomography of the urinary tract (CT urography) versus plain X-ray of the urinary tract (KUB) in the detection and evaluation of the significance of residual stones after percutaneous nephrolithotripsy (PCNL) or surgical pyelonephrolithotomy (SPNL) for complex branching or multiple kidney stones. A retrospectively analyzed prospective archival cohort of 168 patients who underwent PCNL or SPNL for large staghorn or multiple kidney stones was evaluated: 113 patients underwent SPNL and 55 patients underwent PCNL. All patients had a KUB on the second day after the operation; those who had multiple kidney punctures in the PCNL procedure for multiple stones, multiple nephrotomies in the SPNL procedure, or radiolucent stones had additional imaging with CT urography. Indications for CT urography were radiolucent stones and multiple small calyceal stones detected preoperatively. The study was conducted between March 2010 and December 2014, and the data were retrospectively analyzed. Preoperatively, multiple or branching stones were diagnosed with intravenous urography and CT urography. Stone size and location were mapped preoperatively on a real-size drawing and on three-dimensional computed reconstruction images in multiple planes. All patients were informed about the advantages, disadvantages and probable complications of both PCNL and SPNL before the selection of the procedure. Patients decided the type of surgery themselves, and written informed consent was obtained from all patients prior to surgery. Patients were divided into two groups according to their preference of surgery type: Group 1 consisted of 113 patients who underwent SPNL and Group 2 consisted of 55 patients treated with PCNL. Detection of residual stones postoperatively using KUB and CT urography was evaluated in both groups. There was statistical significance between the two imaging methodologies in detection of residual

  12. Next Generation Seismic Imaging; High Fidelity Algorithms and High-End Computing

    NASA Astrophysics Data System (ADS)

    Bevc, D.; Ortigosa, F.; Guitton, A.; Kaelin, B.

    2007-05-01

    The rich oil reserves of the Gulf of Mexico are buried in deep and ultra-deep waters up to 30,000 feet from the surface. Minerals Management Service (MMS), the federal agency in the U.S. Department of the Interior that manages the nation's oil, natural gas and other mineral resources on the outer continental shelf in federal offshore waters, estimates that the Gulf of Mexico holds 37 billion barrels of "undiscovered, conventionally recoverable" oil, which, at $50/barrel, would be worth approximately $1.85 trillion. These reserves are very difficult to find and reach due to the extreme depths. Technological advances in seismic imaging represent an opportunity to overcome this obstacle by providing more accurate models of the subsurface. Among these technological advances, Reverse Time Migration (RTM) yields the best possible images. RTM is based on the solution of the two-way acoustic wave-equation. This technique relies on the velocity model to image turning waves. These turning waves are particularly important to unravel subsalt reservoirs and delineate salt-flanks, a natural trap for oil and gas. Because it relies on an accurate velocity model, RTM opens a new frontier in designing better velocity estimation algorithms. RTM has been widely recognized as the next chapter in seismic exploration, as it can overcome the limitations of current migration methods in imaging complex geologic structures that exist in the Gulf of Mexico. The chief impediment to the large-scale, routine deployment of RTM has been a lack of sufficient computer power. RTM needs thirty times the computing power used in exploration today to be commercially viable and widely usable. Therefore, advancing seismic imaging to the next level of precision poses a multi-disciplinary challenge. To overcome these challenges, the Kaleidoscope project, a partnership between Repsol YPF, Barcelona Supercomputing Center, 3DGeo Inc., and IBM brings together the necessary components of modeling, algorithms and the

  13. Plain X-ray, computed tomography and magnetic resonance imaging findings of telangiectatic osteosarcoma: a case report.

    PubMed

    Skiadas, Vasilios; Koutoulidis, Vasilios; Koureas, Andreas; Moulopoulos, Lia; Gouliamos, Athanasios

    2009-09-16

    An 18-year-old male patient presented with chronic nonspecific pain of three months located at his left proximal tibia. The patient was admitted to our department for plain X-ray, computed tomography and magnetic resonance imaging examination. Plain X-ray and computed tomography revealed a geographic lytic lesion at the medial aspect of the proximal tibia. Biopsy of the lesion showed telangiectatic osteosarcoma. Image findings of all modalities are presented.

  14. A Hybrid Soft-computing Method for Image Analysis of Digital Plantar Scanners.

    PubMed

    Razjouyan, Javad; Khayat, Omid; Siahi, Mehdi; Mansouri, Ali Alizadeh

    2013-01-01

    Digital foot scanners have been developed in recent years to provide anthropometrists with a digital image of the insole together with pressure distribution and anthropometric information. In this paper, a hybrid algorithm combining a gray level spatial correlation (GLSC) histogram and Shanbag entropy is presented for the analysis of scanned foot images. An evolutionary algorithm is also employed to find the optimum parameters of the GLSC and the transform function of the membership values. The resulting binary (thresholded) images then undergo anthropometric measurement, taking into account the scale factor from pixel size to metric units. The proposed method is finally applied to plantar images obtained by scanning the feet of randomly selected subjects with a foot scanner system, the experimental setup described in the paper. Computation time and the effects of the GLSC parameters are investigated in the simulation results.
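
    A simplified sketch of the thresholding idea is given below. It selects the gray-level threshold that maximizes a Kapur-style entropy criterion on a plain histogram rather than the paper's GLSC histogram and Shanbag entropy, so it only illustrates the general principle; the synthetic scan and the 0.5 mm pixel pitch are assumptions.

    ```python
    # Sketch: entropy-based threshold selection followed by a simple area measurement.
    import numpy as np

    def entropy_threshold(image):
        """Pick the threshold maximizing the summed entropy of the two classes."""
        hist, _ = np.histogram(image, bins=256, range=(0, 256))
        p = hist.astype(float) / hist.sum()
        best_t, best_score = 0, -np.inf
        for t in range(1, 256):
            p0, p1 = p[:t].sum(), p[t:].sum()
            if p0 == 0 or p1 == 0:
                continue
            q0 = p[:t][p[:t] > 0] / p0
            q1 = p[t:][p[t:] > 0] / p1
            score = -np.sum(q0 * np.log(q0)) - np.sum(q1 * np.log(q1))
            if score > best_score:
                best_score, best_t = score, t
        return best_t

    # Synthetic plantar scan: a bright elliptical footprint on a darker background.
    yy, xx = np.mgrid[:200, :100]
    img = np.where((xx - 50) ** 2 / 40 ** 2 + (yy - 100) ** 2 / 90 ** 2 <= 1, 180, 60)
    img = (img + np.random.normal(0, 10, img.shape)).clip(0, 255).astype(np.uint8)

    mask = img >= entropy_threshold(img)          # binary foot region
    area_cm2 = mask.sum() * (0.05 ** 2)           # assumed 0.5 mm pixel pitch
    ```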

  15. Computer-assisted image analysis to quantify daily growth rates of broiler chickens.

    PubMed

    De Wet, L; Vranken, E; Chedad, A; Aerts, J M; Ceunen, J; Berckmans, D

    2003-09-01

    1. The objective was to investigate the possibility of detecting daily body weight changes of broiler chickens with computer-assisted image analysis. 2. The experiment included 50 broiler chickens reared under commercial conditions. Ten out of 50 chickens were randomly selected and video recorded (upper view) 18 times during the 42-d growing period. The number of surface and periphery pixels from the images was used to derive a relationship between body dimension and live weight. 3. The relative error in weight estimation, expressed in terms of the standard deviation of the residuals from image surface data was 10%, while it was found to be 15% for the image periphery data. 4. Image-processing systems could be developed to assist the farmer in making important management and marketing decisions.
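
    A hedged sketch of the underlying image-to-weight relationship: count surface (area) and periphery (boundary) pixels of a top-view silhouette and fit an allometric regression of live weight on surface pixels. The silhouette, the paired observations, and the model form are illustrative assumptions, not the study's data.

    ```python
    # Sketch: silhouette pixel counts and an allometric weight regression.
    import numpy as np

    def surface_and_periphery(mask):
        """Foreground pixel count and 4-connected boundary pixel count."""
        interior = (mask &
                    np.roll(mask, 1, 0) & np.roll(mask, -1, 0) &
                    np.roll(mask, 1, 1) & np.roll(mask, -1, 1))
        return int(mask.sum()), int(mask.sum() - interior.sum())

    # Example silhouette: an elliptical "bird" on a 200x200 binary image.
    yy, xx = np.mgrid[:200, :200]
    mask = ((xx - 100) / 60.0) ** 2 + ((yy - 100) / 40.0) ** 2 <= 1.0
    surface, periphery = surface_and_periphery(mask)

    # Assumed paired observations (surface pixels, live weight in g) over the growing period.
    surfaces = np.array([900, 1800, 3200, 5000, 7400, 9800, 12500, 15500])
    weights = np.array([55, 160, 380, 700, 1150, 1600, 2100, 2550])

    # Allometric fit weight = a * surface^b, performed in log-log space.
    b, log_a = np.polyfit(np.log(surfaces), np.log(weights), 1)
    predicted = np.exp(log_a) * surfaces ** b
    relative_sd = np.std((predicted - weights) / weights)   # cf. ~10% reported in the study
    ```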

  16. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transport, saving both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurements versus compression parameters from a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) images (in gray scale) showed very promising results.
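
    The first two steps can be sketched as follows for JPEG only: compress at a range of quality settings, measure PSNR and SSIM of each decompressed result, and keep the most compressed setting that still meets a target IQ. The input file name and the SSIM target are assumptions.

    ```python
    # Sketch: sweep JPEG quality, measure IQ metrics, select by a specified IQ.
    import io
    import numpy as np
    from PIL import Image
    from skimage.metrics import structural_similarity

    def psnr(ref, test):
        mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
        return 10 * np.log10(255.0 ** 2 / mse)

    ref = np.array(Image.open("thermal_frame.png").convert("L"))   # assumed input image

    results = []
    for quality in range(10, 100, 10):
        buf = io.BytesIO()
        Image.fromarray(ref).save(buf, format="JPEG", quality=quality)
        dec = np.array(Image.open(buf))
        ratio = ref.size / buf.getbuffer().nbytes                  # bytes per 8-bit pixel
        results.append((quality, ratio, psnr(ref, dec),
                        structural_similarity(ref, dec, data_range=255)))

    # Specified IQ: keep the highest compression ratio with SSIM >= 0.8 (assumed target).
    candidates = [r for r in results if r[3] >= 0.8]
    best = max(candidates, key=lambda r: r[1]) if candidates else results[-1]
    ```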

  17. Segmenting root systems in X-ray computed tomography images using level sets

    USDA-ARS?s Scientific Manuscript database

    The segmentation of plant roots from soil and other growing media in X-ray computed tomography images is needed to effectively study the shapes of roots without excavation. However, segmentation is a challenging problem in this context because the root and non-root regions share similar features. ...

  18. Reducing image noise in computed tomography (CT) colonography: effect of an integrated circuit CT detector.

    PubMed

    Liu, Yu; Leng, Shuai; Michalak, Gregory J; Vrieze, Thomas J; Duan, Xinhui; Qu, Mingliang; Shiung, Maria M; McCollough, Cynthia H; Fletcher, Joel G

    2014-01-01

    To investigate whether the integrated circuit (IC) detector results in reduced noise in computed tomography (CT) colonography (CTC). Three hundred sixty-six consecutive patients underwent clinically indicated CTC using the same CT scanner system, except for a difference in CT detectors (IC or conventional). Image noise, patient size, and scanner radiation output (volume CT dose index) were quantitatively compared between patient cohorts using each detector system, with separate comparisons for the abdomen and pelvis. For the abdomen and pelvis, despite significantly larger patient sizes in the IC detector cohort (both P < 0.001), image noise was significantly lower (both P < 0.001), whereas volume CT dose index was unchanged (both P > 0.18). Based on the observed image noise reduction, radiation dose could alternatively be reduced by approximately 20% to result in similar levels of image noise. Computed tomography colonography images acquired using the IC detector had significantly lower noise than images acquired using the conventional detector. This noise reduction can permit further radiation dose reduction in CTC.

  19. Computer-Aided Evaluation of Blood Vessel Geometry From Acoustic Images.

    PubMed

    Lindström, Stefan B; Uhlin, Fredrik; Bjarnegård, Niclas; Gylling, Micael; Nilsson, Kamilla; Svensson, Christina; Yngman-Uhlin, Pia; Länne, Toste

    2018-04-01

    A method for computer-aided assessment of blood vessel geometries based on shape-fitting algorithms from metric vision was evaluated. Acoustic images of cross sections of the radial artery and cephalic vein were acquired, and medical practitioners used a computer application to measure the wall thickness and nominal diameter of these blood vessels with a caliper method and the shape-fitting method. The methods performed equally well for wall thickness measurements. The shape-fitting method was preferable for measuring the diameter, since it reduced systematic errors by up to 63% in the case of the cephalic vein because of its eccentricity. © 2017 by the American Institute of Ultrasound in Medicine.
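
    A minimal sketch of a shape-fitting measurement of this kind: fit a circle to lumen edge points extracted from an ultrasound cross section and report the nominal diameter. The algebraic (Kasa) least-squares fit and the sample points are illustrative assumptions, not the authors' algorithm.

    ```python
    # Sketch: least-squares circle fit to vessel edge points, then diameter.
    import numpy as np

    def fit_circle(x, y):
        """Algebraic (Kasa) least-squares circle fit: returns center and radius."""
        A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
        b = x ** 2 + y ** 2
        (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
        r = np.sqrt(c + cx ** 2 + cy ** 2)
        return cx, cy, r

    # Assumed edge points sampled around a slightly noisy vessel cross section (mm).
    theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
    x = 5.0 + 1.4 * np.cos(theta) + np.random.normal(0, 0.05, theta.size)
    y = 3.0 + 1.4 * np.sin(theta) + np.random.normal(0, 0.05, theta.size)

    cx, cy, r = fit_circle(x, y)
    print(f"nominal diameter ~ {2 * r:.2f} mm")
    ```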

  20. Personal Computer (PC) based image processing applied to fluid mechanics

    NASA Technical Reports Server (NTRS)

    Cho, Y.-C.; Mclachlan, B. G.

    1987-01-01

    A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.
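
    The interpolation step can be sketched as below: scattered streak velocities are averaged onto a uniform grid with a Gaussian-weighted window (a fixed window here, rather than the adaptive one used in the paper). The scattered samples and window width are assumptions.

    ```python
    # Sketch: Gaussian-window interpolation of scattered velocities onto a grid.
    import numpy as np

    def gaussian_window_interp(xp, yp, up, grid_x, grid_y, sigma=0.05):
        """Gaussian-weighted average of scattered samples (xp, yp, up) at each node."""
        gx, gy = np.meshgrid(grid_x, grid_y)
        u_grid = np.zeros_like(gx)
        for i in range(gx.shape[0]):
            for j in range(gx.shape[1]):
                w = np.exp(-((xp - gx[i, j]) ** 2 + (yp - gy[i, j]) ** 2) / (2 * sigma ** 2))
                u_grid[i, j] = np.sum(w * up) / np.sum(w)
        return u_grid

    # Assumed scattered velocity samples deduced from streak length and orientation.
    rng = np.random.default_rng(1)
    xp, yp = rng.random(500), rng.random(500)
    up = np.sin(2 * np.pi * xp) * np.cos(2 * np.pi * yp)   # synthetic u-component

    grid = np.linspace(0, 1, 32)
    u_uniform = gaussian_window_interp(xp, yp, up, grid, grid)
    ```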

  1. Computer analysis of gallbladder ultrasonic images towards recognition of pathological lesions

    NASA Astrophysics Data System (ADS)

    Ogiela, M. R.; Bodzioch, S.

    2011-06-01

    This paper presents a new approach to gallbladder ultrasonic image processing and analysis aimed at the automatic detection and interpretation of disease symptoms in processed US images. First, a new heuristic method of extracting gallbladder contours from the images is presented. A major stage in this filtering is to segment and delineate the areas occupied by the organ. The paper describes an algorithm for the holistic extraction of gallbladder image contours based on rank filtering as well as on the analysis of line-profile sections across the examined organ. The second part concerns detecting the most important lesion symptoms of the gallbladder. Automating a diagnostic process always comes down to developing algorithms that analyze the object of such diagnosis and verify the occurrence of symptoms related to a given disease. The methodology of computer analysis of US gallbladder images presented here is clearly utilitarian in nature and, after standardisation, can be used as a technique for supporting the diagnosis of selected gallbladder disorders from images of this organ.
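
    A hedged sketch of the rank-filtering step mentioned above: a median (rank) filter suppresses speckle before a simple line-profile analysis. The input file, kernel size, and edge threshold are illustrative assumptions, not the authors' full heuristic.

    ```python
    # Sketch: rank (median) filtering of an ultrasound frame and one line profile.
    import numpy as np
    from scipy.ndimage import median_filter

    us = np.load("gallbladder_us.npy")            # assumed 2D grayscale US frame
    smoothed = median_filter(us, size=5)          # rank filtering (median = 50th percentile)

    # Inspect a single line profile across the organ, as in the contour analysis.
    row = smoothed[smoothed.shape[0] // 2, :].astype(float)
    edges = np.flatnonzero(np.abs(np.diff(row)) > 15.0)   # assumed intensity-step threshold
    ```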

  2. A Computational Observer For Performing Contrast-Detail Analysis Of Ultrasound Images

    NASA Astrophysics Data System (ADS)

    Lopez, H.; Loew, M. H.

    1988-06-01

    Contrast-Detail (C/D) analysis allows the quantitative determination of an imaging system's ability to display a range of varying-size targets as a function of contrast. Using this technique, a contrast-detail plot is obtained which can, in theory, be used to compare image quality from one imaging system to another. The C/D plot, however, is usually obtained by using data from human observer readings. We have shown earlier(7) that the performance of human observers in the task of threshold detection of simulated lesions embedded in random ultrasound noise is highly inaccurate and non-reproducible for untrained observers. We present an objective, computational method for the determination of the C/D curve for ultrasound images. This method utilizes digital images of the C/D phantom developed at CDRH, and lesion-detection algorithms that simulate the Bayesian approach using the likelihood function for an ideal observer. We present the results of this method, and discuss the relationship to the human observer and to the comparability of image quality between systems.
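
    A minimal sketch of an ideal-observer style detector for this task: for a known disc-shaped lesion in additive white Gaussian noise, the log-likelihood ratio reduces to a matched-filter statistic, from which a detectability index can be estimated. The phantom geometry and noise level are assumptions.

    ```python
    # Sketch: matched-filter (ideal observer) decision statistic and detectability.
    import numpy as np

    rng = np.random.default_rng(2)
    size, radius, contrast, sigma = 64, 6, 0.4, 1.0

    yy, xx = np.mgrid[:size, :size]
    template = ((xx - size // 2) ** 2 + (yy - size // 2) ** 2 <= radius ** 2).astype(float)

    def decision_statistic(image):
        """Log-likelihood-ratio statistic for a known signal in white Gaussian noise."""
        return np.sum(template * image) / sigma ** 2

    # Simulate signal-present and signal-absent trials and estimate detectability d'.
    present = [decision_statistic(contrast * template + rng.normal(0, sigma, (size, size)))
               for _ in range(200)]
    absent = [decision_statistic(rng.normal(0, sigma, (size, size))) for _ in range(200)]
    d_prime = (np.mean(present) - np.mean(absent)) / np.std(absent)
    ```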

  3. Value of Image Fusion in Coronary Angiography for the Detection of Coronary Artery Bypass Grafts.

    PubMed

    Plessis, Julien; Warin Fresse, Karine; Cahouch, Zachary; Manigold, Thibaut; Letocart, Vincent; Le Gloan, Laurianne; Guyomarch, BĂ©atrice; Guerin, Patrice

    2016-06-10

    Coronary angiography is more complex in patients with coronary artery bypass grafts (CABG). Image fusion is a new technology that allows the overlay of a computed tomography (CT) three-dimensional (3D) model onto fluoroscopic images in real time. This single-center prospective study included 66 previous CABG patients undergoing coronary and bypass graft angiography. Image fusion coronary angiographies (fusion group, 20 patients) were compared with conventional coronary angiographies (control group, 46 patients). The fusion group included patients for whom a previous contrast-enhanced chest CT scan was available. For patients in this group, the aorta and CABG were reconstructed in 3D from the CT acquisitions and merged in real time with the fluoroscopic images. The following parameters were compared: time needed to localize the CABG; procedure duration; air kerma (AK); dose area product (DAP); and volume of contrast media injected. Results are expressed as medians. There were no significant differences between the 2 groups in patient demographics or procedure characteristics (access site, number of bypasses to be found, and interventional cardiologist's experience). The time to localize CABG was significantly shorter in the fusion group (7.3 versus 12.4 minutes; P=0.002), as were the procedure duration (20.6 versus 25.6 minutes; P=0.002), AK (610 versus 814 mGy; P=0.02), DAP (4390 versus 5922.5 cGy·cm²; P=0.02), and volume of iodinated contrast media (85 versus 116 cc; P=0.002). 3D image fusion improves CABG detection in coronary angiography and reduces the time necessary to localize CABG, total procedure duration, radiation exposure, and volume of contrast media. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  4. Radiolabelled leucocyte scintigraphy versus conventional radiological imaging for the management of late, low-grade vascular prosthesis infections.

    PubMed

    Erba, P A; Leo, G; Sollini, M; Tascini, C; Boni, R; Berchiolli, R N; Menichetti, F; Ferrari, M; Lazzeri, E; Mariani, G

    2014-02-01

    In this study we evaluated the diagnostic performance of (99m)Tc-HMPAO-leucocyte ((99m)Tc-HMPAO-WBC) scintigraphy in a consecutive series of 55 patients (46 men and 9 women, mean age 71 ± 9 years, range 50 - 88 years) with a suspected late or a low-grade late vascular prosthesis infection (VPI), also comparing the diagnostic accuracy of WBC with that of other radiological imaging methods. All patients suspected of having VPI underwent clinical examination, blood tests, microbiology, US and CT, and were classified according to the Fitzgerald criteria. A final diagnosis of VPI was established in 47 of the 55 patients, with microbiological confirmation after surgical removal of the prosthesis in 36 of the 47. In the 11 patients with major contraindications to surgery, the final diagnosis was based on microbiology and clinical follow-up of at least 18 months. (99m)Tc-HMPAO-WBC planar, SPECT and SPECT/CT imaging identified VPI in 43 of 47 patients (20 of these also showed infection at extra-prosthetic sites). In the remaining eight patients without VPI, different sites of infections were found. The use of SPECT/CT images led to a significant reduction in the number of false-positive findings in 37% of patients (sensitivity and specificity 100 %, versus 85.1% and 62.5% for stand-alone SPECT). Sensitivity and specificity were 34% and 75% for US, 48.9% and 83.3% for CT, and 68.1% and 62.5% for the FitzGerald classification. Perioperative mortality was 5.5%, mid-term mortality 12%, and long-term mortality 27%. Survival rates were similar in patients treated with surgery and antimicrobial therapy compared to patients treated with antimicrobial therapy alone (61% versus 63%, respectively), while infection eradication at 12 months was significantly higher following surgery (83.3% versus 45.5%). (99m)Tc-HMPAO-WBC SPECT/CT is useful for detecting, localizing and defining the extent of graft infection in patients with late and low-grade late VPI with inconclusive

  5. Quantitative Comparison of Virtual Monochromatic Images of Dual Energy Computed Tomography Systems: Beam Hardening Artifact Correction and Variance in Computed Tomography Numbers: A Phantom Study.

    PubMed

    Wu, Rongli; Watanabe, Yoshiyuki; Satoh, Kazuhiko; Liao, Yen-Peng; Takahashi, Hiroto; Tanaka, Hisashi; Tomiyama, Noriyuki

    2018-05-21

    The aim of this study was to quantitatively compare the reduction in beam hardening artifact (BHA) and variance in computed tomography (CT) numbers of virtual monochromatic energy (VME) images obtained with 3 dual-energy computed tomography (DECT) systems at a given radiation dose. Five different iodine concentrations were scanned using dual-energy and single-energy (120 kVp) modes. The BHA and CT number variance were evaluated. For higher iodine concentrations, 40 and 80 mgI/mL, BHA on VME imaging was significantly decreased when the energy was higher than 50 keV (P = 0.003) and 60 keV (P < 0.001) for GE, higher than 80 keV (P < 0.001) and 70 keV (P = 0.002) for Siemens, and higher than 40 keV (P < 0.001) and 60 keV (P < 0.001) for Toshiba, compared with single-energy CT imaging. Virtual monochromatic energy imaging can decrease BHA and improve CT number accuracy in different dual-energy computed tomography systems, depending on energy levels and iodine concentrations.

  6. Comparison of computer tomographic volumetry versus nuclear split renal function to determine residual renal function after living kidney donation.

    PubMed

    Patankar, Khalil; Low, Ronny Su-Tong; Blakeway, Darryn; Ferrari, Paolo

    2014-07-01

    Living-donor kidney transplantation is an established practice. Traditionally, a combination of renal scintigraphy and computed tomography (CT) is used to select the kidney to be harvested from each donor. To evaluate the ability of split renal volume (SRV), calculated from volumetric examination of CT images, compared to nuclear split renal function (nSRF) derived from gamma camera scintigraphy, to predict the donor's residual single-kidney function after donor nephrectomy. This pilot study comprised a retrospective analysis of CT images and renal scintigrams from 12 consecutive live kidney donors who had at least 12 months of post-donation renal function follow-up. nSRF derived from the renal scintigram, expressed as the right kidney's function in percent of the total, was 50.2 ± 3.3 (range, 44.1-54.0%), and SRV estimated from analysis of CT imaging was 49.0 ± 2.9 (range, 46.4-52.3%). Although the correlation between nSRF and SRV was moderate (R = 0.46), there was 92% agreement on the dominant kidney if a difference of <2% in nSRF versus SRV was considered. Post-donation glomerular filtration rate (GFR) by the CKD-EPI formula was 92 ± 10 mL/min/1.73 m² at 1 year, and the correlation between estimated GFR (eGFR) at 1 year and extrapolated single-kidney eGFR adjusted by nSRF (R² = 0.69, P = 0.0007) or SRV (R² = 0.74, P = 0.0003) was similar. Calculation of SRV from pre-donation CT examination is a valid method to estimate nSRF, with good concordance with nSRF determined by renal scintigraphy, and could replace the latter in the assessment of potential kidney donors. © The Foundation Acta Radiologica 2013.

  7. Image analysis in modern ophthalmology: from acquisition to computer assisted diagnosis and telemedicine

    NASA Astrophysics Data System (ADS)

    Marrugo, Andrés G.; Millán, María S.; Cristóbal, Gabriel; Gabarda, Salvador; Sorel, Michal; Sroubek, Filip

    2012-06-01

    Medical digital imaging has become a key element of modern health care procedures. It provides visual documentation and a permanent record for the patient and, most importantly, the ability to extract information about many diseases. Modern ophthalmology thrives and develops on the advances in digital imaging and computing power. In this work we present an overview of recent image processing techniques proposed by the authors in the area of digital eye fundus photography. Our applications range from retinal image quality assessment to image restoration via blind deconvolution and visualization of structural changes over time between patient visits. All are proposed within a framework for improving and assisting medical practice and the forthcoming information chain in telemedicine.

  8. Limited Evaluation of Image Quality Produced by a Portable Head CT Scanner (CereTom) in a Neurosurgery Centre.

    PubMed

    Abdullah, Ariz Chong; Adnan, Johari Siregar; Rahman, Noor Azman A; Palur, Ravikant

    2017-03-01

    Computed tomography (CT) is the preferred diagnostic tool for head and brain imaging of head injury. A recent development is the invention of a portable CT scanner that can be beneficial from a clinical point of view. To compare the quality of CT brain images produced by a fixed CT scanner and a portable CT scanner (CereTom). This work was a single-centre retrospective study of CT brain images from 112 neurosurgical patients. Hounsfield units (HUs) of the images from CereTom were measured for air, water and bone. Three assessors independently evaluated the images from the fixed CT scanner and CereTom. Streak artefacts, visualisation of lesions and grey-white matter differentiation were evaluated at three different levels (centrum semiovale, basal ganglia and middle cerebellar peduncles). Each evaluation was scored 1 (poor), 2 (average) or 3 (good) and summed up to form an ordinal reading of 3 to 9. HUs for air, water and bone from CereTom were within the values recommended by the American College of Radiology (ACR). Streak artefact evaluation scores for the fixed CT scanner were 8.54 versus 7.46 (Z = -5.67) for CereTom at the centrum semiovale, 8.38 (SD = 1.12) versus 7.32 (SD = 1.63) at the basal ganglia and 8.21 (SD = 1.30) versus 6.97 (SD = 2.77) at the middle cerebellar peduncles. Grey-white matter differentiation showed scores of 8.27 (SD = 1.04) versus 7.21 (SD = 1.41) at the centrum semiovale, 8.26 (SD = 1.07) versus 7.00 (SD = 1.47) at the basal ganglia and 8.38 (SD = 1.11) versus 6.74 (SD = 1.55) at the middle cerebellar peduncles. Visualisation of lesions showed scores of 8.86 versus 8.21 (Z = -4.24) at the centrum semiovale, 8.93 versus 8.18 (Z = -5.32) at the basal ganglia and 8.79 versus 8.06 (Z = -4.93) at the middle cerebellar peduncles. All results were significant with P-value < 0.01. Results of the study showed a significant difference in image quality produced by the fixed CT scanner and CereTom, with the latter being inferior to the

  9. Limited Evaluation of Image Quality Produced by a Portable Head CT Scanner (CereTom) in a Neurosurgery Centre

    PubMed Central

    Abdullah, Ariz Chong; Adnan, Johari Siregar; Rahman, Noor Azman A.; Palur, Ravikant

    2017-01-01

    Introduction Computed tomography (CT) is the preferred diagnostic tool for head and brain imaging of head injury. A recent development is the invention of a portable CT scanner that can be beneficial from a clinical point of view. Aim To compare the quality of CT brain images produced by a fixed CT scanner and a portable CT scanner (CereTom). Methods This work was a single-centre retrospective study of CT brain images from 112 neurosurgical patients. Hounsfield units (HUs) of the images from CereTom were measured for air, water and bone. Three assessors independently evaluated the images from the fixed CT scanner and CereTom. Streak artefacts, visualisation of lesions and grey–white matter differentiation were evaluated at three different levels (centrum semiovale, basal ganglia and middle cerebellar peduncles). Each evaluation was scored 1 (poor), 2 (average) or 3 (good) and summed up to form an ordinal reading of 3 to 9. Results HUs for air, water and bone from CereTom were within the values recommended by the American College of Radiology (ACR). Streak artefact evaluation scores for the fixed CT scanner were 8.54 versus 7.46 (Z = −5.67) for CereTom at the centrum semiovale, 8.38 (SD = 1.12) versus 7.32 (SD = 1.63) at the basal ganglia and 8.21 (SD = 1.30) versus 6.97 (SD = 2.77) at the middle cerebellar peduncles. Grey–white matter differentiation showed scores of 8.27 (SD = 1.04) versus 7.21 (SD = 1.41) at the centrum semiovale, 8.26 (SD = 1.07) versus 7.00 (SD = 1.47) at the basal ganglia and 8.38 (SD = 1.11) versus 6.74 (SD = 1.55) at the middle cerebellar peduncles. Visualisation of lesions showed scores of 8.86 versus 8.21 (Z = −4.24) at the centrum semiovale, 8.93 versus 8.18 (Z = −5.32) at the basal ganglia and 8.79 versus 8.06 (Z = −4.93) at the middle cerebellar peduncles. All results were significant with P-value < 0.01. Conclusions Results of the study showed a significant difference in image quality produced by the fixed CT scanner and

  10. Students Computer Literacy: Perception versus Reality

    ERIC Educational Resources Information Center

    Wilkinson, Kelly

    2006-01-01

    Students believe that they are computer literate. When asked, students perceive themselves as skilled in a variety of computer applications. This research compares students' perceptions with their reality. Students did not perform well on pretests of Microsoft Office, but improved their posttest scores with instruction. The study also examined…

  11. Plain X-ray, computed tomography and magnetic resonance imaging findings of telangiectatic osteosarcoma: a case report

    PubMed Central

    Koutoulidis, Vasilios; Koureas, Andreas; Moulopoulos, Lia; Gouliamos, Athanasios

    2009-01-01

    An 18-year-old male patient presented with chronic nonspecific pain of three months located at his left proximal tibia. The patient was admitted to our department for plain X-ray, computed tomography and magnetic resonance imaging examination. Plain X-ray and computed tomography revealed a geographic lytic lesion at the medial aspect of the proximal tibia. Biopsy of the lesion showed telangiectatic osteosarcoma. Image findings of all modalities are presented. PMID:19918488

  12. Comparison of image features calculated in different dimensions for computer-aided diagnosis of lung nodules

    NASA Astrophysics Data System (ADS)

    Xu, Ye; Lee, Michael C.; Boroczky, Lilla; Cann, Aaron D.; Borczuk, Alain C.; Kawut, Steven M.; Powell, Charles A.

    2009-02-01

    Features calculated from different dimensions of images capture quantitative information about lung nodules through one or multiple image slices. Previously published computer-aided diagnosis (CADx) systems have used either two-dimensional (2D) or three-dimensional (3D) features, though there has been little systematic analysis of the relevance of the different dimensions or of the impact of combining them. The aim of this study is to determine the importance of combining features calculated in different dimensions. We performed CADx experiments on 125 pulmonary nodules imaged using multi-detector row CT (MDCT). The CADx system computed 192 2D, 2.5D, and 3D image features of the lesions. Leave-one-out experiments were performed using five different combinations of features from different dimensions: 2D, 3D, 2.5D, 2D+3D, and 2D+3D+2.5D. The experiments were performed ten times for each group. Accuracy, sensitivity and specificity were used to evaluate the performance. Wilcoxon signed-rank tests were applied to compare the classification results from these five combinations of features. Our results showed that 3D image features generated the best results compared with the other combinations of features. This suggests one approach to potentially reducing the dimensionality of the CADx data space and the computational complexity of the system while maintaining diagnostic accuracy.
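
    A hedged sketch of the experimental design: compare classifier performance under leave-one-out cross-validation for different groups of feature columns (e.g. 2D-only versus 3D-only). The feature matrix, labels, column layout, and classifier choice are synthetic stand-ins, not the study's CADx system.

    ```python
    # Sketch: leave-one-out comparison of feature-group combinations.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(3)
    n_nodules = 125
    X = rng.normal(size=(n_nodules, 192))        # 192 features per nodule (assumed layout)
    y = rng.integers(0, 2, n_nodules)            # benign/malignant labels (synthetic)

    feature_groups = {"2D": slice(0, 96), "3D": slice(96, 160), "2D+3D": slice(0, 160)}

    for name, cols in feature_groups.items():
        pred = cross_val_predict(LogisticRegression(max_iter=1000),
                                 X[:, cols], y, cv=LeaveOneOut())
        print(name, "LOO accuracy:", accuracy_score(y, pred))
    ```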

  13. Assessment methodologies and statistical issues for computer-aided diagnosis of lung nodules in computed tomography: contemporary research topics relevant to the lung image database consortium.

    PubMed

    Dodd, Lori E; Wagner, Robert F; Armato, Samuel G; McNitt-Gray, Michael F; Beiden, Sergey; Chan, Heang-Ping; Gur, David; McLennan, Geoffrey; Metz, Charles E; Petrick, Nicholas; Sahiner, Berkman; Sayre, Jim

    2004-04-01

    Cancer of the lung and bronchus is the leading fatal malignancy in the United States. Five-year survival is low, but treatment of early stage disease considerably improves chances of survival. Advances in multidetector-row computed tomography technology provide detection of smaller lung nodules and offer a potentially effective screening tool. The large number of images per exam, however, requires considerable radiologist time for interpretation and is an impediment to clinical throughput. Thus, computer-aided diagnosis (CAD) methods are needed to assist radiologists with their decision making. To promote the development of CAD methods, the National Cancer Institute formed the Lung Image Database Consortium (LIDC). The LIDC is charged with developing the consensus and standards necessary to create an image database of multidetector-row computed tomography lung images as a resource for CAD researchers. To develop such a prospective database, its potential uses must be anticipated. The ultimate applications will influence the information that must be included along with the images, the relevant measures of algorithm performance, and the number of required images. In this article we outline assessment methodologies and statistical issues as they relate to several potential uses of the LIDC database. We review methods for performance assessment and discuss issues of defining "truth" as well as the complications that arise when truth information is not available. We also discuss issues about sizing and populating a database.

  14. Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing

    NASA Technical Reports Server (NTRS)

    Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane

    2012-01-01

    Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, disaster response requires rapid access to large data volumes, substantial storage space and high performance processing capability. The processing and distribution of these data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation describes work being conducted by the Applied Sciences Program Office at NASA Stennis Space Center. A prototypical set of image manipulation and transformation processes that incorporate sample Unmanned Airborne System data were developed to create value-added products and tested for implementation on the "cloud". This project outlines the steps involved in creating and testing open source process code developed on a local prototype platform, and then transitioning this code, with its associated environment requirements, to an analogous but memory- and processor-enhanced cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar data processes. Cloud infrastructure service providers were evaluated by taking these locally tested processing functions, and then
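
    Two of the per-pixel products named above can be computed directly from multi-band arrays, as in this minimal sketch; the band inputs are random stand-ins for calibrated reflectance bands read from imagery.

    ```python
    # Sketch: NDVI and NDMI computed per pixel from red, NIR, and SWIR bands.
    import numpy as np

    def normalized_difference(a, b):
        """(a - b) / (a + b), guarding against division by zero."""
        a, b = a.astype(float), b.astype(float)
        denom = a + b
        return np.where(denom != 0, (a - b) / denom, 0.0)

    red = np.random.rand(512, 512)      # stand-ins for calibrated reflectance bands
    nir = np.random.rand(512, 512)
    swir = np.random.rand(512, 512)

    ndvi = normalized_difference(nir, red)    # Normalized Difference Vegetation Index
    ndmi = normalized_difference(nir, swir)   # Normalized Difference Moisture Index
    ```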

  15. Signal and image processing algorithm performance in a virtual and elastic computing environment

    NASA Astrophysics Data System (ADS)

    Bennett, Kelly W.; Robertson, James

    2013-05-01

    The U.S. Army Research Laboratory (ARL) supports the development of classification, detection, tracking, and localization algorithms using multiple sensing modalities including acoustic, seismic, E-field, magnetic field, PIR, and visual and IR imaging. Multimodal sensors collect large amounts of data in support of algorithm development. The resulting large amount of data, and the associated high-performance computing needs, increase and challenge existing computing infrastructures. Purchasing computing power as a commodity from a cloud service offers low-cost, pay-as-you-go pricing models, scalability, and elasticity that may provide solutions for developing and optimizing algorithms without having to procure additional hardware and resources. This paper provides a detailed look at using a commercial cloud service provider, such as Amazon Web Services (AWS), to develop and deploy simple signal and image processing algorithms in a cloud and to run the algorithms on a large set of data archived in the ARL Multimodal Signatures Database (MMSDB). Analytical results provide performance comparisons with existing infrastructure. A discussion of using cloud computing with government data covers best security practices that exist within cloud services, such as AWS.

  16. Computed tomography and magnetic resonance imaging findings of intraorbital granular cell tumor (Abrikossoff's tumor): a case report.

    PubMed

    Yuan, Wei-Hsin; Lin, Tai-Chi; Lirng, Jiing-Feng; Guo, Wan-You; Chang, Fu-Pang; Ho, Donald Ming-Tak

    2016-05-13

    Granular cell tumors are rare neoplasms which can occur in any part of the body. Granular cell tumors of the orbit account for only 3% of all granular cell tumor cases. Computed tomography and magnetic resonance imaging of the orbit have proven useful for diagnosing orbital tumors. However, the rarity of intraorbital granular cell tumors poses a significant diagnostic challenge for both clinicians and radiologists. We report a case of a 37-year-old Chinese woman with a rare intraorbital granular cell tumor of her right eye presenting with diplopia, proptosis, and restriction of ocular movement. Preoperative orbital computed tomography and magnetic resonance imaging with contrast enhancement revealed an enhancing solid, ovoid, well-demarcated, retrobulbar nodule. In addition, magnetic resonance imaging features included an intraorbital tumor which was isointense relative to gray matter on T1-weighted imaging and hypointense on T2-weighted imaging. No diffusion restriction of water was noted on either axial diffusion-weighted images or apparent diffusion coefficient maps. Both computed tomography and magnetic resonance imaging features suggested an intraorbital hemangioma. However, postoperative pathology (together with immunohistochemistry) identified an intraorbital granular cell tumor. When intraorbital T2 hypointensity and free diffusion of water are observed on magnetic resonance imaging, a granular cell tumor should be included in the differential diagnosis of an intraorbital tumor.

  17. 3D-Printed Tissue-Mimicking Phantoms for Medical Imaging and Computational Validation Applications

    PubMed Central

    Shahmirzadi, Danial; Li, Ronny X.; Doyle, Barry J.; Konofagou, Elisa E.; McGloughlin, Tim M.

    2014-01-01

    Abdominal aortic aneurysm (AAA) is a permanent, irreversible dilation of the distal region of the aorta. Recent efforts have focused on improved AAA screening and biomechanics-based failure prediction. Idealized and patient-specific AAA phantoms are often employed to validate numerical models and imaging modalities. To produce such phantoms, the investment casting process is frequently used, reconstructing the 3D vessel geometry from computed tomography patient scans. In this study the alternative use of 3D printing to produce phantoms is investigated. The mechanical properties of flexible 3D-printed materials are benchmarked against proven elastomers. We demonstrate the utility of this process with particular application to the emerging imaging modality of ultrasound-based pulse wave imaging, a noninvasive diagnostic methodology being developed to obtain regional vascular wall stiffness properties, differentiating normal and pathologic tissue in vivo. Phantom wall displacements under pulsatile loading conditions were observed, showing good correlation to fluid–structure interaction simulations and regions of peak wall stress predicted by finite element analysis. 3D-printed phantoms show a strong potential to improve medical imaging and computational analysis, potentially helping bridge the gap between experimental and clinical diagnostic tools. PMID:28804733

  18. Computational photoacoustic imaging with sparsity-based optimization of the initial pressure distribution

    NASA Astrophysics Data System (ADS)

    Shang, Ruibo; Archibald, Richard; Gelb, Anne; Luke, Geoffrey P.

    2018-02-01

    In photoacoustic (PA) imaging, the optical absorption can be recovered from the initial pressure distribution (IPD). An accurate reconstruction of the IPD is therefore very helpful for reconstructing the optical absorption. However, the image quality of PA imaging in scattering media is degraded by acoustic diffraction, imaging artifacts, and weak PA signals. In this paper, we propose a sparsity-based optimization approach that improves the reconstruction of the IPD in PA imaging. A linear imaging forward model was set up based on a time-and-delay method, with the assumption that the point spread function (PSF) is spatially invariant. An optimization problem was then formulated with a regularization term expressing the sparsity of the IPD in a certain domain, in order to solve this inverse problem. As a proof of principle, the approach was applied to reconstructing point objects and blood vessel phantoms. The resolution and signal-to-noise ratio (SNR) were compared between conventional back-projection and our proposed approach. Overall, these results show that computational imaging can leverage the sparsity of PA images to improve the estimation of the IPD.
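
    A hedged sketch of a sparsity-regularized reconstruction of this general kind: with a linear forward model y = A x (a spatially invariant PSF applied to the IPD x), solve min_x 1/2||Ax - y||^2 + lam*||x||_1 with the iterative shrinkage-thresholding algorithm (ISTA). The PSF, toy data, and regularization weight are assumptions, not the authors' model.

    ```python
    # Sketch: ISTA for a sparse deconvolution / IPD-estimation toy problem.
    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, y, lam=0.05, n_iter=200):
        """ISTA for min 0.5*||A x - y||^2 + lam * ||x||_1."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)
            x = soft_threshold(x - step * grad, step * lam)
        return x

    # Toy 1D example: sparse "point absorbers" blurred by a Gaussian PSF plus noise.
    rng = np.random.default_rng(4)
    n = 128
    x_true = np.zeros(n)
    x_true[[20, 55, 90]] = [1.0, 0.6, 0.8]
    psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)
    A = np.array([np.roll(psf, k - n // 2) for k in range(n)]).T   # circular convolution matrix
    y = A @ x_true + 0.01 * rng.normal(size=n)

    x_rec = ista(A, y)
    ```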

  19. Interpretive versus noninterpretive content in top-selling radiology textbooks: what are we teaching medical students?

    PubMed

    Webb, Emily M; Vella, Maya; Straus, Christopher M; Phelps, Andrew; Naeger, David M

    2015-04-01

    There are few data on whether appropriate, cost-effective, and safe ordering of imaging examinations is adequately taught in US medical school curricula. We sought to determine the proportion of noninterpretive content (such as appropriate ordering) versus interpretive content (such as reading a chest x-ray) in the top-selling medical student radiology textbooks. We performed an online search to identify a ranked list of the six top-selling general radiology textbooks for medical students. Each textbook was reviewed, including content in the text, tables, images, figures, appendices, practice questions, question explanations, and glossaries. Individual pages of text and individual images were semiquantitatively scored on a six-level scale as to the percentage of material that was interpretive versus noninterpretive. The predominant imaging modality addressed in each was also recorded. Descriptive statistical analysis was performed. All six books had more interpretive content. On average, 1.4 pages of text focused on interpretation for every one page focused on noninterpretive content. Seventeen images/figures were dedicated to interpretive skills for every one focused on noninterpretive skills. In all books, the largest proportion of text and image content was dedicated to plain films (51.2%), with computed tomography (CT) a distant second (16%). The content on radiographs (3.1:1) and CT (1.6:1) was more interpretive than not. The current six top-selling medical student radiology textbooks contain a preponderance of material teaching image interpretation compared with material teaching noninterpretive skills, such as appropriate imaging examination selection, rational utilization, and patient safety. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  20. Intraoperative cone-beam computed tomography and multi-slice computed tomography in temporal bone imaging for surgical treatment.

    PubMed

    Erovic, Boban M; Chan, Harley H L; Daly, Michael J; Pothier, David D; Yu, Eugene; Coulson, Chris; Lai, Philip; Irish, Jonathan C

    2014-01-01

    Conventional computed tomography (CT) imaging is the standard imaging technique for temporal bone diseases, whereas cone-beam CT (CBCT) imaging is a very fast imaging tool with a significantly lower radiation dose compared with conventional CT. We hypothesize that a system for intraoperative cone-beam CT provides image quality comparable to diagnostic CT for identifying temporal bone anatomical landmarks in cadaveric specimens. Cross-sectional study. University tertiary care facility. Twenty cadaveric temporal bones were affixed into a head phantom and scanned with both a prototype cone-beam CT C-arm and multislice helical CT. Imaging performance was evaluated by 3 otologic surgeons and 1 head and neck radiologist. Participants were presented with images in randomized order and completed landmark identification questionnaires covering 21 structures. CBCT and multislice CT had comparable performance in identifying temporal bone structures. The three otologic surgeons indicated that CBCT provided statistically equivalent performance for 19 of 21 landmarks, with CBCT superior to CT for the chorda tympani and inferior for the crura of the stapes. Subgroup analysis showed that CBCT performed better for temporal bone structures compared with CT. The radiologist rated CBCT and CT as statistically equivalent for 18 of 21 landmarks, with CT superior to CBCT for the crura of the stapes, chorda tympani, and sigmoid sinus. CBCT provides image quality comparable to conventional CT for temporal bone anatomical sites in cadaveric specimens. Clinical applications of low-dose CBCT imaging in surgical planning, intraoperative guidance, and postoperative assessment are promising but require further investigation.

  1. Quantitative 3-D Imaging, Segmentation and Feature Extraction of the Respiratory System in Small Mammals for Computational Biophysics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trease, Lynn L.; Trease, Harold E.; Fowler, John

    2007-03-15

    One of the critical steps toward performing computational biology simulations using mesh-based integration methods is using topologically faithful geometry derived from experimental digital image data as the basis for generating the computational meshes. Digital image data representations contain both the topology of the geometric features and experimental field data distributions. The geometric features that need to be captured from the digital image data are three-dimensional; therefore, the process and tools we have developed work with volumetric image data represented as data-cubes. This allows us to take advantage of 2D curvature information during the segmentation and feature extraction process. The process is basically: 1) segmenting to isolate and enhance the contrast of the features that we wish to extract and reconstruct, 2) extracting the geometry of the features with an isosurfacing technique, and 3) building the computational mesh using the extracted feature geometry. "Quantitative" image reconstruction and feature extraction is done for the purpose of generating computational meshes, not just for producing graphics "screen" quality images. For example, the surface geometry that we extract must represent a closed, water-tight surface.
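
    Step 2 of the process can be sketched with the marching-cubes implementation in scikit-image, which extracts an isosurface from a segmented data-cube; the synthetic volume and iso-level are assumptions, and the authors' pipeline additionally guarantees closed, water-tight surfaces.

    ```python
    # Sketch: isosurface (vertices and triangles) extracted from a segmented data-cube.
    import numpy as np
    from skimage.measure import marching_cubes

    # Synthetic segmented data-cube: a blurred sphere standing in for an anatomical feature.
    z, y, x = np.mgrid[-32:32, -32:32, -32:32]
    volume = np.exp(-(x ** 2 + y ** 2 + z ** 2) / (2 * 12.0 ** 2))

    verts, faces, normals, values = marching_cubes(volume, level=0.5, spacing=(1.0, 1.0, 1.0))
    print(f"{verts.shape[0]} vertices, {faces.shape[0]} triangles")
    # verts/faces would then be handed to a mesher to build the computational mesh.
    ```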

  2. Multispectral medical image fusion in Contourlet domain for computer based diagnosis of Alzheimer's disease.

    PubMed

    Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong

    2016-07-01

    Computer-based diagnosis of Alzheimer's disease can be performed by analyzing the functional and structural changes in the brain. Multispectral image fusion combines the complementary information, while discarding the surplus information, to achieve a single image that contains both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using the NSCT, followed by dimensionality reduction using a modified Principal Component Analysis algorithm on the low-frequency coefficients. Further, the high-frequency coefficients are enhanced using a non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: phase congruency is applied to the low-frequency coefficients, and a combination of directive contrast and normalized Shannon entropy is applied to the high-frequency coefficients. The superiority of the fusion response is demonstrated by comparisons with other state-of-the-art fusion approaches (in terms of various fusion metrics).

  3. Dual source and dual detector arrays tetrahedron beam computed tomography for image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Kim, Joshua; Lu, Weiguo; Zhang, Tiezhi

    2014-02-01

    Cone-beam computed tomography (CBCT) is an important online imaging modality for image guided radiotherapy. However, suboptimal image quality and the lack of a real-time stereoscopic imaging function limit its implementation in advanced treatment techniques, such as online adaptive and 4D radiotherapy. Tetrahedron beam computed tomography (TBCT) is a novel online imaging modality designed to improve on the image quality provided by CBCT. TBCT geometry is flexible, and multiple detector and source arrays can be used for different applications. In this paper, we describe a novel dual source-dual detector TBCT system that is specially designed for LINAC radiation treatment machines. The imaging system is positioned in-line with the MV beam and is composed of two linear-array x-ray sources mounted beside the electronic portal imaging device and two linear arrays of x-ray detectors mounted below the machine head. The detector and x-ray source arrays are orthogonal to each other, and each pair of source and detector arrays forms a tetrahedral volume. Four planar images can be obtained from different view angles at each gantry position at a frame rate as high as 20 frames per second. The overlapped regions provide a stereoscopic field of view of approximately 10-15 cm. With a half gantry rotation, a volumetric CT image can be reconstructed with a 45 cm field of view. Because of the scatter-rejecting design of the TBCT geometry, the system can potentially produce high-quality 2D and 3D images with less radiation exposure. The design of the dual source-dual detector system is described, and preliminary results of studies performed on numerical phantoms and simulated patient data are presented.

  4. Modeling resident error-making patterns in detection of mammographic masses using computer-extracted image features: preliminary experiments

    NASA Astrophysics Data System (ADS)

    Mazurowski, Maciej A.; Zhang, Jing; Lo, Joseph Y.; Kuzmiak, Cherie M.; Ghate, Sujata V.; Yoon, Sora

    2014-03-01

    Providing high-quality mammography education to radiology trainees is essential, as good interpretation skills potentially ensure the highest benefit of screening mammography for patients. We have previously proposed a computer-aided education system that utilizes trainee models, which relate human-assessed image characteristics to interpretation error. We proposed that these models be used to identify the most difficult, and therefore the most educationally useful, cases for each trainee. In this study, as a next step in our research, we propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms. To predict error, we used logistic regression, which accepts imaging features as input and returns error as output. Reader data from 3 experts and 3 trainees were used. Receiver operating characteristic analysis was applied to evaluate the proposed trainee models. Our experiments showed that, for three trainees, our models were able to predict error better than chance. This is an important step in the development of adaptive computer-aided education systems, since computer-extracted features will allow for faster and more extensive searches of imaging databases in order to identify the most educationally beneficial cases.
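
    A hedged sketch of such a trainee model: a logistic regression mapping computer-extracted image features to the probability that a trainee errs on a case, evaluated with ROC analysis. The features and error labels are synthetic stand-ins.

    ```python
    # Sketch: logistic-regression trainee error model with cross-validated ROC AUC.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(5)
    X = rng.normal(size=(150, 12))                 # computer-extracted features per case
    w_true = rng.normal(size=12)
    error = (rng.random(150) < 1 / (1 + np.exp(-X @ w_true))).astype(int)  # synthetic labels

    model = LogisticRegression(max_iter=1000)
    scores = cross_val_predict(model, X, error, cv=5, method="predict_proba")[:, 1]
    print("ROC AUC:", roc_auc_score(error, scores))
    # Cases with the highest predicted error probability would be flagged as the
    # most educationally useful for that trainee.
    ```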

  5. Imaging the Aqueous Humor Outflow Pathway in Human Eyes by Three-dimensional Micro-computed Tomography (3D micro-CT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C Hann; M Bentley; A Vercnocke

    2011-12-31

    The site of outflow resistance leading to elevated intraocular pressure in primary open-angle glaucoma is believed to be located in the region of Schlemm's canal inner wall endothelium, its basement membrane and the adjacent juxtacanalicular tissue. Evidence also suggests collector channels and intrascleral vessels may have a role in intraocular pressure in both normal and glaucoma eyes. Traditional imaging modalities limit the ability to view both proximal and distal portions of the trabecular outflow pathway as a single unit. In this study, we examined the effectiveness of three-dimensional micro-computed tomography (3D micro-CT) as a potential method to view the trabecular outflow pathway. Two normal human eyes were used: one immersion fixed in 4% paraformaldehyde and one with anterior chamber perfusion at 10 mmHg followed by perfusion fixation in 4% paraformaldehyde/2% glutaraldehyde. Both eyes were postfixed in 1% osmium tetroxide and scanned with 3D micro-CT at 2 µm or 5 µm voxel resolution. In the immersion fixed eye, 24 collector channels were identified with an average orifice size of 27.5 ± 5 µm. In comparison, the perfusion fixed eye had 29 collector channels with a mean orifice size of 40.5 ± 13 µm. Collector channels were not evenly dispersed around the circumference of the eye. There was no significant difference in the length of Schlemm's canal in the immersed versus the perfused eye (33.2 versus 35.1 mm). Structures, locations and size measurements identified by 3D micro-CT were confirmed by correlative light microscopy. These findings confirm 3D micro-CT can be used effectively for the non-invasive examination of the trabecular meshwork, Schlemm's canal, collector channels and intrascleral vasculature that comprise the distal outflow pathway. This imaging modality will be useful for non-invasive study of the role of the trabecular outflow pathway as a whole unit.

  6. High resolution computational on-chip imaging of biological samples using sparsity constraint (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Rivenson, Yair; Wu, Chris; Wang, Hongda; Zhang, Yibo; Ozcan, Aydogan

    2017-03-01

    Microscopic imaging of biological samples such as pathology slides is one of the standard diagnostic methods for screening various diseases, including cancer. These biological samples are usually imaged using traditional optical microscopy tools; however, the high cost, bulkiness and limited imaging throughput of traditional microscopes partially restrict their deployment in resource-limited settings. In order to mitigate this, we previously demonstrated a cost-effective and compact lens-less on-chip microscopy platform with a wide field-of-view of >20-30 mm^2. The lens-less microscopy platform has shown its effectiveness for imaging of highly connected biological samples, such as pathology slides of various tissue samples and smears, among others. This computational holographic microscope requires a set of super-resolved holograms acquired at multiple sample-to-sensor distances, which are used as input to an iterative phase recovery algorithm and holographic reconstruction process, yielding high-resolution images of the samples in phase and amplitude channels. Here we demonstrate that in order to reconstruct clinically relevant images with high resolution and image contrast, we require less than 50% of the previously reported nominal number of holograms acquired at different sample-to-sensor distances. This is achieved by incorporating a loose sparsity constraint as part of the iterative holographic object reconstruction. We demonstrate the success of this sparsity-based computational lens-less microscopy platform by imaging pathology slides of breast cancer tissue and Papanicolaou (Pap) smears.

  7. Preliminary application of high-definition computed tomographic Gemstone Spectral Imaging in lung cancer.

    PubMed

    Wang, Guangli; Zhang, Chengqi; Li, Mingying; Deng, Kai; Li, Wei

    2014-01-01

    To evaluate the feasibility of multiparameter quantitative measurement of lung cancer by Gemstone Spectral Imaging (GSI) high-definition computed tomography. Seventy-seven patients who were found to have a lung mass or nodule on plain CT scan for the first time received a contrast-enhanced chest CT scan in GSI mode on high-definition computed tomography. The GSI viewer was used to display the spectral curve, iodine-based images, water-based images, and 101 sets of monochromatic images of a selected region of interest from a relatively homogeneous area of the mass or nodule. Iodine concentration, water concentration, spectral curve slope, and CT values at 40 keV of the region of interest were measured. Finally, 68 eligible patients were divided into a pneumonia group (n = 24) and a malignant tumor group (n = 44, including squamous carcinoma, n = 29, and adenocarcinoma, n = 15). Significant differences existed in iodine concentration (t = 6.459), spectral curve slope (t = 6.276), and CT values at 40 keV (t = 6.698) between the pneumonia group and the malignant tumor group (P < 0.05), as well as between squamous carcinoma and adenocarcinoma (t = 6.494, 5.634, 6.091, respectively; P < 0.05), whereas water concentration showed no difference between the 2 groups (t = 0.082, P > 0.05) or between the 2 types of malignant tumors (t = 1.234, P > 0.05). The high-definition computed tomography GSI technique might help differentiate lung cancer from benign lung lesions by providing qualitative and quantitative information.

  8. Maximum entropy method applied to deblurring images on a MasPar MP-1 computer

    NASA Technical Reports Server (NTRS)

    Bonavito, N. L.; Dorband, John; Busse, Tim

    1991-01-01

    A statistical inference method based on the principle of maximum entropy is developed for the purpose of enhancing and restoring satellite images. The proposed maximum entropy image restoration method is shown to overcome the difficulties associated with image restoration and provide the smoothest and most appropriate solution consistent with the measured data. An implementation of the method on the MP-1 computer is described, and results of tests on simulated data are presented.

  9. A New Method for Computed Tomography Angiography (CTA) Imaging via Wavelet Decomposition-Dependented Edge Matching Interpolation.

    PubMed

    Li, Zeyu; Chen, Yimin; Zhao, Yan; Zhu, Lifeng; Lv, Shengqing; Lu, Jiahui

    2016-08-01

    Interpolation of computed tomography angiography (CTA) images enables 3D reconstruction and reduces both examination cost and radiation dose. However, most image interpolation algorithms cannot provide both automation and accuracy. This study presents a new edge-matching interpolation algorithm for CTA based on wavelet decomposition. It includes mark, scale and calculation (MSC). Using real clinical image data, this study mainly describes how to search for the proportional factor and use the root-mean-square operator to find a mean value. Furthermore, we re-synthesize the high-frequency and low-frequency parts of the processed image by the inverse wavelet transform to obtain the final interpolated image. MSC can make up for the shortcomings of conventional computed tomography (CT) and magnetic resonance imaging (MRI) examinations. Radiation absorption and examination time were significantly reduced with the proposed synthesized images. In clinical application, it can help doctors find hidden lesions in time, while reducing patients' economic burden and absorbed radiation exposure.

  10. Computational Chemical Imaging for Cardiovascular Pathology: Chemical Microscopic Imaging Accurately Determines Cardiac Transplant Rejection

    PubMed Central

    Tiwari, Saumya; Reddy, Vijaya B.; Bhargava, Rohit; Raman, Jaishankar

    2015-01-01

    Rejection is a common problem after cardiac transplantation, leading to a significant number of adverse events and deaths, particularly in the first year after transplantation. The gold standard for identifying rejection is endomyocardial biopsy. This technique is complex, cumbersome and requires considerable expertise in the correct interpretation of stained biopsy sections. Traditional histopathology cannot be used actively or quickly during cardiac interventions or surgery. Our objective was to develop a stain-less approach using an emerging technology, Fourier transform infrared (FT-IR) spectroscopic imaging, to identify different components of cardiac tissue by their chemical and molecular basis aided by computer recognition, rather than by visual examination using optical microscopy. We studied this technique in the assessment of cardiac transplant rejection to evaluate its efficacy in an example of complex cardiovascular pathology. We recorded data from human cardiac transplant patients' biopsies, used a Bayesian classification protocol and developed a visualization scheme to observe chemical differences without the need for stains or human supervision. Using receiver operating characteristic curves, we observed probabilities of detection greater than 95% for four out of five histological classes at a 10% probability of false alarm at the cellular level, while correctly identifying samples with the hallmarks of the immune response in all cases. The efficacy of manual examination can be significantly increased by observing the inherent biochemical changes in tissues, which enables us to achieve greater diagnostic confidence in an automated, label-free manner. We developed a computational pathology system that gives high-contrast images and appears superior to traditional staining procedures. This study is a prelude to the development of real-time in situ imaging systems, which can assist interventionists and surgeons actively during procedures. PMID:25932912

  11. In vivo rat deep brain imaging using photoacoustic computed tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lin, Li; Li, Lei; Zhu, Liren; Hu, Peng; Wang, Lihong V.

    2017-03-01

    The brain has been likened to a great stretch of unknown territory consisting of a number of unexplored continents. Small animal brain imaging plays an important role in charting that territory. By using 1064 nm illumination from the side, we imaged the full coronal depth of rat brains in vivo. The experiment was performed using a real-time full-ring-array photoacoustic computed tomography (PACT) imaging system, which achieved an imaging depth of 11 mm and a 100 μm radial resolution. Because of the fast imaging speed of the full-ring-array PACT system, no animal motion artifacts were induced. The frame rate of the system was limited by the laser repetition rate (50 Hz). In addition to anatomical imaging of the blood vessels in the brain, we continuously monitored correlations between the two brain hemispheres in one of the coronal planes. Resting states in that coronal plane were measured before and after ligation of a neck artery to induce stroke.
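
    A minimal sketch of the inter-hemispheric correlation measurement mentioned above: given a stack of reconstructed coronal frames, correlate the mean signal in the left half of the field of view with that in the right half. The ROI choice (simple left/right image halves) and the synthetic test data are assumptions; the authors' reconstruction and functional-connectivity pipeline is not reproduced.

```python
# Sketch: Pearson correlation between left- and right-hemisphere signals
# extracted from a time series of reconstructed coronal frames.
import numpy as np

def hemispheric_correlation(frames: np.ndarray) -> float:
    """frames: (n_frames, height, width) reconstructed coronal images."""
    width = frames.shape[2]
    left = frames[:, :, : width // 2].mean(axis=(1, 2))   # mean signal, left half
    right = frames[:, :, width // 2 :].mean(axis=(1, 2))  # mean signal, right half
    return float(np.corrcoef(left, right)[0, 1])

# Usage with synthetic data standing in for 50 Hz PACT frames: both halves
# share a common "physiological" signal plus independent noise.
rng = np.random.default_rng(1)
shared = rng.normal(size=500)
frames = np.zeros((500, 64, 64))
frames[:, :, :32] = shared[:, None, None] + rng.normal(scale=0.5, size=(500, 64, 32))
frames[:, :, 32:] = shared[:, None, None] + rng.normal(scale=0.5, size=(500, 64, 32))
print(f"inter-hemispheric correlation: {hemispheric_correlation(frames):.2f}")
```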

  12. Interesting X-ray and computed tomography images of a cervical trauma patient.

    PubMed

    Kalkan, Havva; Emlik, Ganime Dilek; Sivri, Mesut

    2016-01-01

    Patients admitted to emergency departments with loss of consciousness following trauma have cervical vertebral fractures and spinal cord injuries in 5-10% of cases. Computed tomography (CT) and radiography are important for diagnosis. The aim of this report was to describe the interesting CT and radiography findings of a patient who had C3-4 dislocation and distraction, termed a shearing injury. C3 and C4 were separated, but there was no fracture or major vascular injury; the images themselves were striking. The NEXUS criteria and the Canadian C-Spine Rule were also referred to for clinical evaluation. Imaging modalities, especially reformatted CT images, make it easier to diagnose where and what the problem is.

  13. Chest Computed Tomographic Image Screening for Cystic Lung Diseases in Patients with Spontaneous Pneumothorax Is Cost Effective

    PubMed Central

    Langenderfer, Dale; McCormack, Francis X.; Schauer, Daniel P.; Eckman, Mark H.

    2017-01-01

    Rationale: Patients without a known history of lung disease presenting with a spontaneous pneumothorax are generally diagnosed as having primary spontaneous pneumothorax. However, occult diffuse cystic lung diseases such as Birt-Hogg-Dubé syndrome (BHD), lymphangioleiomyomatosis (LAM), and pulmonary Langerhans cell histiocytosis (PLCH) can also first present with a spontaneous pneumothorax, and their early identification by high-resolution computed tomographic (HRCT) chest imaging has implications for subsequent management. Objectives: The objective of our study was to evaluate the cost-effectiveness of HRCT chest imaging to facilitate early diagnosis of LAM, BHD, and PLCH. Methods: We constructed a Markov state-transition model to assess the cost-effectiveness of screening HRCT to facilitate early diagnosis of diffuse cystic lung diseases in patients presenting with an apparent primary spontaneous pneumothorax. Baseline data for the prevalence of BHD, LAM, and PLCH and rates of recurrent pneumothoraces in each of these diseases were derived from the literature. Costs were extracted from 2014 Medicare data. We compared a strategy of HRCT screening followed by pleurodesis in patients with LAM, BHD, or PLCH versus conventional management with no HRCT screening. Measurements and Main Results: In our base case analysis, screening for the presence of BHD, LAM, or PLCH in patients presenting with a spontaneous pneumothorax was cost effective, with a marginal cost-effectiveness ratio of $1,427 per quality-adjusted life-year gained. Sensitivity analysis showed that screening HRCT remained cost effective for a diffuse cystic lung disease prevalence as low as 0.01%. Conclusions: HRCT image screening for BHD, LAM, and PLCH in patients with apparent primary spontaneous pneumothorax is cost effective. Clinicians should consider performing a screening HRCT in patients presenting with apparent primary spontaneous pneumothorax. PMID:27737563
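
    A toy Markov cohort sketch, shown below, illustrates how an incremental cost-effectiveness ratio emerges from such a state-transition model. All states, transition probabilities, costs, and utilities are placeholders invented for illustration, so the printed figure will not reproduce the study's $1,427 per quality-adjusted life-year.

```python
# Toy two-state Markov cohort model comparing "no screening" vs. "screening
# HRCT + pleurodesis when indicated". All inputs are PLACEHOLDERS, not the
# study's Medicare-derived values.
import numpy as np

def run_markov(p_ptx, n_months=120, annual_discount=0.03):
    """Discounted cost and QALYs for a cohort starting fully in 'stable'."""
    # States: 0 = stable, 1 = pneumothorax this cycle (costlier, lower utility).
    P = np.array([[1 - p_ptx, p_ptx],    # stable -> stable / pneumothorax
                  [1 - p_ptx, p_ptx]])   # event resolves, same monthly risk
    state_cost = np.array([50.0, 2050.0])     # monthly cost per state
    state_utility = np.array([0.85, 0.60])    # utility per state
    occupancy = np.array([1.0, 0.0])
    d_month = annual_discount / 12.0
    cost = qaly = 0.0
    for t in range(n_months):
        d = 1.0 / (1.0 + d_month) ** t
        cost += d * (occupancy @ state_cost)
        qaly += d * (occupancy @ state_utility) / 12.0
        occupancy = occupancy @ P
    return cost, qaly

cost_a, qaly_a = run_markov(p_ptx=0.020)  # no screening: higher recurrence risk
cost_b, qaly_b = run_markov(p_ptx=0.005)  # screening + pleurodesis: lower risk
cost_b += 5000.0                          # placeholder up-front screening/pleurodesis cost
icer = (cost_b - cost_a) / (qaly_b - qaly_a)
print(f"ICER: ${icer:,.0f} per QALY gained")
```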

  14. Computer-aided global breast MR image feature analysis for prediction of tumor response to chemotherapy: performance assessment

    NASA Astrophysics Data System (ADS)

    Aghaei, Faranak; Tan, Maxine; Hollingsworth, Alan B.; Zheng, Bin; Cheng, Samuel

    2016-03-01

    Dynamic contrast-enhanced breast magnetic resonance imaging (DCE-MRI) has been used increasingly in breast cancer diagnosis and in assessing the efficacy of cancer treatment. In this study, we applied a computer-aided detection (CAD) scheme to automatically segment breast regions depicted on MR images and used kinetic image features computed from the global breast MR images acquired before neoadjuvant chemotherapy to build a new quantitative model for predicting the response of breast cancer patients to chemotherapy. To assess the performance and robustness of this new prediction model, an image dataset of breast MR images acquired from 151 cancer patients before neoadjuvant chemotherapy was retrospectively assembled. Among them, 63 patients had a "complete response" (CR) to chemotherapy, in which the enhanced contrast levels inside the tumor volume (pre-treatment) were reduced to the level of the normal enhanced background parenchymal tissues (post-treatment), while 88 patients had a "partial response" (PR), in which high contrast enhancement remained in the tumor regions after treatment. We analyzed the correlation among the 22 global kinetic image features and then selected a set of 4 optimal features. Applying an artificial neural network trained with the fusion of these 4 kinetic image features, the prediction model yielded an area under the ROC curve (AUC) of 0.83 ± 0.04. This study demonstrated that fusion of kinetic image features computed from global breast MR images, which avoids tumor segmentation that is often difficult and unreliable, can generate a useful clinical marker for predicting the efficacy of chemotherapy.
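
    The feature-selection and prediction step can be sketched as follows, using synthetic data in place of the 151-patient dataset. The greedy correlation filter and the small MLPClassifier are stand-ins chosen for illustration; the study's exact selection procedure and network architecture are not reproduced.

```python
# Sketch: reduce 22 correlated kinetic features to 4 and train a small neural
# network, reporting cross-validated AUC (synthetic data only).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients, n_features = 151, 22
X = rng.normal(size=(n_patients, n_features))
y = rng.integers(0, 2, size=n_patients)          # 0 = PR, 1 = CR (synthetic labels)

# Greedy filter: keep features whose pairwise correlation with already-selected
# features stays below 0.5, stopping at 4 features.
corr = np.abs(np.corrcoef(X, rowvar=False))
selected = []
for j in range(n_features):
    if all(corr[j, k] < 0.5 for k in selected):
        selected.append(j)
    if len(selected) == 4:
        break

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                    random_state=0))
auc_scores = cross_val_score(model, X[:, selected], y, cv=5, scoring="roc_auc")
print(f"selected features: {selected}, AUC = {auc_scores.mean():.2f}")
```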

  15. [Three-dimensional tooth model reconstruction based on fusion of dental computed tomography images and laser-scanned images].

    PubMed

    Zhang, Dongxia; Gan, Yangzhou; Xiong, Jing; Xia, Zeyang

    2017-02-01

    Complete three-dimensional (3D) tooth models provide essential information to assist orthodontists in diagnosis and treatment planning. Currently, 3D tooth models are mainly obtained by segmentation and reconstruction from dental computed tomography (CT) images. However, the accuracy of 3D tooth models reconstructed from dental CT images is low and not sufficient for Invisalign design. A further problem is that repeated dental CT scans at different intervals of orthodontic treatment expose patients to additional radiation. Hence, this paper proposes a method to reconstruct the tooth model by fusing dental CT images and laser-scanned images. A complete 3D tooth model is reconstructed through registration and fusion of the root reconstructed from dental CT images with the crown reconstructed from laser-scanned images. The crown of the complete 3D tooth model reconstructed with the proposed method has higher accuracy. Moreover, to reconstruct a complete 3D tooth model at each orthodontic treatment interval, only one pre-treatment CT scan is needed, and during the orthodontic treatment process only laser scanning is required. Therefore, radiation to patients can be reduced significantly.
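
    The registration step that joins the CT-derived root to the laser-scanned crown can be illustrated with a plain point-to-point ICP on synthetic point clouds, as below. The Kabsch-based pose estimate and nearest-neighbour matching are generic techniques chosen for the sketch, not the authors' registration and fusion pipeline.

```python
# Sketch: rigidly align a "laser-scanned crown" point cloud to a "CT crown"
# point cloud with point-to-point ICP, so root and crown could then be fused.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

def icp(src, dst, n_iter=30):
    """Iteratively match nearest neighbours and re-estimate the rigid pose."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(n_iter):
        _, idx = tree.query(cur)
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

# Usage: a synthetic "scan crown" slightly rotated and offset from the "CT crown".
rng = np.random.default_rng(2)
ct_crown = rng.normal(size=(500, 3))
theta = np.deg2rad(5)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
scan_crown = ct_crown @ Rz.T + np.array([0.3, -0.2, 0.1])
aligned = icp(scan_crown, ct_crown)
print(f"mean residual: {np.linalg.norm(aligned - ct_crown, axis=1).mean():.4f}")
```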

  16. Land classification of south-central Iowa from computer enhanced images

    NASA Technical Reports Server (NTRS)

    Lucas, J. R.; Taranik, J. V.; Billingsley, F. C. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. Enhanced LANDSAT imagery was most useful for land classification purposes because these images could be photographically printed at large scales such as 1:63,360. The ability to see individual picture elements was no hindrance as long as general image patterns could be discerned. Low-cost photographic processing systems for color printing proved effective in the utilization of computer-enhanced LANDSAT products for land classification purposes. The initial investment for this type of system was very low, ranging from $100 to $200 beyond a black-and-white photo lab. The technical expertise can be acquired by reading a color printing and processing manual.

  17. Combining computer modelling and cardiac imaging to understand right ventricular pump function.

    PubMed

    Walmsley, John; van Everdingen, Wouter; Cramer, Maarten J; Prinzen, Frits W; Delhaas, Tammo; Lumens, Joost

    2017-10-01

    Right ventricular (RV) dysfunction is a strong predictor of outcome in heart failure and is a key determinant of exercise capacity. Despite these crucial findings, the RV remains understudied in the clinical, experimental, and computer modelling literature. This review outlines how recent advances in using computer modelling and cardiac imaging synergistically help to understand RV function in health and disease. We begin by highlighting the complexity of interactions that make modelling the RV both challenging and necessary, and then summarize the multiscale modelling approaches used to date to simulate RV pump function in the context of these interactions. We go on to demonstrate how these modelling approaches in combination with cardiac imaging have improved understanding of RV pump function in pulmonary arterial hypertension, arrhythmogenic right ventricular cardiomyopathy, dyssynchronous heart failure and cardiac resynchronization therapy, hypoplastic left heart syndrome, and repaired tetralogy of Fallot. We conclude with a perspective on key issues to be addressed by computational models of the RV in the near future.

  18. Land classification of south-central Iowa from computer enhanced images

    NASA Technical Reports Server (NTRS)

    Lucas, J. R. (Principal Investigator); Taranik, J. V.; Billingsley, F. C.

    1976-01-01

    The author has identified the following significant results. The Iowa Geological Survey developed its own capability for producing color products from digitally enhanced LANDSAT data. Research showed that efficient production of enhanced images required full utilization of both computer and photographic enhancement procedures. The 29 August 1972 photo-optically enhanced color composite was more easily interpreted for land classification purposes than standard color composites.

  19. SU-F-I-45: An Automated Technique to Measure Image Contrast in Clinical CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, J; Abadi, E; Meng, B

    Purpose: To develop and validate an automated technique for measuring image contrast in chest computed tomography (CT) exams. Methods: An automated computer algorithm was developed to measure the distribution of Hounsfield units (HUs) inside four major organs: the lungs, liver, aorta, and bones. These organs were first segmented or identified using computer vision and image processing techniques. Regions of interest (ROIs) were automatically placed inside the lungs, liver, and aorta, and histograms of the HUs inside the ROIs were constructed. The mean and standard deviation of each histogram were computed for each CT dataset. Comparison of the mean and standard deviation of the HUs in the different organs provides different contrast values. The ROI for the bones is simply the segmentation mask of the bones. Since the histogram for bones does not follow a Gaussian distribution, the 25th and 75th percentiles were computed instead of the mean. The sensitivity and accuracy of the algorithm were investigated by comparing the automated measurements with manual measurements. Fifteen contrast-enhanced and fifteen non-contrast-enhanced chest CT clinical datasets were examined in the validation procedure. Results: The algorithm successfully measured the histograms of the four organs in both contrast-enhanced and non-contrast-enhanced chest CT exams. The automated measurements were in agreement with manual measurements. The algorithm has sufficient sensitivity, as indicated by the near-unity slope of the automated versus manual measurement plots. Furthermore, the algorithm has sufficient accuracy, as indicated by high coefficient of determination (R2) values ranging from 0.879 to 0.998. Conclusion: Patient-specific image contrast can be measured from clinical datasets. The algorithm can be run on both contrast-enhanced and non-enhanced clinical datasets. The method can be applied to automatically assess the contrast characteristics of clinical chest CT images and quantify
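
    The measurement step (after segmentation, which is not reproduced here) reduces to per-organ histogram statistics. The sketch below assumes organ masks are already available and uses synthetic HU values; the mask shapes and HU levels are invented for illustration.

```python
# Sketch: per-organ HU histogram statistics and a simple contrast value,
# mirroring the mean/std measurements and 25th/75th percentiles for bone.
import numpy as np

def organ_stats(hu: np.ndarray, masks: dict) -> dict:
    stats = {}
    for organ, mask in masks.items():
        vals = hu[mask]
        if organ == "bone":                     # non-Gaussian: use percentiles
            stats[organ] = {"p25": np.percentile(vals, 25),
                            "p75": np.percentile(vals, 75)}
        else:
            stats[organ] = {"mean": vals.mean(), "std": vals.std()}
    return stats

# Synthetic stand-in volume with rough HU levels per "organ" region.
rng = np.random.default_rng(0)
hu = rng.normal(-800, 50, size=(64, 64, 64))                  # background ~ lung
masks = {name: np.zeros_like(hu, bool) for name in ("lung", "liver", "aorta", "bone")}
masks["lung"][:, :32, :] = True
masks["liver"][:, 32:, :32] = True
masks["aorta"][:, 32:, 32:48] = True
masks["bone"][:, 32:, 48:] = True
hu[masks["liver"]] = rng.normal(60, 15, masks["liver"].sum())
hu[masks["aorta"]] = rng.normal(200, 30, masks["aorta"].sum())
hu[masks["bone"]] = rng.normal(700, 200, masks["bone"].sum())

s = organ_stats(hu, masks)
print(f"aorta-liver contrast: {s['aorta']['mean'] - s['liver']['mean']:.0f} HU")
```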

  20. A Review of High-Performance Computational Strategies for Modeling and Imaging of Electromagnetic Induction Data

    NASA Astrophysics Data System (ADS)

    Newman, Gregory A.

    2014-01-01

    Many geoscientific applications exploit electrostatic and electromagnetic fields to interrogate and map subsurface electrical resistivity, an important geophysical attribute for characterizing mineral, energy, and water resources. In complex three-dimensional geologies, where many of these resources remain to be found, resistivity mapping requires large-scale modeling and imaging capabilities, as well as the ability to treat significant data volumes, which can easily overwhelm single-core and modest multicore computing hardware. Treating such problems requires large-scale parallel computational resources, necessary for reducing the time to solution to a time frame acceptable to the exploration process. The recognition that significant parallel computing processes must be brought to bear on these problems gives rise to choices that must be made in parallel computing hardware and software. In this review, some of these choices are presented, along with the resulting trade-offs. We also discuss future trends in high-performance computing and the anticipated impact on electromagnetic (EM) geophysics. Topics discussed in this review article include a survey of parallel computing platforms, from graphics processing units to multicore CPUs with a fast interconnect, along with parallel solvers and associated solver libraries effective for inductive EM modeling and imaging.