Sample records for image simulation tools

  1. Simulation tools for analyzer-based x-ray phase contrast imaging system with a conventional x-ray source

    NASA Astrophysics Data System (ADS)

    Caudevilla, Oriol; Zhou, Wei; Stoupin, Stanislav; Verman, Boris; Brankov, J. G.

    2016-09-01

    Analyzer-based X-ray phase contrast imaging (ABI) belongs to a broader family of phase-contrast (PC) X-ray imaging modalities. Unlike conventional X-ray radiography, which measures only X-ray absorption, PC imaging also measures the deflection of X-rays induced by the object's refractive properties. It has been shown that refraction imaging provides better contrast when imaging soft tissue, which is of great interest in medical imaging applications. In this paper, we introduce a simulation tool specifically designed to simulate an analyzer-based X-ray phase contrast imaging system with a conventional polychromatic X-ray source. By utilizing ray tracing and basic physical principles of diffraction theory, our simulation tool can predict the X-ray beam profile shape, the energy content, and the total throughput (photon count) at the detector. In addition, we can evaluate the imaging system point-spread function for various system configurations.
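
    The ray-tracing idea can be illustrated with a toy Monte Carlo sketch: sample rays from a polychromatic source, give each a refraction-induced angular deviation, and weight it by a Gaussian analyzer rocking curve before histogramming at the detector. All names and numbers below are illustrative assumptions, not the authors' code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_rays = 100_000

    # Polychromatic source: sample photon energies (keV) from a crude flat spectrum.
    energies = rng.uniform(15.0, 40.0, n_rays)

    # The object's refractive properties deflect each ray by a small angle
    # (microradians); refraction scales roughly with 1/E^2, approximated here.
    deflection = rng.normal(0.0, 2.0, n_rays) * (25.0 / energies) ** 2  # urad

    # Analyzer crystal: transmission follows its rocking curve, modeled as a
    # Gaussian of a few microradians FWHM centered on the tuning angle.
    theta_tune = 1.0                      # urad offset from the Bragg peak
    sigma = 5.0 / 2.355                   # 5 urad FWHM -> standard deviation
    weights = np.exp(-0.5 * ((deflection - theta_tune) / sigma) ** 2)

    # Detector: histogram deflections weighted by analyzer transmission to get
    # the beam profile; position ~ deflection x throw (um per urad at 1 m).
    profile, edges = np.histogram(deflection, bins=200, weights=weights)
    print("total throughput (weighted photon count):", weights.sum())
    ```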

  2. Simulation of Medical Imaging Systems: Emission and Transmission Tomography

    NASA Astrophysics Data System (ADS)

    Harrison, Robert L.

    Simulation is an important tool in medical imaging research. In patient scans the true underlying anatomy and physiology are unknown. We have no way of knowing how various factors confound the data in a given scan: statistical noise, biological variability, patient motion, scattered radiation, dead time, and other data contaminants. Simulation allows us to isolate a single factor of interest, for instance when researchers perform multiple simulations of the same imaging situation to determine the effect of statistical noise or biological variability. Simulations are also increasingly used as a design optimization tool for tomographic scanners. This article gives an overview of the mechanics of emission and transmission tomography simulation, reviews some of the publicly available simulation tools, and discusses trade-offs between the accuracy and efficiency of simulations.

  3. Development and Validation of a Monte Carlo Simulation Tool for Multi-Pinhole SPECT

    PubMed Central

    Mok, Greta S. P.; Du, Yong; Wang, Yuchuan; Frey, Eric C.; Tsui, Benjamin M. W.

    2011-01-01

    Purpose: In this work, we developed and validated a Monte Carlo simulation (MCS) tool for investigation and evaluation of multi-pinhole (MPH) SPECT imaging. Procedures: This tool was based on a combination of the SimSET and MCNP codes. Photon attenuation and scatter in the object, as well as penetration and scatter through the collimator-detector, are modeled in this tool. It allows accurate and efficient simulation of MPH SPECT with focused pinhole apertures and user-specified photon energy, aperture material, and imaging geometry. The MCS method was validated by comparing the point response function (PRF), detection efficiency (DE), and image profiles obtained from point sources and phantom experiments. A prototype single-pinhole collimator and focused four- and five-pinhole collimators fitted on a small animal imager were used for the experimental validations. We also compared computational speed among various simulation tools for MPH SPECT, including SimSET-MCNP, MCNP, SimSET-GATE, and GATE, for simulating projections of a hot sphere phantom. Results: We found good agreement between the MCS and experimental results for PRF, DE, and image profiles, indicating the validity of the simulation method. The relative computational speeds for SimSET-MCNP, MCNP, SimSET-GATE, and GATE are 1:2.73:3.54:7.34, respectively, for 120-view simulations. We also demonstrated the application of this MCS tool in small animal imaging by generating a set of low-noise MPH projection data of a 3D digital mouse whole-body phantom. Conclusions: The new method is useful for studying MPH collimator designs, data acquisition protocols, image reconstructions, and compensation techniques. It also has great potential to be applied to modeling the collimator-detector response with penetration and scatter effects for MPH in quantitative reconstruction methods. PMID:19779896

  4. Enhancements to the Image Analysis Tool for Core Punch Experiments and Simulations (vs. 2014)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John Edward; Unal, Cetin

    A previous paper (Hogden & Unal, 2012, Image Analysis Tool for Core Punch Experiments and Simulations) described an image processing computer program developed at Los Alamos National Laboratory. This program has proven useful, so development has continued. In this paper we describe enhancements to the program as of 2014.

  5. Echo simulator with novel training and competency testing tools.

    PubMed

    Sheehan, Florence H; Otto, Catherine M; Freeman, Rosario V

    2013-01-01

    We developed and validated an echo simulator with three novel tools that facilitate training and enable quantitative, objective measurement of psychomotor as well as cognitive skill. First, the trainee can see original patient images - not synthetic or simulated images - that morph in real time as the mock transducer is manipulated on the mannequin. Second, augmented reality is used for Visual Guidance, a tool that assists the trainee in scanning by displaying the target organ in three dimensions (3D) together with the location of the current view plane and the plane of the anatomically correct view. Third, we introduce Image Matching, a tool that leverages the aptitude of the human brain for recognizing similarities and differences to help trainees learn to perform visual assessment of ultrasound images. Psychomotor competence is measured in terms of the view plane angle error. The construct validity of the simulator for competency testing was established by demonstrating its ability to discriminate between novices and experts.

  6. TU-A-17A-02: In Memoriam of Ben Galkin: Virtual Tools for Validation of X-Ray Breast Imaging Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, K; Bakic, P; Abbey, C

    2014-06-15

    This symposium will explore simulation methods for the preclinical evaluation of novel 3D and 4D x-ray breast imaging systems – the subject of AAPM task group TG234. Given the complex design of modern imaging systems, simulations offer significant advantages over long and costly clinical studies in terms of reproducibility, reduced radiation exposures, a known reference standard, and the capability for studying patient and disease subpopulations through appropriate choice of simulation parameters. Our focus will be on testing the realism of software anthropomorphic phantoms and virtual clinical trials tools developed for the optimization and validation of breast imaging systems. The symposium will review the state-of-the-science, as well as the advantages and limitations of various approaches to testing the realism of phantoms and simulated breast images. Approaches based upon the visual assessment of synthetic breast images by expert observers will be contrasted with approaches based upon comparing statistical properties between synthetic and clinical images. The role of observer models in the assessment of realism will be considered. Finally, an industry perspective will be presented, summarizing the role and importance of virtual tools and simulation methods in product development. The challenges and conditions that must be satisfied in order for computational modeling and simulation to play a significantly increased role in the design and evaluation of novel breast imaging systems will be addressed. Learning Objectives: Review the state-of-the-science in testing realism of software anthropomorphic phantoms and virtual clinical trials tools; Compare approaches based upon the visual assessment by expert observers vs. the analysis of statistical properties of synthetic images; Discuss the role of observer models in the assessment of realism; Summarize the industry perspective on virtual methods for breast imaging.

  7. Preliminary validation of a new methodology for estimating dose reduction protocols in neonatal chest computed radiographs

    NASA Astrophysics Data System (ADS)

    Don, Steven; Whiting, Bruce R.; Hildebolt, Charles F.; Sehnert, W. James; Ellinwood, Jacquelyn S.; Töpfer, Karin; Masoumzadeh, Parinaz; Kraus, Richard A.; Kronemer, Keith A.; Herman, Thomas; McAlister, William H.

    2006-03-01

    The risk of radiation exposure is greatest for pediatric patients and, thus, there is a great incentive to reduce the radiation dose used in diagnostic procedures for children to "as low as reasonably achievable" (ALARA). Testing of low-dose protocols presents a dilemma, as it is unethical to repeatedly expose patients to ionizing radiation in order to determine optimum protocols. To overcome this problem, we have developed a computed-radiography (CR) dose-reduction simulation tool that takes existing images and adds synthetic noise to create realistic images that correspond to images generated with lower doses. The objective of our study was to determine the extent to which simulated low-dose images corresponded with original (non-simulated) low-dose images. To make this determination, we created pneumothoraces of known volumes in five neonate cadavers and obtained images of the neonates at 10 mR, 1 mR and 0.1 mR (as measured at the cassette plate). The 10-mR exposures were considered "relatively noise-free" images. We used these 10-mR images and our simulation tool to create simulated 0.1- and 1-mR images. For the simulated and original images, we identified regions of interest (ROI) of the entire chest, free-in-air region, and liver. We compared the means and standard deviations of the ROI grey-scale values of the simulated and original images with paired t tests. We also had observers rate simulated and original images for image quality and for the presence or absence of pneumothoraces. There was no statistically significant difference in grey-scale-value means or standard deviations between simulated and original entire-chest ROI regions. The observer performance suggests that an exposure ≥0.2 mR is required to detect the presence or absence of pneumothoraces. These preliminary results indicate that the use of the simulation tool is promising for achieving ALARA exposures in children.
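
    The underlying principle of this kind of dose-reduction simulation can be sketched as follows: scale the quanta of a low-noise acquisition down to the target exposure and resample with Poisson statistics, so that relative noise grows as dose drops. This is a minimal sketch under a pure quantum-noise assumption; the function name and factors are hypothetical, and a real tool must also account for the noise already present in the starting image and for detector effects.

    ```python
    import numpy as np

    def simulate_low_dose(img_counts, full_mR, target_mR, rng=None):
        """Create a simulated lower-dose image from a low-noise 'full dose' image.

        img_counts : 2D array of detected quanta for the full_mR acquisition.
        The quanta are scaled to the target exposure and re-sampled from a
        Poisson distribution, so the relative noise grows as dose drops.
        """
        rng = rng or np.random.default_rng()
        scale = target_mR / full_mR
        expected = img_counts * scale            # mean quanta at the lower dose
        noisy = rng.poisson(expected)            # counting (quantum) noise
        return noisy / scale                     # rescale to the original grey range

    # Example: a flat 10 mR field of ~10,000 quanta/pixel reduced to 0.1 mR.
    rng = np.random.default_rng(1)
    full = rng.poisson(10_000, size=(256, 256)).astype(float)
    low = simulate_low_dose(full, full_mR=10.0, target_mR=0.1, rng=rng)
    print(full.std() / full.mean(), low.std() / low.mean())  # noise increases
    ```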

  8. A Web simulation of medical image reconstruction and processing as an educational tool.

    PubMed

    Papamichail, Dimitrios; Pantelis, Evaggelos; Papagiannis, Panagiotis; Karaiskos, Pantelis; Georgiou, Evangelos

    2015-02-01

    Web educational resources integrating interactive simulation tools provide students with an in-depth understanding of the medical imaging process. The aim of this work was the development of a purely Web-based, open access, interactive application, as an ancillary learning tool in graduate and postgraduate medical imaging education, including a systematic evaluation of learning effectiveness. The pedagogic content of the educational Web portal was designed to cover the basic concepts of medical imaging reconstruction and processing, through the use of active learning and motivation, including learning simulations that closely resemble actual tomographic imaging systems. The user can implement image reconstruction and processing algorithms under a single user interface and manipulate various factors to understand the impact on image appearance. A questionnaire for pre- and post-training self-assessment was developed and integrated in the online application. The developed Web-based educational application introduces the trainee to the basic concepts of imaging through textual and graphical information and proceeds with a learning-by-doing approach. Trainees are encouraged to participate in a pre- and post-training questionnaire to assess their knowledge gain. Initial feedback from a group of graduate medical students showed that the developed course was considered effective and well structured. An e-learning application on medical imaging integrating interactive simulation tools was developed and assessed in our institution.

  9. PICASSO: an end-to-end image simulation tool for space and airborne imaging systems

    NASA Astrophysics Data System (ADS)

    Cota, Steve A.; Bell, Jabin T.; Boucher, Richard H.; Dutton, Tracy E.; Florio, Chris J.; Franz, Geoffrey A.; Grycewicz, Thomas J.; Kalman, Linda S.; Keller, Robert A.; Lomheim, Terrence S.; Paulson, Diane B.; Willkinson, Timothy S.

    2008-08-01

    The design of any modern imaging system is the end result of many trade studies, each seeking to optimize image quality within real world constraints such as cost, schedule and overall risk. Image chain analysis - the prediction of image quality from fundamental design parameters - is an important part of this design process. At The Aerospace Corporation we have been using a variety of image chain analysis tools for many years, the Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) among them. In this paper we describe our PICASSO tool, showing how, starting with a high quality input image and hypothetical design descriptions representative of the current state of the art in commercial imaging satellites, PICASSO can generate standard metrics of image quality in support of the decision processes of designers and program managers alike.

  10. PICASSO: an end-to-end image simulation tool for space and airborne imaging systems

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Bell, Jabin T.; Boucher, Richard H.; Dutton, Tracy E.; Florio, Christopher J.; Franz, Geoffrey A.; Grycewicz, Thomas J.; Kalman, Linda S.; Keller, Robert A.; Lomheim, Terrence S.; Paulson, Diane B.; Wilkinson, Timothy S.

    2010-06-01

    The design of any modern imaging system is the end result of many trade studies, each seeking to optimize image quality within real world constraints such as cost, schedule and overall risk. Image chain analysis - the prediction of image quality from fundamental design parameters - is an important part of this design process. At The Aerospace Corporation we have been using a variety of image chain analysis tools for many years, the Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) among them. In this paper we describe our PICASSO tool, showing how, starting with a high quality input image and hypothetical design descriptions representative of the current state of the art in commercial imaging satellites, PICASSO can generate standard metrics of image quality in support of the decision processes of designers and program managers alike.

  11. Assessment of COTS IR image simulation tools for ATR development

    NASA Astrophysics Data System (ADS)

    Seidel, Heiko; Stahl, Christoph; Bjerkeli, Frode; Skaaren-Fystro, Paal

    2005-05-01

    Following the tendency toward increased use of imaging sensors in military aircraft, future fighter pilots will need onboard artificial intelligence, e.g. ATR, to aid them in image interpretation and target designation. The European Aeronautic Defence and Space Company (EADS) in Germany has developed an advanced method for automatic target recognition (ATR) which is based on adaptive neural networks. This ATR method can assist the crew of military aircraft like the Eurofighter in sensor image monitoring and thereby reduce the workload in the cockpit and increase mission efficiency. The EADS ATR approach can be adapted for imagery from visual, infrared and SAR sensors because of the training-based classifiers of the ATR method. For the optimal adaptation of these classifiers they have to be trained with appropriate and sufficient image data. The training images must show the target objects from different aspect angles, ranges, environmental conditions, etc. Incomplete training sets lead to a degradation of classifier performance. Additionally, ground truth information, i.e. scenario conditions like class type and position of targets, is necessary for the optimal adaptation of the ATR method. In summer 2003, EADS started a cooperation with Kongsberg Defence & Aerospace (KDA) of Norway. The EADS/KDA approach is to provide additional image data sets for training-based ATR through IR image simulation. The joint study aims to investigate the benefits of enhancing incomplete training sets for classifier adaptation with simulated synthetic imagery. EADS/KDA identified the requirements of a commercial-off-the-shelf IR simulation tool capable of delivering appropriate synthetic imagery for ATR development. A market study of available IR simulation tools and suppliers was performed. The most promising tool was then benchmarked according to several criteria, e.g. thermal emission model, sensor model, target model, non-radiometric image features, resulting in a recommendation. The synthetic image data used for the investigation were generated with the recommended tool. Within the scope of this study, ATR performance on IR imagery using classifiers trained on real, synthetic and mixed image sets was evaluated. The performance of the adapted classifiers is assessed using recorded IR imagery with known ground truth, and recommendations are given for the use of COTS IR image simulation tools for ATR development.

  12. The design of real time infrared image generation software based on Creator and Vega

    NASA Astrophysics Data System (ADS)

    Wang, Rui-feng; Wu, Wei-dong; Huo, Jun-xiu

    2013-09-01

    Considering the requirements for highly realistic, real-time dynamic infrared images in infrared image simulation, a method for designing a real-time infrared image simulation application on the VC++ platform is proposed, based on the visual simulation software Creator and Vega. The functions of Creator are introduced briefly, and the main features of the Vega development environment are analyzed. Methods for modeling infrared targets and backgrounds are presented, the flow chart of the development process for real-time IR image generation software is given, and the functions of the TMM Tool, the MAT Tool, and the sensor module are explained; the real-time performance of the software is also addressed.

  13. Simbol-X Formation Flight and Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Civitani, M.; Djalal, S.; Le Duigou, J. M.; La Marle, O.; Chipaux, R.

    2009-05-01

    Simbol-X is the first operational mission relying on two satellites flying in formation. The dynamics of the telescope, due to the formation flight concept, raises a variety of problems, such as image reconstruction, that can be better evaluated via simulation tools. We present here the first results obtained with Simulos, a simulation tool aimed at studying the relative navigation of the spacecraft and the weight of the different parameters in image reconstruction and telescope performance evaluation. The simulation relies on attitude and formation flight sensor models, formation flight dynamics and control, a mirror model and a focal plane model, while the image reconstruction is based on the Line of Sight (LOS) concept.

  14. Integration of Irma tactical scene generator into directed-energy weapon system simulation

    NASA Astrophysics Data System (ADS)

    Owens, Monte A.; Cole, Madison B., III; Laine, Mark R.

    2003-08-01

    Integrated high-fidelity physics-based simulations that include engagement models, image generation, electro-optical hardware models and control system algorithms have previously been developed by Boeing-SVS for various tracking and pointing systems. These simulations, however, had always used images with featureless or random backgrounds and simple target geometries. With the requirement to engage tactical ground targets in the presence of cluttered backgrounds, a new type of scene generation tool was required to fully evaluate system performance in this challenging environment. To answer this need, Irma was integrated into the existing suite of Boeing-SVS simulation tools, allowing scene generation capabilities with unprecedented realism. Irma is a US Air Force research tool used for high-resolution rendering and prediction of target and background signatures. The MATLAB/Simulink-based simulation achieves closed-loop tracking by running track algorithms on the Irma-generated images, processing the track errors through optical control algorithms, and moving simulated electro-optical elements. The geometry of these elements determines the sensor orientation with respect to the Irma database containing the three-dimensional background and target models. This orientation is dynamically passed to Irma through a Simulink S-function to generate the next image. This integrated simulation provides a test-bed for development and evaluation of tracking and control algorithms against representative images including complex background environments and realistic targets calibrated using field measurements.

  15. Realistic wave-optics simulation of X-ray phase-contrast imaging at a human scale

    PubMed Central

    Sung, Yongjin; Segars, W. Paul; Pan, Adam; Ando, Masami; Sheppard, Colin J. R.; Gupta, Rajiv

    2015-01-01

    X-ray phase-contrast imaging (XPCI) can dramatically improve soft tissue contrast in X-ray medical imaging. Despite worldwide efforts to develop novel XPCI systems, a numerical framework to rigorously predict the performance of a clinical XPCI system at a human scale is not yet available. We have developed such a tool by combining a numerical anthropomorphic phantom defined with non-uniform rational B-splines (NURBS) and a wave optics-based simulator that can accurately capture the phase-contrast signal from a human-scaled numerical phantom. Using a synchrotron-based, high-performance XPCI system, we provide qualitative comparison between simulated and experimental images. Our tool can be used to simulate the performance of XPCI on various disease entities and compare proposed XPCI systems in an unbiased manner. PMID:26169570
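
    The heart of such a wave-optics simulator is free-space propagation of the complex field behind the object. Below is a minimal angular-spectrum propagation sketch, not the authors' implementation; the wavelength, pixel pitch, and toy phase object are all illustrative assumptions.

    ```python
    import numpy as np

    def angular_spectrum(u0, wavelength, dx, z):
        """Propagate a complex field u0 a distance z through free space.

        u0: 2D complex field sampled at pitch dx (meters); wavelength in meters.
        """
        ny, nx = u0.shape
        fx = np.fft.fftfreq(nx, d=dx)
        fy = np.fft.fftfreq(ny, d=dx)
        FX, FY = np.meshgrid(fx, fy)
        k = 2 * np.pi / wavelength
        # Free-space transfer function; evanescent components are suppressed.
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        kz = k * np.sqrt(np.maximum(arg, 0.0))
        H = np.where(arg > 0, np.exp(1j * z * kz), 0)
        return np.fft.ifft2(np.fft.fft2(u0) * H)

    # Toy phase object: a Gaussian phase bump in a ~60 keV plane wave.
    wavelength = 2.07e-11          # meters
    dx = 1e-6                      # 1 um pixels
    x = (np.arange(1024) - 512) * dx
    X, Y = np.meshgrid(x, x)
    phase = -5.0 * np.exp(-(X**2 + Y**2) / (2 * (50e-6) ** 2))  # radians
    u0 = np.exp(1j * phase)                                      # pure phase object
    u1 = angular_spectrum(u0, wavelength, dx, z=1.0)
    intensity = np.abs(u1) ** 2    # edge enhancement appears at the bump's rim
    ```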

  16. Realistic wave-optics simulation of X-ray phase-contrast imaging at a human scale

    NASA Astrophysics Data System (ADS)

    Sung, Yongjin; Segars, W. Paul; Pan, Adam; Ando, Masami; Sheppard, Colin J. R.; Gupta, Rajiv

    2015-07-01

    X-ray phase-contrast imaging (XPCI) can dramatically improve soft tissue contrast in X-ray medical imaging. Despite worldwide efforts to develop novel XPCI systems, a numerical framework to rigorously predict the performance of a clinical XPCI system at a human scale is not yet available. We have developed such a tool by combining a numerical anthropomorphic phantom defined with non-uniform rational B-splines (NURBS) and a wave optics-based simulator that can accurately capture the phase-contrast signal from a human-scaled numerical phantom. Using a synchrotron-based, high-performance XPCI system, we provide qualitative comparison between simulated and experimental images. Our tool can be used to simulate the performance of XPCI on various disease entities and compare proposed XPCI systems in an unbiased manner.

  17. The AAO fiber instrument data simulator

    NASA Astrophysics Data System (ADS)

    Goodwin, Michael; Farrell, Tony; Smedley, Scott; Heald, Ron; Heijmans, Jeroen; De Silva, Gayandhi; Carollo, Daniela

    2012-09-01

    The fiber instrument data simulator is an in-house software tool that simulates detector images of fiber-fed spectrographs developed by the Australian Astronomical Observatory (AAO). In addition to helping validate the instrument designs, the resulting simulated images are used to develop the required data reduction software. Example applications that have benefited from the tool usage are the HERMES and SAMI instrumental projects for the Anglo-Australian Telescope (AAT). Given the sophistication of these projects an end-to-end data simulator that accurately models the predicted detector images is required. The data simulator encompasses all aspects of the transmission and optical aberrations of the light path: from the science object, through the atmosphere, telescope, fibers, spectrograph and finally the camera detectors. The simulator runs under a Linux environment that uses pre-calculated information derived from ZEMAX models and processed data from MATLAB. In this paper, we discuss the aspects of the model, software, example simulations and verification.

  18. Radiometry simulation within the end-to-end simulation tool SENSOR

    NASA Astrophysics Data System (ADS)

    Wiest, Lorenz; Boerner, Anko

    2001-02-01

    An end-to-end simulation is a valuable tool for sensor system design, development, optimization, testing, and calibration. This contribution describes the radiometry module of the end-to-end simulation tool SENSOR. It features MODTRAN 4.0-based look-up tables in conjunction with a cache-based multilinear interpolation algorithm to speed up radiometry calculations. It employs a linear reflectance parameterization to reduce look-up table size, considers effects due to the topology of a digital elevation model (surface slope, sky view factor) and uses a reflectance class feature map to assign Lambertian and BRDF reflectance properties to the digital elevation model. The overall consistency of the radiometry part is demonstrated by good agreement between ATCOR 4-retrieved reflectance spectra of a simulated digital image cube and the original reflectance spectra used to simulate this image data cube.
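
    The look-up-table-plus-multilinear-interpolation scheme, together with the linear reflectance parameterization, can be sketched with SciPy's grid interpolator. The grid axes and radiance values below are placeholders, not SENSOR's actual MODTRAN tables.

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Placeholder LUT axes: water vapor, visibility, view zenith, surface reflectance.
    wv   = np.linspace(0.5, 4.0, 8)     # g/cm^2
    vis  = np.linspace(5, 80, 6)        # km
    vza  = np.linspace(0, 40, 5)        # degrees
    refl = np.array([0.0, 0.5, 1.0])    # linear reflectance parameterization

    # Pretend these radiances came from MODTRAN runs at the grid nodes.
    grid = np.random.default_rng(2).random((8, 6, 5, 3))

    interp = RegularGridInterpolator((wv, vis, vza, refl), grid)

    # At-sensor radiance for one pixel: because radiance is linear in surface
    # reflectance, two LUT queries fix the linear coefficients a and b.
    p0 = interp([[2.1, 23.0, 12.0, 0.0]])[0]   # path-radiance term
    p1 = interp([[2.1, 23.0, 12.0, 1.0]])[0]
    a, b = p0, p1 - p0
    rho = 0.31                                  # pixel reflectance
    radiance = a + b * rho
    ```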

  19. Fast scattering simulation tool for multi-energy x-ray imaging

    NASA Astrophysics Data System (ADS)

    Sossin, A.; Tabary, J.; Rebuffel, V.; Létang, J. M.; Freud, N.; Verger, L.

    2015-12-01

    A combination of Monte Carlo (MC) and deterministic approaches was employed as a means of creating a simulation tool capable of providing energy resolved x-ray primary and scatter images within a reasonable time interval. Libraries of Sindbad, a previously developed x-ray simulation software, were used in the development. The scatter simulation capabilities of the tool were validated through simulation with the aid of GATE and through experimentation by using a spectrometric CdTe detector. A simple cylindrical phantom with cavities and an aluminum insert was used. Cross-validation with GATE showed good agreement with a global spatial error of 1.5% and a maximum scatter spectrum error of around 6%. Experimental validation also supported the accuracy of the simulations obtained from the developed software with a global spatial error of 1.8% and a maximum error of around 8.5% in the scatter spectra.

  20. PICASSO: an end-to-end image simulation tool for space and airborne imaging systems II. Extension to the thermal infrared: equations and methods

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Lomheim, Terrence S.; Florio, Christopher J.; Harbold, Jeffrey M.; Muto, B. Michael; Schoolar, Richard B.; Wintz, Daniel T.; Keller, Robert A.

    2011-10-01

    In a previous paper in this series, we described how The Aerospace Corporation's Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) tool may be used to model space and airborne imaging systems operating in the visible to near-infrared (VISNIR). PICASSO is a systems-level tool, representative of a class of such tools used throughout the remote sensing community. It is capable of modeling systems over a wide range of fidelity, anywhere from conceptual design level (where it can serve as an integral part of the systems engineering process) to as-built hardware (where it can serve as part of the verification process). In the present paper, we extend the discussion of PICASSO to the modeling of Thermal Infrared (TIR) remote sensing systems, presenting the equations and methods necessary to modeling in that regime.

  1. Uranus: a rapid prototyping tool for FPGA embedded computer vision

    NASA Astrophysics Data System (ADS)

    Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.

    2007-01-01

    The starting point for all successful system development is the simulation. Performing high level simulation of a system can help to identify, insolate and fix design problems. This work presents Uranus, a software tool for simulation and evaluation of image processing algorithms with support to migrate them to an FPGA environment for algorithm acceleration and embedded processes purposes. The tool includes an integrated library of previous coded operators in software and provides the necessary support to read and display image sequences as well as video files. The user can use the previous compiled soft-operators in a high level process chain, and code his own operators. Additional to the prototyping tool, Uranus offers FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing that are connected with a PowerPC based system. The Uranus environment is intended for rapid prototyping of machine vision and the migration to FPGA accelerator platform, and it is distributed for academic purposes.

  2. Simulation of High-Resolution Magnetic Resonance Images on the IBM Blue Gene/L Supercomputer Using SIMRI

    DOE PAGES

    Baum, K. G.; Menezes, G.; Helguera, M.

    2011-01-01

    Medical imaging system simulators are tools that provide a means to evaluate system architecture and create artificial image sets that are appropriate for specific applications. We have modified SIMRI, a Bloch equation-based magnetic resonance image simulator, in order to successfully generate high-resolution 3D MR images of the Montreal brain phantom using Blue Gene/L systems. Results show that redistribution of the workload allows an anatomically accurate 256³ voxel spin-echo simulation in less than 5 hours when executed on an 8192-node partition of a Blue Gene/L system.
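
    A SIMRI-style simulator advances each isochromat's magnetization by alternating rotation about the effective field and T1/T2 relaxation. Below is a minimal single-isochromat sketch of one Bloch step, a simplification and not the SIMRI code; all parameter values are illustrative.

    ```python
    import numpy as np

    def bloch_step(M, dt, B1, dB0, T1, T2, M0=1.0, gamma=2.675e8):
        """One time step of the Bloch equation for magnetization M = [Mx, My, Mz].

        Rotation about the effective field (B1 along x, off-resonance dB0 along z),
        followed by T1/T2 relaxation. Units: Tesla, seconds, rad/s/T.
        """
        Beff = np.array([B1, 0.0, dB0])
        b = np.linalg.norm(Beff)
        if b > 0:
            axis = Beff / b
            ang = -gamma * b * dt                 # precession angle this step
            c, s = np.cos(ang), np.sin(ang)
            # Rodrigues' rotation formula about the effective-field axis.
            M = M * c + np.cross(axis, M) * s + axis * np.dot(axis, M) * (1 - c)
        E1, E2 = np.exp(-dt / T1), np.exp(-dt / T2)
        return np.array([M[0] * E2, M[1] * E2, M0 + (M[2] - M0) * E1])

    # 90-degree hard pulse (gamma * B1 * t ~ pi/2) on a water-like isochromat.
    M = np.array([0.0, 0.0, 1.0])
    B1 = 5.9e-6                                   # Tesla, ~90 deg over 1 ms
    for _ in range(1000):
        M = bloch_step(M, 1e-6, B1, 0.0, T1=1.0, T2=0.1)
    print(M)  # magnetization is now mostly transverse
    ```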

  3. Simulation of High-Resolution Magnetic Resonance Images on the IBM Blue Gene/L Supercomputer Using SIMRI.

    PubMed

    Baum, K G; Menezes, G; Helguera, M

    2011-01-01

    Medical imaging system simulators are tools that provide a means to evaluate system architecture and create artificial image sets that are appropriate for specific applications. We have modified SIMRI, a Bloch equation-based magnetic resonance image simulator, in order to successfully generate high-resolution 3D MR images of the Montreal brain phantom using Blue Gene/L systems. Results show that redistribution of the workload allows an anatomically accurate 256³ voxel spin-echo simulation in less than 5 hours when executed on an 8192-node partition of a Blue Gene/L system.

  4. Simulation of anisoplanatic imaging through optical turbulence using numerical wave propagation with new validation analysis

    NASA Astrophysics Data System (ADS)

    Hardie, Russell C.; Power, Jonathan D.; LeMaster, Daniel A.; Droege, Douglas R.; Gladysz, Szymon; Bose-Pillai, Santasri

    2017-07-01

    We present a numerical wave propagation method for simulating imaging of an extended scene under anisoplanatic conditions. While isoplanatic simulation is relatively common, few tools are specifically designed for simulating the imaging of extended scenes under anisoplanatic conditions. We provide a complete description of the proposed simulation tool, including the wave propagation method used. Our approach computes an array of point spread functions (PSFs) for a two-dimensional grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. The degradation includes spatially varying warping and blurring. To produce the PSF array, we generate a series of extended phase screens. Simulated point sources are numerically propagated from an array of positions on the object plane, through the phase screens, and ultimately to the focal plane of the simulated camera. Note that the optical path for each PSF will be different, and thus, pass through a different portion of the extended phase screens. These different paths give rise to a spatially varying PSF to produce anisoplanatic effects. We use a method for defining the individual phase screen statistics that we have not seen used in previous anisoplanatic simulations. We also present a validation analysis. In particular, we compare simulated outputs with the theoretical anisoplanatic tilt correlation and a derived differential tilt variance statistic. This is in addition to comparing the long- and short-exposure PSFs and isoplanatic angle. We believe this analysis represents the most thorough validation of an anisoplanatic simulation to date. The current work is also unique in that we simulate and validate both constant and varying Cn²(z) profiles. Furthermore, we simulate sequences with both temporally independent and temporally correlated turbulence effects. Temporal correlation is introduced by generating even larger extended phase screens and translating this block of screens in front of the propagation area. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. Thus, we think this tool can be used effectively to study optical anisoplanatic turbulence and to aid in the development of image restoration methods.
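
    The spatially varying weighted-sum operation described above can be sketched as bilinear blending of PSF-convolved copies of the ideal image. The grid layout, helper names, and Gaussian PSFs below are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def anisoplanatic_blur(ideal, psf_grid, centers):
        """Blur 'ideal' with a grid of PSFs using bilinear (tent) blending.

        psf_grid: dict mapping (i, j) grid index -> 2D PSF.
        centers : (gy, gx) arrays of the grid points' pixel coordinates.
        Each grid PSF is applied to the whole image; results are blended with
        spatially varying weights so each pixel sees (mostly) its local PSF.
        """
        gy, gx = centers
        H, W = ideal.shape
        yy, xx = np.mgrid[0:H, 0:W]
        out = np.zeros_like(ideal, dtype=float)
        wsum = np.zeros_like(out)
        for (i, j), psf in psf_grid.items():
            blurred = fftconvolve(ideal, psf, mode="same")
            # Tent (bilinear) weight centered on this grid point.
            wy = np.clip(1 - np.abs(yy - gy[i]) / (gy[1] - gy[0]), 0, None)
            wx = np.clip(1 - np.abs(xx - gx[j]) / (gx[1] - gx[0]), 0, None)
            w = wy * wx
            out += w * blurred
            wsum += w
        return out / np.maximum(wsum, 1e-12)

    # Example: a 2x2 PSF grid of Gaussians with spatially varying widths.
    def gauss_psf(sigma, size=21):
        ax = np.arange(size) - size // 2
        g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
        return g / g.sum()

    ideal = np.zeros((128, 128)); ideal[::16, ::16] = 1.0   # point-source grid
    centers = (np.array([0, 127]), np.array([0, 127]))
    grid = {(i, j): gauss_psf(1.0 + 2.0 * (i + j)) for i in range(2) for j in range(2)}
    img = anisoplanatic_blur(ideal, grid, centers)
    ```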

  5. [The interactive neuroanatomical simulation and practical application of frontotemporal transsylvian exposure in neurosurgery].

    PubMed

    Balogh, Attila; Czigléczki, Gábor; Papal, Zsolt; Preul, Mark C; Banczerowski, Péter

    2014-11-30

    There is an increased need for new digital education tools in neurosurgical training. Illustrated textbooks offer anatomic and technical reference but do not substitute for the hands-on experience provided by surgery or cadaver dissection. The limited availability of cadaver dissections has increased the need for simulation tools. We explored simulation technology for producing virtual reality-like reconstructions of simulated surgical approaches on cadavers; the practical application of the simulation tool is presented through the frontotemporal transsylvian exposure. The dissections were performed on two cadaveric heads. Arteries and veins were prepared and injected with colorful silicone rubber. The heads were rigidly fixed in a Mayfield headholder. A robotic microscope with two digital cameras, using an inverted-cone method of image acquisition, captured images around a pivot point at several stages of the dissections. Multilayered, high-resolution images were built into an interactive 4D environment by custom-developed software. We have developed the simulation module of the frontotemporal transsylvian approach. The virtual specimens can be rotated or tilted to any selected angle and examined from different surgical perspectives at any stage of dissection. Important surgical issues such as appropriate head positioning or surgical maneuvers to expose deep-seated neuroanatomic structures can be simulated and studied by using the module. The simulation module of the frontotemporal transsylvian exposure helps to examine the effect of head positioning on the visibility of deep-seated neuroanatomic structures and to study the surgical maneuvers required to achieve their optimal exposure. The simulation program is a powerful tool for studying issues of preoperative planning and is well suited for neurosurgical training.

  6. Monte Carlo simulation tool for online treatment monitoring in hadrontherapy with in-beam PET: A patient study.

    PubMed

    Fiorina, E; Ferrero, V; Pennazio, F; Baroni, G; Battistoni, G; Belcari, N; Cerello, P; Camarlinghi, N; Ciocca, M; Del Guerra, A; Donetti, M; Ferrari, A; Giordanengo, S; Giraudo, G; Mairani, A; Morrocchi, M; Peroni, C; Rivetti, A; Da Rocha Rolo, M D; Rossi, S; Rosso, V; Sala, P; Sportelli, G; Tampellini, S; Valvo, F; Wheadon, R; Bisogni, M G

    2018-05-07

    Hadrontherapy is a method for treating cancer with very targeted dose distributions and enhanced radiobiological effects. To fully exploit these advantages, in vivo range monitoring systems are required. These devices measure, preferably during the treatment, the secondary radiation generated by the beam-tissue interactions. However, since correlation of the secondary radiation distribution with the dose is not straightforward, Monte Carlo (MC) simulations are very important for treatment quality assessment. The INSIDE project constructed an in-beam PET scanner to detect signals generated by the positron-emitting isotopes resulting from projectile-target fragmentation. In addition, a FLUKA-based simulation tool was developed to predict the corresponding reference PET images using a detailed scanner model. The INSIDE in-beam PET was used to monitor two consecutive proton treatment sessions on a patient at the Italian Center for Oncological Hadrontherapy (CNAO). The reconstructed PET images were updated every 10 s providing a near real-time quality assessment. By half-way through the treatment, the statistics of the measured PET images were already significant enough to be compared with the simulations with average differences in the activity range less than 2.5 mm along the beam direction. Without taking into account any preferential direction, differences within 1 mm were found. In this paper, the INSIDE MC simulation tool is described and the results of the first in vivo agreement evaluation are reported. These results have justified a clinical trial, in which the MC simulation tool will be used on a daily basis to study the compliance tolerances between the measured and simulated PET images. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  7. Evaluation of the BreastSimulator software platform for breast tomography

    NASA Astrophysics Data System (ADS)

    Mettivier, G.; Bliznakova, K.; Sechopoulos, I.; Boone, J. M.; Di Lillo, F.; Sarno, A.; Castriconi, R.; Russo, P.

    2017-08-01

    The aim of this work was the evaluation of the software BreastSimulator, a breast x-ray imaging simulation software, as a tool for the creation of 3D uncompressed breast digital models and for the simulation and the optimization of computed tomography (CT) scanners dedicated to the breast. Eight 3D digital breast phantoms were created with glandular fractions in the range 10%-35%. The models are characterised by different sizes and realistic modelled anatomical features. X-ray CT projections were simulated for a dedicated cone-beam CT scanner and reconstructed with the FDK algorithm. X-ray projection images were simulated for 5 mono-energetic (27, 32, 35, 43 and 51 keV) and 3 poly-energetic x-ray spectra typically employed in current CT scanners dedicated to the breast (49, 60, or 80 kVp). Clinical CT images acquired from two different clinical breast CT scanners were used for comparison purposes. The quantitative evaluation included calculation of the power-law exponent, β, from simulated and real breast tomograms, based on the power spectrum fitted, as a function of the spatial frequency f, with a function of the form S(f) = α/f^β. The breast models were validated by comparison against clinical breast CT and published data. We found that the calculated β coefficients were close to those of clinical CT data from a dedicated breast CT scanner and reported data in the literature. In evaluating the software package BreastSimulator to generate breast models suitable for use with breast CT imaging, we found that the breast phantoms produced with the software tool can reproduce the anatomical structure of real breasts, as evaluated by calculating the β exponent from the power spectral analysis of simulated images. As such, this research tool might contribute considerably to the further development, testing and optimisation of breast CT imaging techniques.
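
    The power-law fit is a straight-line fit in log-log coordinates, since S(f) = α/f^β implies log S = log α − β log f. Below is a minimal sketch of estimating β from the radially binned 2D power spectrum; the fit band and pixel pitch are illustrative assumptions.

    ```python
    import numpy as np

    def power_law_beta(image, pixel_mm=0.1, fmin=0.1, fmax=2.0):
        """Estimate beta in S(f) = alpha / f**beta from one 2D image slice."""
        img = image - image.mean()
        ps = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        ny, nx = img.shape
        fy = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_mm))
        fx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_mm))
        FR = np.hypot(*np.meshgrid(fx, fy))          # radial frequency (cycles/mm)
        band = (FR > fmin) & (FR < fmax)             # frequency band for the fit
        # Linear fit in log-log space: log S = log alpha - beta * log f.
        slope, intercept = np.polyfit(np.log(FR[band]), np.log(ps[band]), 1)
        return -slope

    # Synthetic check: noise shaped to a 1/f^3 power spectrum should give beta ~ 3.
    rng = np.random.default_rng(3)
    white = rng.normal(size=(256, 256))
    f = np.hypot(*np.meshgrid(np.fft.fftfreq(256), np.fft.fftfreq(256)))
    f[0, 0] = 1.0                                    # avoid division by zero at DC
    shaped = np.real(np.fft.ifft2(np.fft.fft2(white) / f ** 1.5))
    print(power_law_beta(shaped, pixel_mm=1.0, fmin=0.02, fmax=0.3))
    ```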

  8. X-ray system simulation software tools for radiology and radiography education.

    PubMed

    Kengyelics, Stephen M; Treadgold, Laura A; Davies, Andrew G

    2018-02-01

    To develop x-ray simulation software tools to support delivery of radiological science education for a range of learning environments and audiences including individual study, lectures, and tutorials. Two software tools were developed; one simulated x-ray production for a simple two-dimensional radiographic system geometry comprising an x-ray source, beam filter, test object and detector. The other simulated the acquisition and display of two-dimensional radiographic images of complex three-dimensional objects using a ray-casting algorithm through three-dimensional mesh objects. Both tools were intended to be simple to use, produce results accurate enough to be useful for educational purposes, and have an acceptable simulation time on modest computer hardware. The radiographic factors and acquisition geometry could be altered in both tools via their graphical user interfaces. A comparison of radiographic contrast measurements of the simulators to a real system was performed. The contrast output of the simulators had excellent agreement with measured results. The software simulators were deployed to 120 computers on campus. The software tools developed are easy-to-use, clearly demonstrate important x-ray physics and imaging principles, are accessible within a standard University setting and could be used to enhance the teaching of x-ray physics to undergraduate students. Current approaches to teaching x-ray physics in radiological science lack immediacy when linking theory with practice. This method of delivery allows students to engage with the subject in an experiential learning environment. Copyright © 2017. Published by Elsevier Ltd.
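
    The first simulator's 2D geometry (source, filter, test object, detector) amounts to Beer-Lambert attenuation of a polychromatic spectrum along each ray. A minimal sketch follows; the spectrum and attenuation coefficients are illustrative placeholders, not the tool's data.

    ```python
    import numpy as np

    # Beer-Lambert transmission through a stack of slabs for a polychromatic beam.
    energies_keV = np.array([40, 60, 80, 100])
    spectrum     = np.array([0.2, 0.4, 0.3, 0.1])      # relative fluence per bin

    mu_filter_Al = np.array([0.55, 0.25, 0.18, 0.15])  # 1/cm, toy values
    mu_tissue    = np.array([0.27, 0.21, 0.18, 0.17])
    mu_bone      = np.array([1.30, 0.60, 0.40, 0.35])

    def detector_signal(t_filter_cm, t_tissue_cm, t_bone_cm):
        """Energy-integrated signal behind filter + object (Beer-Lambert)."""
        transmission = np.exp(-(mu_filter_Al * t_filter_cm
                                + mu_tissue * t_tissue_cm
                                + mu_bone * t_bone_cm))
        return np.sum(spectrum * energies_keV * transmission)

    # Subject contrast between a soft-tissue path and a path crossing 1 cm of bone.
    s_soft = detector_signal(0.25, 10.0, 0.0)
    s_bone = detector_signal(0.25, 9.0, 1.0)
    print("contrast:", (s_soft - s_bone) / s_soft)
    ```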

  9. Advanced studies of electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Ling, Hao

    1994-01-01

    In radar signature applications it is often desirable to generate the range profiles and inverse synthetic aperture radar (ISAR) images of a target. They can be used either as identification tools to distinguish and classify the target from a collection of possible targets, or as diagnostic/design tools to pinpoint the key scattering centers on the target. The simulation of synthetic range profiles and ISAR images is usually a time intensive task and computation time is of prime importance. Our research has been focused on the development of fast simulation algorithms for range profiles and ISAR images using the shooting and bouncing ray (SBR) method, a high frequency electromagnetic simulation technique for predicting the radar returns from realistic aerospace vehicles and the scattering by complex media.

  10. DKIST Adaptive Optics System: Simulation Results

    NASA Astrophysics Data System (ADS)

    Marino, Jose; Schmidt, Dirk

    2016-05-01

    The 4 m class Daniel K. Inouye Solar Telescope (DKIST), currently under construction, will be equipped with an ultra high order solar adaptive optics (AO) system. The requirements and capabilities of such a solar AO system are beyond those of any other solar AO system currently in operation. We must rely on solar AO simulations to estimate and quantify its performance. We present performance estimation results of the DKIST AO system obtained with a new solar AO simulation tool. This simulation tool is a flexible and fast end-to-end solar AO simulator which produces accurate solar AO simulations while taking advantage of current multi-core computer technology. It relies on full imaging simulations of the extended field Shack-Hartmann wavefront sensor (WFS), which directly includes important secondary effects such as field dependent distortions and varying contrast of the WFS sub-aperture images.

  11. Can surgical simulation be used to train detection and classification of neural networks?

    PubMed

    Zisimopoulos, Odysseas; Flouty, Evangello; Stacey, Mark; Muscroft, Sam; Giataganas, Petros; Nehme, Jean; Chow, Andre; Stoyanov, Danail

    2017-10-01

    Computer-assisted interventions (CAI) aim to increase the effectiveness, precision and repeatability of procedures to improve surgical outcomes. The presence and motion of surgical tools is a key information input for CAI surgical phase recognition algorithms. Vision-based tool detection and recognition approaches are an attractive solution and can be designed to take advantage of the powerful deep learning paradigm that is rapidly advancing image recognition and classification. The challenge for such algorithms is the availability and quality of labelled data used for training. In this Letter, surgical simulation is used to train tool detection and segmentation based on deep convolutional neural networks and generative adversarial networks. The authors experiment with two network architectures for image segmentation in tool classes commonly encountered during cataract surgery. A commercially available simulator is used to create a simulated cataract dataset for training models prior to performing transfer learning on real surgical data. To the best of the authors' knowledge, this is the first attempt to train deep learning models for surgical instrument detection on simulated data while demonstrating promising results to generalise on real data. Results indicate that simulated data do have some potential for training advanced classification methods for CAI systems.

  12. Kinetic Simulation and Energetic Neutral Atom Imaging of the Magnetosphere

    NASA Technical Reports Server (NTRS)

    Fok, Mei-Ching H.

    2011-01-01

    Advanced simulation tools and measurement techniques have been developed to study the dynamic magnetosphere and its response to drivers in the solar wind. The Comprehensive Ring Current Model (CRCM) is a kinetic code that solves the 3D distribution in space, energy and pitch-angle information of energetic ions and electrons. Energetic Neutral Atom (ENA) imagers have been carried on past and current satellite missions. The global morphology of energetic ions was revealed by the observed ENA images. We have combined simulation and ENA analysis techniques to study the development of ring current ions during magnetic storms and substorms. We identify the timing and location of particle injection and loss. We examine the evolution of ion energy and pitch-angle distribution during different phases of a storm. In this talk we will discuss the findings from our ring current studies and how our simulation and ENA analysis tools can be applied to the upcoming TRIO-CINAMA mission.

  13. Interactive graphic editing tools in bioluminescent imaging simulation

    NASA Astrophysics Data System (ADS)

    Li, Hui; Tian, Jie; Luo, Jie; Wang, Ge; Cong, Wenxiang

    2005-04-01

    It is a challenging task to accurately describe complicated biological tissues and bioluminescent sources in bioluminescent imaging simulation. Several graphic editing tools have been developed to efficiently model each part of the bioluminescent simulation environment and to interactively correct or improve the initial models of anatomical structures or bioluminescent sources. There are two major types of graphic editing tools: non-interactive tools and interactive tools. Geometric building blocks (i.e. regular geometric graphics and superquadrics) are applied as non-interactive tools. To a certain extent, complicated anatomical structures and bioluminescent sources can be approximately modeled by combining a sufficient large number of geometric building blocks with Boolean operators. However, those models are too simple to describe the local features and fine changes in 2D/3D irregular contours. Therefore, interactive graphic editing tools have been developed to facilitate the local modifications of any initial surface model. With initial models composed of geometric building blocks, interactive spline mode is applied to conveniently perform dragging and compressing operations on 2D/3D local surface of biological tissues and bioluminescent sources inside the region/volume of interest. Several applications of the interactive graphic editing tools will be presented in this article.
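
    The non-interactive building-block approach corresponds to Boolean set operations on geometric primitives. A voxel-mask sketch of the idea follows; the shapes, sizes, and labels are illustrative, not the article's models.

    ```python
    import numpy as np

    # Voxelized geometric building blocks combined with Boolean operators.
    n = 96
    z, y, x = np.mgrid[0:n, 0:n, 0:n] / (n - 1) - 0.5   # coordinates in [-0.5, 0.5]

    body   = (x**2 / 0.20 + y**2 / 0.12 + z**2 / 0.12) <= 1.0   # ellipsoid "torso"
    organ  = ((x - 0.10) ** 2 + y ** 2 + z ** 2) <= 0.10 ** 2   # sphere
    cavity = ((x + 0.12) ** 2 + y ** 2 + z ** 2) <= 0.06 ** 2   # sphere

    phantom = np.zeros((n, n, n))
    phantom[body] = 1.0                  # background tissue
    phantom[body & organ] = 2.0          # intersection (Boolean AND) labels an organ
    phantom[body & cavity] = 0.0         # subtraction (AND NOT) carves a cavity

    # A bioluminescent source modeled as one more building block (a small sphere).
    source = ((x - 0.10) ** 2 + (y - 0.02) ** 2 + z ** 2) <= 0.02 ** 2
    ```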

  14. ImaSim, a software tool for basic education of medical x-ray imaging in radiotherapy and radiology

    NASA Astrophysics Data System (ADS)

    Landry, Guillaume; deBlois, François; Verhaegen, Frank

    2013-11-01

    Introduction: X-ray imaging is an important part of medicine and plays a crucial role in radiotherapy. Education in this field is mostly limited to textbook teaching due to equipment restrictions. A novel simulation tool, ImaSim, for teaching the fundamentals of the x-ray imaging process based on ray-tracing is presented in this work. ImaSim is used interactively via a graphical user interface (GUI). Materials and methods: The software package covers the main x-ray based medical modalities: planar kilovoltage (kV), planar (portal) megavoltage (MV), fan beam computed tomography (CT) and cone beam CT (CBCT) imaging. The user can modify the photon source, object to be imaged and imaging setup with three-dimensional editors. Objects are currently obtained by combining blocks with variable shapes. The imaging of three-dimensional voxelized geometries is currently not implemented, but can be added in a later release. The program follows a ray-tracing approach, ignoring photon scatter in its current implementation. Simulations of a phantom CT scan were generated in ImaSim and were compared to measured data in terms of CT number accuracy. Spatial variations in the photon fluence and mean energy from an x-ray tube caused by the heel effect were estimated from ImaSim and Monte Carlo simulations and compared. Results: In this paper we describe ImaSim and provide two examples of its capabilities. CT numbers were found to agree within 36 Hounsfield Units (HU) for bone, which corresponds to a 2% attenuation coefficient difference. ImaSim reproduced the heel effect reasonably well when compared to Monte Carlo simulations. Discussion: An x-ray imaging simulation tool is made available for teaching and research purposes. ImaSim provides a means to facilitate the teaching of medical x-ray imaging.
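
    The quoted correspondence between 36 HU and a 2% attenuation-coefficient difference follows directly from the CT-number definition, as the short check below shows; the μ values are illustrative.

    ```python
    # CT number definition: HU = 1000 * (mu - mu_water) / mu_water.
    mu_water = 0.20            # 1/cm at some effective energy (illustrative)

    def hounsfield(mu):
        return 1000.0 * (mu - mu_water) / mu_water

    # For bone-like mu ~ 0.36 1/cm, a 2% error in mu is 0.0072 1/cm,
    # i.e. 1000 * 0.0072 / 0.20 = 36 HU, matching the quoted figure.
    mu_bone = 0.36
    print(hounsfield(mu_bone * 1.02) - hounsfield(mu_bone))   # ~36 HU
    ```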

  15. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies.

    PubMed

    Häggström, Ida; Beattie, Bradley J; Schmidtlein, C Ross

    2016-06-01

    To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection), for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and evaluation of the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user specified method, settings, and corrections. Reconstructed images were compared to MC data, and simple Gaussian noised time activity curves (GAUSS). dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results, however since it uses simple scatter and random models it may not be suitable for studies investigating these phenomena. dPETSTEP can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.
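
    The per-frame projection-space pipeline (forward projection, counting noise, reconstruction) can be sketched with scikit-image's Radon tools. This is an illustrative stand-in, not dPETSTEP itself; the function name and count levels are hypothetical.

    ```python
    import numpy as np
    from skimage.transform import radon, iradon

    def simulate_pet_frame(activity, n_events, theta=None, rng=None):
        """Forward-project one activity frame, add counting noise, reconstruct.

        activity : 2D array of tracer activity for this time frame.
        n_events : expected coincidence count; controls the noise level.
        """
        rng = rng or np.random.default_rng()
        theta = theta if theta is not None else np.linspace(0.0, 180.0, 180, endpoint=False)
        sino = radon(activity, theta=theta)        # forward projection (sinogram)
        sino *= n_events / sino.sum()              # scale to the expected counts
        noisy = rng.poisson(np.maximum(sino, 0))   # Poisson counting noise
        return iradon(noisy.astype(float), theta=theta, filter_name="ramp")

    # One noisy frame of a disk phantom at two count levels.
    yy, xx = np.mgrid[-64:64, -64:64]
    phantom = ((xx**2 + yy**2) < 40**2).astype(float)
    lo = simulate_pet_frame(phantom, n_events=5e4)
    hi = simulate_pet_frame(phantom, n_events=5e6)  # visibly less noisy
    ```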

  16. [Development of a Text-Data Based Learning Tool That Integrates Image Processing and Displaying].

    PubMed

    Shinohara, Hiroyuki; Hashimoto, Takeyuki

    2015-01-01

    We developed a text-data-based learning tool that integrates image processing and display in Excel. Knowledge required for programming this tool is limited to using absolute, relative, and composite cell references and learning approximately 20 mathematical functions available in Excel. The new tool is capable of resolution translation, geometric transformation, spatial-filter processing, Radon transform, Fourier transform, convolutions, correlations, deconvolutions, wavelet transform, mutual information, and simulation of proton density-, T1-, and T2-weighted MR images. The processed images of 128 × 128 or 256 × 256 pixels are observed directly within Excel worksheets without using any particular image display software. The results of image processing using this tool were compared with those using C language, and the new tool was judged to have sufficient accuracy to be practically useful. The images displayed on Excel worksheets were compared with images displayed by binary-data display software. This comparison indicated that the image quality of the Excel worksheets was nearly equal to that of the latter in visual impression. Since image processing is performed using text data, the process is visible and can readily be checked against the mathematical equations within the program. We concluded that the newly developed tool is adequate as a computer-assisted learning tool for use in medical image processing.

  17. Monte Carlo simulations in X-ray imaging

    NASA Astrophysics Data System (ADS)

    Giersch, Jürgen; Durst, Jürgen

    2008-06-01

    Monte Carlo simulations have become crucial tools in many fields of X-ray imaging. They help to understand the influence of physical effects such as absorption, scattering and fluorescence of photons in different detector materials on image quality parameters. They allow studying new imaging concepts like photon counting, energy weighting or material reconstruction. Additionally, they can be applied to the fields of nuclear medicine to define virtual setups studying new geometries or image reconstruction algorithms. Furthermore, an implementation of the propagation physics of electrons and photons allows studying the behavior of (novel) X-ray generation concepts. This versatility of Monte Carlo simulations is illustrated with some examples done by the Monte Carlo simulation ROSI. An overview of the structure of ROSI is given as an example of a modern, well-proven, object-oriented, parallel computing Monte Carlo simulation for X-ray imaging.
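
    The per-photon bookkeeping such simulators perform (absorbed, scattered, transmitted) can be illustrated with a toy slab model. The interaction probabilities below are invented placeholders, not real cross sections, and this sketch is far simpler than a full code like ROSI.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Toy Monte Carlo: photons normally incident on a detector slab.
    mu_total  = 2.0        # 1/cm, total linear attenuation (illustrative)
    p_photo   = 0.7        # photoelectric fraction (full local absorption)
    thickness = 0.3        # cm

    n = 1_000_000
    depth = rng.exponential(1.0 / mu_total, n)       # free path to first interaction
    interacts = depth < thickness
    photo = interacts & (rng.random(n) < p_photo)    # absorbed and counted
    compton = interacts & ~photo                     # scattered at least once

    print("absorbed (counted):", photo.mean())
    print("scattered:         ", compton.mean())
    print("transmitted:       ", (~interacts).mean())
    ```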

  18. Efficient generation of image chips for training deep learning algorithms

    NASA Astrophysics Data System (ADS)

    Han, Sanghui; Fafard, Alex; Kerekes, John; Gartley, Michael; Ientilucci, Emmett; Savakis, Andreas; Law, Charles; Parhan, Jason; Turek, Matt; Fieldhouse, Keith; Rovito, Todd

    2017-05-01

    Training deep convolutional networks for satellite or aerial image analysis often requires a large amount of training data. For a more robust algorithm, training data need to have variations not only in the background and target, but also radiometric variations in the image such as shadowing, illumination changes, atmospheric conditions, and imaging platforms with different collection geometry. Data augmentation is a commonly used approach to generating additional training data. However, this approach is often insufficient in accounting for real world changes in lighting, location or viewpoint outside of the collection geometry. Alternatively, image simulation can be an efficient way to augment training data that incorporates all these variations, such as changing backgrounds, that may be encountered in real data. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is a tool that produces synthetic imagery using a suite of physics-based radiation propagation modules. DIRSIG can simulate images taken from different sensors with variation in collection geometry, spectral response, solar elevation and angle, atmospheric models, target, and background. Simulation of Urban Mobility (SUMO) is a multi-modal traffic simulation tool that explicitly models vehicles that move through a given road network. The output of the SUMO model was incorporated into DIRSIG to generate scenes with moving vehicles. The same approach was used when using helicopters as targets, but with slight modifications. Using the combination of DIRSIG and SUMO, we quickly generated many small images, with the target at the center with different backgrounds. The simulations generated images with vehicles and helicopters as targets, and corresponding images without targets. Using parallel computing, 120,000 training images were generated in about an hour. Some preliminary results show an improvement in the deep learning algorithm when real image training data are augmented with the simulated images, especially when obtaining sufficient real data was particularly challenging.

  19. A software system for the simulation of chest lesions

    NASA Astrophysics Data System (ADS)

    Ryan, John T.; McEntee, Mark; Barrett, Saoirse; Evanoff, Michael; Manning, David; Brennan, Patrick

    2007-03-01

    We report on the development of a novel software tool for the simulation of chest lesions. This software tool was developed for use in our study to determine optimal ambient lighting conditions for chest radiology. This study involved 61 consultant radiologists from the American Board of Radiology. Because of its success, we intend to use the same tool for future studies. The software has two main functions: the simulation of lesions and the retrieval of information for ROC (Receiver Operating Characteristic) and JAFROC (Jack-Knife Free Response ROC) analysis. The simulation layer operates by randomly selecting an image from a bank of reportedly normal chest x-rays. A random location is then generated for each lesion, which is checked against a reference lung map. If the location is within the lung fields, as derived from the lung map, a lesion is superimposed. Lesions are likewise randomly selected from a bank of manually created chest lesion images. A blending algorithm determines the best intensity levels for the lesion to sit naturally within the chest x-ray. The same software was used to run a study for all 61 radiologists. A sequence of images is displayed in random order. Half of these images had simulated lesions, ranging from subtle to obvious, and half of the images were normal. The operator then selects locations where he/she thinks lesions exist and grades each lesion accordingly. We found this software to be very effective in this study and intend to use the same principles for future studies.
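
    A minimal sketch of the placement-and-blending logic described above (the actual blending algorithm is not specified in the abstract; the alpha-blend and all names below are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def place_lesion(chest, lung_map, lesion, alpha=0.35, max_tries=100):
        """Draw random positions until the lesion center falls inside the lung
        fields of the reference lung map, then alpha-blend the lesion patch."""
        lh, lw = lesion.shape
        for _ in range(max_tries):
            r = int(rng.integers(0, chest.shape[0] - lh))
            c = int(rng.integers(0, chest.shape[1] - lw))
            if lung_map[r + lh // 2, c + lw // 2]:
                roi = chest[r:r + lh, c:c + lw]
                chest[r:r + lh, c:c + lw] = (1 - alpha) * roi + alpha * lesion
                return r, c
        return None  # no valid in-lung position found

    chest = rng.random((256, 256))
    lung_map = np.zeros((256, 256), dtype=bool)
    lung_map[60:200, 30:120] = True           # crude lung field
    print(place_lesion(chest, lung_map, np.ones((15, 15))))
    ```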

  20. Simulation of the Simbol-X telescope: imaging performance of a deformable x-ray telescope

    NASA Astrophysics Data System (ADS)

    Chauvin, Maxime; Roques, Jean-Pierre

    2009-08-01

    We have developed a simulation tool for a Wolter I telescope subject to deformations. The aim is to understand and predict the behavior of Simbol-X and other future missions (NuSTAR, Astro-H, IXO, ...). Our code, based on Monte Carlo ray tracing, computes the full photon trajectories up to the detector plane, taking the deformations into account. The degradation of the imaging system is corrected using metrology. This tool allows many analyses to be performed in order to optimize the configuration of any of these telescopes.

  1. MASTOS: Mammography Simulation Tool for design Optimization Studies.

    PubMed

    Spyrou, G; Panayiotakis, G; Tzanakos, G

    2000-01-01

    Mammography is a high-quality imaging technique for the detection of breast lesions, which requires dedicated equipment and optimum operation. The design parameters of a mammography unit have to be decided and evaluated before the construction of such a high-cost apparatus. The optimum operational parameters must also be defined well before the real breast examination. MASTOS is a software package, based on Monte Carlo methods, that is designed to be used as a simulation tool in mammography. The input consists of the parameters that have to be specified when using a mammography unit, and also the parameters specifying the shape and composition of the breast phantom. In addition, the input may specify parameters needed in the design of a new mammographic apparatus. The main output of the simulation is a mammographic image and calculations of various factors that describe the image quality. The Monte Carlo simulation code is PC-based and is driven by an outer shell with a graphical user interface. The entire software package is a simulation tool for mammography and can be applied in basic research and/or in training in the fields of medical physics and biomedical engineering, as well as in the performance evaluation of new designs of mammography units and in the determination of optimum standards for the operational parameters of a mammography unit.

  2. Simulation based planning of surgical interventions in pediatric cardiology

    NASA Astrophysics Data System (ADS)

    Marsden, Alison L.

    2013-10-01

    Hemodynamics plays an essential role in the progression and treatment of cardiovascular disease. However, while medical imaging provides increasingly detailed anatomical information, clinicians often have limited access to hemodynamic data that may be crucial to patient risk assessment and treatment planning. Computational simulations can now provide detailed hemodynamic data to augment clinical knowledge in both adult and pediatric applications. There is a particular need for simulation tools in pediatric cardiology, due to the wide variation in anatomy and physiology in congenital heart disease patients, necessitating individualized treatment plans. Despite great strides in medical imaging, enabling extraction of flow information from magnetic resonance and ultrasound imaging, simulations offer predictive capabilities that imaging alone cannot provide. Patient specific simulations can be used for in silico testing of new surgical designs, treatment planning, device testing, and patient risk stratification. Furthermore, simulations can be performed at no direct risk to the patient. In this paper, we outline the current state of the art in methods for cardiovascular blood flow simulation and virtual surgery. We then step through pressing challenges in the field, including multiscale modeling, boundary condition selection, optimization, and uncertainty quantification. Finally, we summarize simulation results of two representative examples from pediatric cardiology: single ventricle physiology, and coronary aneurysms caused by Kawasaki disease. These examples illustrate the potential impact of computational modeling tools in the clinical setting.

  3. Optronic System Imaging Simulator (OSIS): imager simulation tool of the ECOMOS project

    NASA Astrophysics Data System (ADS)

    Wegner, D.; Repasi, E.

    2018-04-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the visible or thermal infrared (IR). The project involves close co-operation of defense and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden. ECOMOS uses two approaches to calculate TA ranges: the analytical TRM4 model and the image-based Triangle Orientation Discrimination (TOD) model. In this paper the IR imager simulation tool, Optronic System Imaging Simulator (OSIS), is presented. It produces the virtual camera imagery required by the TOD approach. Pristine imagery is degraded by various effects caused by atmospheric attenuation, optics, detector footprint, sampling, fixed pattern noise, temporal noise and digital signal processing. The resulting images may be presented to observers or further processed for automatic image quality calculations. For convenience, OSIS incorporates camera descriptions and intermediate results provided by TRM4. As input, OSIS uses pristine imagery paired with meta-information about the scene content, its physical dimensions, and its gray-level interpretation. These images represent planar targets placed at specified distances from the imager. Furthermore, OSIS is extended by a plugin functionality that enables the integration of advanced digital signal processing techniques in ECOMOS, such as compression, local contrast enhancement, and digital turbulence mitigation, to name but a few. By means of this image-based approach, image degradations and image enhancements can be investigated, which goes beyond the scope of the analytical TRM4 model.
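
    A toy sketch of such a degradation chain (not OSIS itself; the parameters and the choice of a Gaussian PSF are illustrative assumptions):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(1)

    def degrade(pristine, psf_sigma=1.5, stride=2, fpn_sigma=0.01, tn_sigma=0.02):
        """Toy degradation chain: optics/footprint blur -> detector sampling ->
        fixed pattern noise (per-pixel gain) -> temporal noise."""
        blurred = gaussian_filter(pristine, psf_sigma)
        sampled = blurred[::stride, ::stride]
        gain = 1.0 + fpn_sigma * rng.standard_normal(sampled.shape)
        return sampled * gain + tn_sigma * rng.standard_normal(sampled.shape)

    frame = degrade(rng.random((512, 512)))
    ```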

  4. Infrared imagery acquisition process supporting simulation and real image training

    NASA Astrophysics Data System (ADS)

    O'Connor, John

    2012-05-01

    The increasing use of infrared sensors requires the development of advanced infrared training and simulation tools to meet current Warfighter needs. To prepare the force, training and simulation images must be realistic and mutually consistent in order to be effective and to avoid negative training. The US Army Night Vision and Electronic Sensors Directorate has addressed this deficiency by developing and implementing infrared image collection methods that meet the needs of both real-image trainers and real-time simulations. The author presents innovative methods for the collection of high-fidelity digital infrared images and the associated equipment and environmental standards. The collected images are the foundation for US Army and USMC Recognition of Combat Vehicles (ROC-V) real-image combat ID training, and also support simulations including the Night Vision Image Generator and Synthetic Environment Core. The characteristics, consistency, and quality of these images have contributed to the success of these and other programs. To date, this method has been employed to generate signature sets for over 350 vehicles. The needs of future physics-based simulations will also be met by this data. NVESD's ROC-V image database will support the development of training and simulation capabilities as Warfighter needs evolve.

  5. SimVascular: An Open Source Pipeline for Cardiovascular Simulation.

    PubMed

    Updegrove, Adam; Wilson, Nathan M; Merkow, Jameson; Lan, Hongzhi; Marsden, Alison L; Shadden, Shawn C

    2017-03-01

    Patient-specific cardiovascular simulation has become a paradigm in cardiovascular research and is emerging as a powerful tool in basic, translational and clinical research. In this paper we discuss the recent development of a fully open-source SimVascular software package, which provides a complete pipeline from medical image data segmentation to patient-specific blood flow simulation and analysis. This package serves as a research tool for cardiovascular modeling and simulation, and has contributed to numerous advances in personalized medicine, surgical planning and medical device design. The SimVascular software has recently been refactored and expanded to enhance the functionality, usability, efficiency and accuracy of image-based patient-specific modeling tools. Moreover, SimVascular previously required several licensed components that hindered new user adoption and code management; our recent developments have replaced these commercial components to create a fully open-source pipeline. These developments foster advances in cardiovascular modeling research, increased collaboration, standardization of methods, and a growing developer community.

  6. The numerical simulation tool for the MAORY multiconjugate adaptive optics system

    NASA Astrophysics Data System (ADS)

    Arcidiacono, C.; Schreiber, L.; Bregoli, G.; Diolaiti, E.; Foppiani, I.; Agapito, G.; Puglisi, A.; Xompero, M.; Oberti, S.; Cosentino, G.; Lombini, M.; Butler, R. C.; Ciliegi, P.; Cortecchia, F.; Patti, M.; Esposito, S.; Feautrier, P.

    2016-07-01

    The Multiconjugate Adaptive Optics RelaY (MAORY) is an Adaptive Optics module to be mounted on the ESO European Extremely Large Telescope (E-ELT). It is a hybrid Natural and Laser Guide Star system that will perform the correction of the atmospheric turbulence volume above the telescope, feeding the Multi-AO Imaging Camera for Deep Observations (MICADO) near-infrared spectro-imager. We developed an end-to-end Monte Carlo adaptive optics simulation tool to investigate the performance of MAORY and its calibration, acquisition and operation strategies. MAORY will implement Multiconjugate Adaptive Optics combining Laser Guide Star (LGS) and Natural Guide Star (NGS) measurements. The simulation tool implements the various aspects of MAORY in an end-to-end fashion. The code has been developed in IDL and uses libraries in C++ and CUDA for efficiency improvements. Here we recall the code architecture, describe the modeled instrument components and outline the control strategies implemented in the code.

  7. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Häggström, Ida, E-mail: haeggsti@mskcc.org; Beattie, Bradley J.; Schmidtlein, C. Ross

    2016-06-15

    Purpose: To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection) for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and for evaluating the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. Methods: The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby the effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user-specified method, settings, and corrections. Reconstructed images were compared to MC data and to simple Gaussian-noised time activity curves (GAUSS). Results: dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms and statistically by tumor region-of-interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. Conclusions: The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results; however, since it uses simple scatter and random models it may not be suitable for studies investigating these phenomena. dPETSTEP can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.
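
    For intuition about the pipeline described above, here is a minimal Python sketch of the same idea (not dPETSTEP itself, which is written in MATLAB; the kinetic model and all parameter values below are illustrative assumptions):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(7)

    def tac(t, k1, k2=0.05):
        """Illustrative one-compartment time activity curve (per voxel)."""
        return k1 / k2 * (1.0 - np.exp(-k2 * t))

    def simulate_frames(k1_map, frame_times, fwhm_vox=2.0, counts_scale=50.0):
        """Sample each voxel's TAC per frame, blur for system resolution,
        then apply Poisson counting noise."""
        sigma = fwhm_vox / 2.355
        frames = []
        for t in frame_times:
            blurred = gaussian_filter(tac(t, k1_map), sigma)
            frames.append(rng.poisson(blurred * counts_scale) / counts_scale)
        return np.stack(frames)

    k1_map = 0.1 * np.ones((32, 32))
    dynamic = simulate_frames(k1_map, frame_times=np.arange(1.0, 31.0, 5.0))
    ```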

  8. Physics-based subsurface visualization of human tissue.

    PubMed

    Sharp, Richard; Adams, Jacob; Machiraju, Raghu; Lee, Robert; Crane, Robert

    2007-01-01

    In this paper, we present a framework for simulating light transport in three-dimensional tissue with inhomogeneous scattering properties. Our approach employs a computational model to simulate light scattering in tissue through the finite element solution of the diffusion equation. Although our model handles both visible and nonvisible wavelengths, we especially focus on the interaction of near infrared (NIR) light with tissue. Since most human tissue is permeable to NIR light, tools are being constructed to noninvasively image tumors and blood vasculature and to monitor blood oxygenation levels. We apply this model to a numerical phantom to visually reproduce the images generated by these real-world tools. Therefore, in addition to enabling inverse design of detector instruments, our computational tools produce physically-accurate visualizations of subsurface structures.
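
    For reference, the steady-state diffusion approximation that such finite-element solvers discretize has the standard textbook form below (not quoted from the paper); Phi is the photon fluence, mu_a the absorption coefficient, mu_s' the reduced scattering coefficient, and S the source term:

    ```latex
    % Steady-state diffusion approximation (textbook form):
    -\nabla \cdot \bigl( D(\mathbf{r})\, \nabla \Phi(\mathbf{r}) \bigr)
      + \mu_a(\mathbf{r})\, \Phi(\mathbf{r}) = S(\mathbf{r}),
    \qquad D = \frac{1}{3\,(\mu_a + \mu_s')}
    ```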

  9. Modeling and performance assessment in QinetiQ of EO and IR airborne reconnaissance systems

    NASA Astrophysics Data System (ADS)

    Williams, John W.; Potter, Gary E.

    2002-11-01

    QinetiQ are the technical authority responsible for specifying the performance requirements for the procurement of airborne reconnaissance systems, on behalf of the UK MoD. They are also responsible for acceptance of delivered systems, overseeing and verifying the installed system performance as predicted and then assessed by the contractor. Measures of functional capability are central to these activities. The conduct of these activities utilises the broad technical insight and wide range of analysis tools and models available within QinetiQ. This paper focuses on the tools, methods and models that are applicable to systems based on EO and IR sensors. The tools, methods and models are described, and representative output for systems that QinetiQ has been responsible for is presented. The principal capability applicable to EO and IR airborne reconnaissance systems is the STAR (Simulation Tools for Airborne Reconnaissance) suite of models. STAR generates predictions of performance measures such as GRD (Ground Resolved Distance) and GIQE (General Image Quality Equation) NIIRS (National Imagery Interpretability Rating Scale). It also generates images representing sensor output, using the scene generation software CAMEO-SIM and the imaging sensor model EMERALD. The simulated image 'quality' is fully correlated with the predicted non-imaging performance measures. STAR also generates image and table data that is compliant with STANAG 7023, which may be used to test ground station functionality.
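
    For orientation, the GIQE regression underlying such NIIRS predictions has the general form below (GIQE version 4 coefficients quoted from memory and given only as an indication; GSD is the ground sample distance in inches, RER the relative edge response, H the edge overshoot, and G/SNR the noise gain over signal-to-noise ratio):

    ```latex
    % GIQE 4 regression, coefficients quoted from memory (indicative only):
    \mathrm{NIIRS} = 10.251 - a \log_{10}(\mathrm{GSD}_{GM})
      + b \log_{10}(\mathrm{RER}_{GM}) - 0.656\, H_{GM}
      - 0.344\, \frac{G}{\mathrm{SNR}}
    % with a = 3.32, b = 1.559 for RER >= 0.9 (a = 3.16, b = 2.817 otherwise)
    ```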

  10. Exoplanet Yield Estimation for Decadal Study Concepts using EXOSIMS

    NASA Astrophysics Data System (ADS)

    Morgan, Rhonda; Lowrance, Patrick; Savransky, Dmitry; Garrett, Daniel

    2016-01-01

    The anticipated upcoming large mission study concepts for the direct imaging of exo-earths present an exciting opportunity for exoplanet discovery and characterization. While these telescope concepts would also be capable of conducting a broad range of astrophysical investigations, the most difficult technology challenges are driven by the requirements for imaging exo-earths. The exoplanet science yield of these mission concepts will drive design trades and mission concept comparisons. To assist in these trade studies, the Exoplanet Exploration Program Office (ExEP) is developing a yield estimation tool that emphasizes transparency and consistent comparison of various design concepts. The tool will provide a parametric estimate of the science yield of various mission concepts using contrast curves from physics-based model codes and Monte Carlo simulations of design reference missions under realistic constraints, such as solar avoidance angles, the observatory orbit, propulsion limitations of star shades, the accessibility of candidate targets, local and background zodiacal light levels, and background confusion by stars and galaxies. The Python tool utilizes Dmitry Savransky's EXOSIMS (Exoplanet Open-Source Imaging Mission Simulator) design reference mission simulator, which is being developed for the WFIRST Preliminary Science program. ExEP is extending and validating the tool for future mission concepts under consideration for the upcoming 2020 decadal review. We present a validation plan and preliminary yield results for a point design.

  11. Simulation System for Training in Laparoscopic Surgery

    NASA Technical Reports Server (NTRS)

    Basdogan, Cagatay; Ho, Chih-Hao

    2003-01-01

    A computer-based simulation system creates a visual and haptic virtual environment for training a medical practitioner in laparoscopic surgery. Heretofore, it has been common practice to perform training in partial laparoscopic surgical procedures by use of a laparoscopic training box that encloses a pair of laparoscopic tools, objects to be manipulated by the tools, and an endoscopic video camera. However, the surgical procedures simulated by use of a training box are usually poor imitations of the actual ones. The present computer-based system improves training by presenting a more realistic simulated environment to the trainee. The system includes a computer monitor that displays a real-time image of the affected interior region of the patient, showing laparoscopic instruments interacting with organs and tissues, as would be viewed by use of an endoscopic video camera and displayed to a surgeon during a laparoscopic operation. The system also includes laparoscopic tools that the trainee manipulates while observing the image on the computer monitor. The instrumentation on the tools consists of (1) position and orientation sensors that provide input data for the simulation and (2) actuators that provide force feedback to simulate the contact forces between the tools and tissues. The simulation software includes components that model the geometries of surgical tools, components that model the geometries and physical behaviors of soft tissues, and components that detect collisions between them. Using the measured positions and orientations of the tools, the software detects whether they are in contact with tissues. In the event of contact, the deformations of the tissues and contact forces are computed by use of the geometric and physical models. The image on the computer screen shows tissues deformed accordingly, while the actuators apply the corresponding forces to the distal ends of the tools. For the purpose of demonstration, the system has been set up to simulate the insertion of a flexible catheter in a bile duct. [As thus configured, the system can also be used to simulate other endoscopic procedures (e.g., bronchoscopy and colonoscopy) that include the insertion of flexible tubes into flexible ducts.] A hybrid approach has been followed in developing the software for real-time simulation of the visual and haptic interactions (1) between the forceps and the catheter, (2) between the forceps and the duct, and (3) between the catheter and the duct. The deformations of the duct are simulated by finite-element and modal-analysis procedures, using only the most significant vibration modes of the duct for computing deformations and interaction forces. The catheter is modeled as a set of virtual particles uniformly distributed along the center line of the catheter and connected to each other via linear and torsional springs and damping elements. The interactions between the forceps and the duct as well as the catheter are simulated by use of a ray-based haptic-interaction-simulating technique in which the forceps are modeled as connected line segments.
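
    A minimal sketch of the particle-chain catheter model described above, assuming unit masses and explicit integration (torsional springs and tool/duct contact are omitted; all constants are illustrative):

    ```python
    import numpy as np

    def step_catheter(pos, vel, dt=1e-3, rest=0.5, k=200.0, c=2.0):
        """One explicit integration step for a catheter modeled as particles
        along its center line joined by linear springs with viscous damping
        (torsional springs and duct contact omitted; unit masses assumed)."""
        force = np.zeros_like(pos)
        d = pos[1:] - pos[:-1]                          # segment vectors
        length = np.linalg.norm(d, axis=1, keepdims=True)
        f = k * (length - rest) * d / length            # Hooke force per segment
        force[:-1] += f                                 # pull lower particle forward
        force[1:] -= f                                  # pull upper particle back
        force -= c * vel
        vel = vel + dt * force
        return pos + dt * vel, vel

    pos = np.column_stack([np.linspace(0.0, 5.0, 10), np.zeros(10), np.zeros(10)])
    vel = np.zeros_like(pos)
    for _ in range(1000):
        pos, vel = step_catheter(pos, vel)
    ```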

  12. Validation of a low dose simulation technique for computed tomography images.

    PubMed

    Muenzel, Daniela; Koehler, Thomas; Brown, Kevin; Zabić, Stanislav; Fingerle, Alexander A; Waldt, Simone; Bendik, Edgar; Zahel, Tina; Schneider, Armin; Dobritz, Martin; Rummeny, Ernst J; Noël, Peter B

    2014-01-01

    Evaluation of a new software tool for the generation of simulated low-dose computed tomography (CT) images from an original higher-dose scan. Original CT scan data (100 mAs, 80 mAs, 60 mAs, 40 mAs, 20 mAs, 10 mAs; 100 kV) of a swine were acquired (approved by the regional governmental commission for animal protection). Simulations of CT acquisition with a lower dose (simulated 10-80 mAs) were calculated using a low-dose simulation algorithm. The simulations were compared to the originals of the same dose level with regard to density values and image noise. Four radiologists assessed the realistic visual appearance of the simulated images. Image characteristics of simulated low-dose scans were similar to the originals. Mean overall discrepancy of image noise and CT values was -1.2% (range -9% to 3.2%) and -0.2% (range -8.2% to 3.2%), respectively, p>0.05. Confidence intervals of discrepancies ranged between 0.9-10.2 HU (noise) and 1.9-13.4 HU (CT values), without significant differences (p>0.05). Subjective observer evaluation of image appearance showed no visually detectable difference. Simulated low-dose images showed excellent agreement with the originals concerning image noise, CT density values, and subjective assessment of visual appearance. An authentic low-dose simulation opens up opportunities for staff education, protocol optimization and the introduction of new techniques.

  13. Direct Geolocation of Satellite Images with the EO-CFI Libraries

    NASA Astrophysics Data System (ADS)

    de Miguel, Eduardo; Prado, Elena; Estebanez, Monica; Martin, Ana I.; Gonzalez, Malena

    2016-08-01

    The INTA Remote Sensing Laboratory has implemented a tool for the direct geolocation of satellite images. The core of the tool is a C code based on the "Earth Observation Mission CFI SW" from ESA. The tool accepts different types of inputs for satellite attitude (Euler angles, quaternions, default attitude models). Satellite position can be provided either in ECEF or ECI coordinates. The line of sight (LOS) of each individual detector is imported from an external file or is generated by the tool from camera parameters. The global DEM ACE2 is used to define the ground intersection of the LOS. The tool has already been tailored for georeferencing images from the forthcoming Spanish Earth Observation mission SEOSat/Ingenio, and for the APIS camera onboard the INTA cubesat OPTOS. The next step is to configure it for the geolocation of Sentinel-2 L1b images. The tool has been internally validated by different means. This validation shows that the tool is suitable for georeferencing images from high spatial resolution missions. As part of the validation efforts, a code for simulating orbital information for LEO missions using EO-CFI has been produced.

  14. Optical asymmetric image encryption using gyrator wavelet transform

    NASA Astrophysics Data System (ADS)

    Mehra, Isha; Nishchal, Naveen K.

    2015-11-01

    In this paper, we propose a new optical information processing tool, termed the gyrator wavelet transform, to secure a fully phase image based on an amplitude- and phase-truncation approach. The gyrator wavelet transform has four basic parameters: the gyrator transform order, the type and level of the mother wavelet, and the position of different frequency bands. These parameters are used as encryption keys in addition to the random phase codes of the optical cryptosystem. This tool has also been applied for simultaneous compression and encryption of an image. The system's performance, its sensitivity to the encryption parameters such as the gyrator transform order, and its robustness have also been analyzed. It is expected that this tool will not only update current optical security systems, but may also shed some light on future developments. The computer simulation results demonstrate the abilities of the gyrator wavelet transform as an effective tool, which can be used in various optical information processing applications, including image encryption and image compression. This tool can also be applied for securing color, multispectral, and three-dimensional images.

  15. Aperture tolerances for neutron-imaging systems in inertial confinement fusion.

    PubMed

    Ghilea, M C; Sangster, T C; Meyerhofer, D D; Lerche, R A; Disdier, L

    2008-02-01

    Neutron-imaging systems are being considered as an ignition diagnostic for the National Ignition Facility (NIF) [Hogan et al., Nucl. Fusion 41, 567 (2001)]. Given the importance of these systems, a neutron-imaging design tool is being used to quantify the effects of aperture fabrication and alignment tolerances on reconstructed neutron images for inertial confinement fusion. The simulations indicate that alignment tolerances of more than 1 mrad would introduce measurable features in a reconstructed image for both pinholes and penumbral aperture systems. These simulations further show that penumbral apertures are several times less sensitive to fabrication errors than pinhole apertures.

  16. MO-DE-BRA-06: MrRSCAL: A Radiological Simulation Tool for Resident Education

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, W; Yanasak, N

    Purpose: The goal of this project was to create a readily accessible, comprehensive-yet-flexible interactive MRI simulation tool for use in the training and education of radiology residents in particular. This tool was developed to take the place of an MR scanner in laboratory activities, as magnet time has become scarce while hospitals optimize clinical scheduling for improved throughput. Methods: MrRSCAL (Magnetic resonance Resident Simulation Console for Active Learning) was programmed in Matlab on a Mac workstation running the OS X platform. MR-based brain images were obtained from one of the co-authors and processed to generate parametric maps. Scanner sounds are also generated via mp3 convolution of a single MR gradient slew with a time-profile of gradient waveforms. Results: MrRSCAL facilitates the simulation of multiple MR sequences with the ability to alter MR parameters via an intuitive GUI control panel. The application allows the user to gain a real-time understanding of image transformation when varying these parameters by examining the resulting images. Lab procedures can be loaded and displayed for more directed study. The panel is also configurable, providing a simple interface for elementary labs or a full array of controls for the expert user. Conclusion: Our introduction of MrRSCAL, which is readily available to users with a current laptop or workstation, allows for individual or group study of MR image acquisition with immediate educational feedback as the MR parameters are manipulated. MrRSCAL can be used at any time and any place once installed, offering a new tool for reviewing relaxometric and artifact principles when studying for boards or investigating properties of a pulse sequence. This tool promises to be extremely useful in conveying traditionally difficult and abstract concepts involved with MR to radiology residents and other medical professionals at large.
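
    The contrast behavior such a simulator exposes can be illustrated with the standard spin-echo signal model (a textbook equation; MrRSCAL's internal equations are not specified in the abstract, and the tissue values below are rough illustrative numbers):

    ```python
    import numpy as np

    def spin_echo(pd, t1, t2, tr, te):
        """Standard spin-echo signal: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2).
        pd/t1/t2 are parametric maps; TR/TE and relaxation times in ms."""
        return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

    # Rough illustrative values for white matter vs. CSF (ms).
    pd = np.array([0.7, 1.0])
    t1 = np.array([800.0, 4000.0])
    t2 = np.array([80.0, 2000.0])

    t1_weighted = spin_echo(pd, t1, t2, tr=500.0, te=15.0)    # short TR, short TE
    t2_weighted = spin_echo(pd, t1, t2, tr=4000.0, te=100.0)  # long TR, long TE
    ```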

  17. PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation

    NASA Astrophysics Data System (ADS)

    España, S; Herraiz, J L; Vicente, E; Vaquero, J J; Desco, M; Udias, J M

    2009-03-01

    Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set up by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LOR) histogramming to fully detailed list mode. These data can be further processed with the user's preferred analysis tools, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in a high level of detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.

  18. Development of computational small animal models and their applications in preclinical imaging and therapy research.

    PubMed

    Xie, Tianwu; Zaidi, Habib

    2016-01-01

    The development of multimodality preclinical imaging techniques and the rapid growth of realistic computer simulation tools have promoted the construction and application of computational laboratory animal models in preclinical research. Since the early 1990s, over 120 realistic computational animal models have been reported in the literature and used as surrogates to characterize the anatomy of actual animals for the simulation of preclinical studies involving the use of bioluminescence tomography, fluorescence molecular tomography, positron emission tomography, single-photon emission computed tomography, microcomputed tomography, magnetic resonance imaging, and optical imaging. Other applications include electromagnetic field simulation, ionizing and nonionizing radiation dosimetry, and the development and evaluation of new methodologies for multimodality image coregistration, segmentation, and reconstruction of small animal images. This paper provides a comprehensive review of the history and fundamental technologies used for the development of computational small animal models, with a particular focus on their application in preclinical imaging as well as in nonionizing and ionizing radiation dosimetry calculations. An overview is given of the overall process involved in the design of these models, including the fundamental elements used for the construction of different types of computational models, the identification of original anatomical data, the simulation tools used for solving various computational problems, and the applications of computational animal models in preclinical research. The authors also analyze the characteristics of the categories of computational models (stylized, voxel-based, and boundary representation) and discuss the technical challenges faced at the present time as well as research needs in the future.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dias, M F; Department of Radiation Oncology, Francis H. Burr Proton Therapy Center Massachusetts General Hospital; Seco, J

    Purpose: Research in carbon imaging has been growing over the past years as a way to increase treatment accuracy and patient positioning in carbon therapy. The purpose of this tool is to allow a fast and flexible way to generate CDRR data without the need to use Monte Carlo (MC) simulations. It can also be used to predict future clinically measured data. Methods: A Python interface has been developed, which uses information from CT or 4DCT and the treatment calibration curve to compute the Water Equivalent Path Length (WEPL) of carbon ions. A GPU-based ray tracing algorithm computes the WEPL of each individual carbon ion traveling through the CT voxels. A multiple peak detection method to estimate high-contrast margin positioning has been implemented (described elsewhere). MC simulations have been used to simulate carbon depth dose curves in order to simulate the response of a range detector. Results: The tool allows the upload of CT or 4DCT images. The user can select the phase/slice of interest as well as the position and angle. The WEPL is represented as a range detector, which can be used to assess range dilution and multiple peak detection effects. The tool also provides knowledge of the minimum energy that should be considered for imaging purposes. The multiple peak detection method has been used in a lung tumor case, showing an accuracy of 1 mm in determining the exact interface position. Conclusion: The tool offers an easy and fast way to simulate carbon imaging data. It can be used for educational and for clinical purposes, allowing the user to test beam energies and angles before real acquisition. An analysis add-on is being developed, where the user will have the opportunity to select different reconstruction methods and detector types (range or energy). Fundacao para a Ciencia e a Tecnologia (FCT), PhD Grant number SFRH/BD/85749/2012.
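
    A minimal CPU sketch of the WEPL computation described above (the real tool uses a GPU ray tracer and a clinical calibration curve; the HU-to-stopping-power mapping below is a hypothetical placeholder):

    ```python
    import numpy as np

    def hu_to_rsp(hu):
        """Hypothetical piecewise-linear calibration curve from CT number to
        relative stopping power (real curves are scanner- and site-specific)."""
        return np.interp(hu, [-1000.0, 0.0, 1000.0, 3000.0],
                             [0.0,     1.0, 1.5,    2.5])

    def wepl_along_ray(ct, start, direction, step_mm=0.5):
        """Accumulate water-equivalent path length by marching through the CT
        volume (a simple stepper, not the GPU ray tracer of the abstract);
        assumes 1 mm isotropic voxels."""
        pos = np.asarray(start, dtype=float)
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        wepl = 0.0
        while all(0.0 <= pos[i] < ct.shape[i] for i in range(3)):
            wepl += hu_to_rsp(ct[tuple(pos.astype(int))]) * step_mm
            pos += d * step_mm
        return wepl

    ct = np.zeros((50, 50, 50))                        # 0 HU = water phantom
    print(wepl_along_ray(ct, (25, 25, 0), (0, 0, 1)))  # ~50 mm of water
    ```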

  20. Use of Airborne Hyperspectral Data in the Simulation of Satellite Images

    NASA Astrophysics Data System (ADS)

    de Miguel, Eduardo; Jimenez, Marcos; Ruiz, Elena; Salido, Elena; Gutierrez de la Camara, Oscar

    2016-08-01

    The simulation of future images is part of the development phase of most Earth Observation missions. This simulation frequently uses images acquired from airborne instruments as a starting point. These instruments provide the required flexibility in acquisition parameters (time, date, illumination and observation geometry, ...) and high spectral and spatial resolution, well above the target values (as required by simulation tools). However, there are a number of important problems hampering the use of airborne imagery. One of these problems is that the observation zenith angles (OZA) are far from those that the missions to be simulated would use. We examine this problem by evaluating the difference in ground reflectance estimated from airborne images for different observation/illumination geometries. Next, we analyze a solution for simulation purposes, in which a Bidirectional Reflectance Distribution Function (BRDF) model is attached to an image of the isotropic surface reflectance. The results obtained confirm the need for reflectance anisotropy correction when using airborne images to create a reflectance map for simulation purposes. However, this correction should not be applied without providing the corresponding estimate of the BRDF, in the form of model parameters, to the simulation teams.

  1. The Electronic View Box: a software tool for radiation therapy treatment verification.

    PubMed

    Bosch, W R; Low, D A; Gerber, R L; Michalski, J M; Graham, M V; Perez, C A; Harms, W B; Purdy, J A

    1995-01-01

    We have developed a software tool for interactively verifying treatment plan implementation. The Electronic View Box (EVB) tool copies the paradigm of current practice but does so electronically. A portal image (online portal image or digitized port film) is displayed side by side with a prescription image (digitized simulator film or digitally reconstructed radiograph). The user can measure distances between features in prescription and portal images and "write" on the display, either to approve the image or to indicate required corrective actions. The EVB tool also provides several features not available in conventional verification practice using a light box. The EVB tool has been written in ANSI C using the X window system. The tool makes use of the Virtual Machine Platform and Foundation Library specifications of the NCI-sponsored Radiation Therapy Planning Tools Collaborative Working Group for portability into an arbitrary treatment planning system that conforms to these specifications. The present EVB tool is based on an earlier Verification Image Review tool, but with a substantial redesign of the user interface. A graphical user interface prototyping system was used in iteratively refining the tool layout to allow rapid modifications of the interface in response to user comments. Features of the EVB tool include 1) hierarchical selection of digital portal images based on physician name, patient name, and field identifier; 2) side-by-side presentation of prescription and portal images at equal magnification and orientation, and with independent grayscale controls; 3) "trace" facility for outlining anatomical structures; 4) "ruler" facility for measuring distances; 5) zoomed display of corresponding regions in both images; 6) image contrast enhancement; and 7) communication of portal image evaluation results (approval, block modification, repeat image acquisition, etc.). The EVB tool facilitates the rapid comparison of prescription and portal images and permits electronic communication of corrections in port shape and positioning.

  2. Aberration measurement of projection optics in lithographic tools based on two-beam interference theory.

    PubMed

    Ma, Mingying; Wang, Xiangzhao; Wang, Fan

    2006-11-10

    The degradation of image quality caused by aberrations of projection optics in lithographic tools is a serious problem in optical lithography. We propose what we believe to be a novel technique for measuring aberrations of projection optics based on two-beam interference theory. By utilizing the partial coherent imaging theory, a novel model that accurately characterizes the relative image displacement of a fine grating pattern to a large pattern induced by aberrations is derived. Both even and odd aberrations are extracted independently from the relative image displacements of the printed patterns by two-beam interference imaging of the zeroth and positive first orders. The simulation results show that by using this technique we can measure the aberrations present in the lithographic tool with higher accuracy.

  3. Simulation of a compact analyzer-based imaging system with a regular x-ray source

    NASA Astrophysics Data System (ADS)

    Caudevilla, Oriol; Zhou, Wei; Stoupin, Stanislav; Verman, Boris; Brankov, J. G.

    2017-03-01

    Analyzer-based Imaging (ABI) belongs to a broader family of phase-contrast (PC) X-ray techniques. PC measures X-ray deflection phenomena when interacting with a sample, which is known to provide higher contrast images of soft tissue than other X-ray methods. This is of high interest in the medical field, in particular for mammogram applications. This paper presents a simulation tool for table-top ABI systems using a conventional polychromatic X-ray source.

  4. Validation of CT dose-reduction simulation

    PubMed Central

    Massoumzadeh, Parinaz; Don, Steven; Hildebolt, Charles F.; Bae, Kyongtae T.; Whiting, Bruce R.

    2009-01-01

    The objective of this research was to develop and validate a custom computed tomography dose-reduction simulation technique for producing images that have an appearance consistent with the same scan performed at a lower mAs (with fixed kVp, rotation time, and collimation). Synthetic noise is added to projection (sinogram) data, incorporating a stochastic noise model that includes energy-integrating detectors, tube-current modulation, bowtie beam filtering, and electronic system noise. Experimental methods were developed to determine the parameters required for each component of the noise model. As a validation, the outputs of the simulations were compared to measurements with cadavers in the image domain and with phantoms in both the sinogram and image domain, using an unbiased root-mean-square relative error metric to quantify agreement in noise processes. Four-alternative forced-choice (4AFC) observer studies were conducted to confirm the realistic appearance of simulated noise, and the effects of various system model components on visual noise were studied. The “just noticeable difference (JND)” in noise levels was analyzed to determine the sensitivity of observers to changes in noise level. Individual detector measurements were shown to be normally distributed (p>0.54), justifying the use of a Gaussian random noise generator for simulations. Phantom tests showed the ability to match original and simulated noise variance in the sinogram domain to within 5.6%±1.6% (standard deviation), which was then propagated into the image domain with errors less than 4.1%±1.6%. Cadaver measurements indicated that image noise was matched to within 2.6%±2.0%. More importantly, the 4AFC observer studies indicated that the simulated images were realistic, i.e., no detectable difference between simulated and original images (p=0.86) was observed. JND studies indicated that observers’ sensitivity to change in noise levels corresponded to a 25% difference in dose, which is far larger than the noise accuracy achieved by simulation. In summary, the dose-reduction simulation tool demonstrated excellent accuracy in providing realistic images. The methodology promises to be a useful tool for researchers and radiologists to explore dose reduction protocols in an effort to produce diagnostic images with radiation dose “as low as reasonably achievable.” PMID:19235386
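
    For intuition, here is a minimal sketch of the core idea of such dose-reduction simulation, reduced to quantum noise only (the validated technique above additionally models energy-integrating detectors, tube-current modulation, bowtie filtering, and electronic noise); the scaling rule follows from Poisson statistics:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def simulate_lower_dose(counts, f):
        """Make a full-dose sinogram look like one acquired at fraction f of
        the mAs. If N ~ Poisson(lam), then f*N has variance f^2*lam while a
        true low-dose scan has variance f*lam, so Gaussian noise with variance
        f*(1-f)*lam (estimated by f*(1-f)*N) is added to make up the difference."""
        extra_sd = np.sqrt(np.maximum(f * (1.0 - f) * counts, 0.0))
        return f * counts + rng.normal(0.0, extra_sd)

    sino = rng.poisson(1.0e4, (180, 256)).astype(float)  # full-dose counts
    quarter_dose = simulate_lower_dose(sino, 0.25)
    ```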

  5. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images*

    PubMed Central

    Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G

    2014-01-01

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of a homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed for the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10^8 primary particles a 2% average difference with respect to the kernel convolution method was achieved; this difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image-based dosimetry in nuclear medicine. PMID:24200697

  6. ImageParser: a tool for finite element generation from three-dimensional medical images

    PubMed Central

    Yin, HM; Sun, LZ; Wang, G; Yamada, T; Wang, J; Vannier, MW

    2004-01-01

    Background The finite element method (FEM) is a powerful mathematical tool to simulate and visualize the mechanical deformation of tissues and organs during medical examinations or interventions. It is yet a challenge to build up an FEM mesh directly from a volumetric image, partially because the regions (or structures) of interest (ROIs) may be irregular and fuzzy. Methods A software package, ImageParser, is developed to generate an FEM mesh from 3-D tomographic medical images. This software uses a semi-automatic method to detect ROIs from the context of the image, including neighboring tissues and organs, completes segmentation of different tissues, and meshes the organ into elements. Results The ImageParser is shown to build up an FEM model for simulating the mechanical responses of the breast based on 3-D CT images. The breast is compressed by two plate paddles under an overall displacement as large as 20% of the initial distance between the paddles. The strain and tangential Young's modulus distributions are specified for the biomechanical analysis of breast tissues. Conclusion The ImageParser can successfully extract the geometry of ROIs from a complex medical image and generate the FEM mesh with customer-defined segmentation information. PMID:15461787

  7. Method of simulation and visualization of FDG metabolism based on VHP image

    NASA Astrophysics Data System (ADS)

    Cui, Yunfeng; Bai, Jing

    2005-04-01

    FDG ([18F] 2-fluoro-2-deoxy-D-glucose) is the typical tracer used in clinical PET (positron emission tomography) studies. FDG-PET is an important imaging tool for the early diagnosis and treatment of malignant tumors and functional disease. The main purpose of this work is to propose a method that represents FDG metabolism in the human body through the simulation and visualization of the 18F distribution process dynamically, based on the segmented VHP (Visible Human Project) image dataset. First, the plasma time-activity curve (PTAC) and the tissue time-activity curves (TTACs) are obtained from previous studies and the literature. According to the obtained PTAC and TTACs, a set of corresponding values is assigned to the segmented VHP image; a set of dynamic images is thus derived to show the 18F distribution in the tissues of interest for the predetermined sampling schedule. Finally, the simulated FDG distribution images are visualized in 3D and 2D formats, respectively, with the principal interaction functions incorporated. Compared with original PET images, our visualization results present higher resolution, because of the high resolution of the VHP image data, and show the distribution process of 18F dynamically. The results of our work can be used in education and related research, as well as a tool for PET operators to design their PET experiment programs.
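
    A minimal sketch of the assignment step described above, painting each segmented tissue with its TAC value frame by frame (the labels and curve parameters are invented for illustration; the paper takes its PTAC/TTACs from the literature):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    labels = rng.integers(0, 3, (64, 64))   # 0 = background, 1, 2 = two tissues

    def ttac(t, a, b):
        """Illustrative tissue time-activity curve (not the literature curves)."""
        return a * (np.exp(-b * t) - np.exp(-a * t))

    def frame_at(t_min):
        """Paint every voxel with its tissue's TAC value at time t_min."""
        value = {0: 0.0, 1: ttac(t_min, 0.9, 0.05), 2: ttac(t_min, 1.4, 0.02)}
        out = np.zeros(labels.shape)
        for lab, v in value.items():
            out[labels == lab] = v
        return out

    frames = np.stack([frame_at(t) for t in np.linspace(1.0, 60.0, 12)])
    ```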

  8. Validation of a Low Dose Simulation Technique for Computed Tomography Images

    PubMed Central

    Muenzel, Daniela; Koehler, Thomas; Brown, Kevin; Žabić, Stanislav; Fingerle, Alexander A.; Waldt, Simone; Bendik, Edgar; Zahel, Tina; Schneider, Armin; Dobritz, Martin; Rummeny, Ernst J.; Noël, Peter B.

    2014-01-01

    Purpose Evaluation of a new software tool for the generation of simulated low-dose computed tomography (CT) images from an original higher-dose scan. Materials and Methods Original CT scan data (100 mAs, 80 mAs, 60 mAs, 40 mAs, 20 mAs, 10 mAs; 100 kV) of a swine were acquired (approved by the regional governmental commission for animal protection). Simulations of CT acquisition with a lower dose (simulated 10–80 mAs) were calculated using a low-dose simulation algorithm. The simulations were compared to the originals of the same dose level with regard to density values and image noise. Four radiologists assessed the realistic visual appearance of the simulated images. Results Image characteristics of simulated low-dose scans were similar to the originals. Mean overall discrepancy of image noise and CT values was −1.2% (range −9% to 3.2%) and −0.2% (range −8.2% to 3.2%), respectively, p>0.05. Confidence intervals of discrepancies ranged between 0.9–10.2 HU (noise) and 1.9–13.4 HU (CT values), without significant differences (p>0.05). Subjective observer evaluation of image appearance showed no visually detectable difference. Conclusion Simulated low-dose images showed excellent agreement with the originals concerning image noise, CT density values, and subjective assessment of visual appearance. An authentic low-dose simulation opens up opportunities for staff education, protocol optimization and the introduction of new techniques. PMID:25247422

  9. Structural characterization and numerical simulations of flow properties of standard and reservoir carbonate rocks using micro-tomography

    NASA Astrophysics Data System (ADS)

    Islam, Amina; Chevalier, Sylvie; Sassi, Mohamed

    2018-04-01

    With advances in imaging techniques and computational power, Digital Rock Physics (DRP) is becoming an increasingly popular tool to characterize reservoir samples and determine their internal structure and flow properties. In this work, we present the details of the imaging, segmentation, and numerical simulation of single-phase flow through a standard homogeneous Silurian dolomite core plug sample as well as a heterogeneous sample from a carbonate reservoir. We develop a procedure that integrates experimental results into the segmentation step to calibrate the porosity. We also examine two different numerical tools for the simulation, namely Avizo Fire Xlab Hydro, which solves the Stokes equations via the finite volume method, and Palabos, which solves the same equations using the Lattice Boltzmann Method. Representative Elementary Volume (REV) and isotropy studies are conducted on the two samples, and we show how DRP can be a useful tool to characterize rock properties that are time-consuming and costly to obtain experimentally.
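
    As a toy illustration of the porosity and REV analysis mentioned above (not the Avizo or Palabos workflows; the synthetic volume and threshold are arbitrary assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    volume = rng.random((200, 200, 200)) < 0.18   # True = pore voxel (synthetic)

    def rev_curve(v, sizes):
        """Porosity of growing cubic subvolumes centered in the image; the
        REV is reached roughly where this curve flattens out."""
        center = np.array(v.shape) // 2
        curve = []
        for s in sizes:
            h = s // 2
            sub = v[center[0]-h:center[0]+h,
                    center[1]-h:center[1]+h,
                    center[2]-h:center[2]+h]
            curve.append(float(sub.mean()))   # porosity = pore fraction
        return curve

    print(rev_curve(volume, [20, 50, 100, 150, 200]))
    ```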

  10. Towards an in-plane methodology to track breast lesions using mammograms and patient-specific finite-element simulations

    NASA Astrophysics Data System (ADS)

    Lapuebla-Ferri, Andrés; Cegoñino-Banzo, José; Jiménez-Mocholí, Antonio-José; Pérez del Palomar, Amaya

    2017-11-01

    In breast cancer screening or diagnosis, it is usual to combine different images in order to locate a lesion as accurately as possible. These images are generated using one or several imaging techniques. As x-ray-based mammography is widely used, a breast lesion is located in the plane of the image (mammogram), but tracking it across mammograms corresponding to different views is a challenging task for medical physicians. Accordingly, simulation tools and methodologies that use patient-specific numerical models can facilitate the task of fusing information from different images. Additionally, these tools need to be as straightforward as possible to facilitate their translation to the clinical area. This paper presents a patient-specific, finite-element-based and semi-automated simulation methodology to track breast lesions across mammograms. A realistic three-dimensional computer model of a patient's breast was generated from magnetic resonance imaging to simulate mammographic compressions in the cranio-caudal (CC, head-to-toe) and medio-lateral oblique (MLO, shoulder-to-opposite hip) directions. For each compression being simulated, a virtual mammogram was obtained and subsequently superimposed on the corresponding real mammogram, using the nipple as a common feature. Two-dimensional rigid-body transformations were applied, and the error distance measured between the centroids of the tumors previously located on each image was 3.84 mm and 2.41 mm for the CC and MLO compressions, respectively. Considering that the scope of this work is to conceive a methodology translatable to clinical practice, the results indicate that it could be helpful in supporting the tracking of breast lesions.
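
    A minimal sketch of the landmark-based 2D rigid alignment step described above (all coordinates are invented for illustration; the paper's actual registration uses the full virtual and real mammograms):

    ```python
    import numpy as np

    def rigid_2d(points, theta_deg, translation):
        """Apply a 2D rigid-body transform (rotation then translation)."""
        th = np.deg2rad(theta_deg)
        rot = np.array([[np.cos(th), -np.sin(th)],
                        [np.sin(th),  np.cos(th)]])
        return points @ rot.T + np.asarray(translation)

    # Invented landmark coordinates (mm): the nipple is the shared feature.
    nipple_virtual, tumor_virtual = np.array([60.0, 40.0]), np.array([75.0, 55.0])
    nipple_real,    tumor_real    = np.array([62.0, 41.0]), np.array([78.0, 52.0])

    # Translate the virtual mammogram so the nipples coincide, then measure
    # the residual distance between tumor centroids (the reported error metric).
    shift = nipple_real - nipple_virtual
    tumor_mapped = rigid_2d(tumor_virtual[None, :], 0.0, shift)[0]
    print(np.linalg.norm(tumor_mapped - tumor_real))
    ```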

  11. Biological Visualization, Imaging and Simulation(Bio-VIS) at NASA Ames Research Center: Developing New Software and Technology for Astronaut Training and Biology Research in Space

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey

    2003-01-01

    The Bio-Visualization, Imaging and Simulation (BioVIS) Technology Center at NASA's Ames Research Center is dedicated to developing and applying advanced visualization, computation and simulation technologies to support NASA Space Life Sciences research and the objectives of the Fundamental Biology Program. Research ranges from high resolution 3D cell imaging and structure analysis, virtual environment simulation of fine sensory-motor tasks, computational neuroscience and biophysics to biomedical/clinical applications. Computer simulation research focuses on the development of advanced computational tools for astronaut training and education. Virtual Reality (VR) and Virtual Environment (VE) simulation systems have become important training tools in many fields, from flight simulation to, more recently, surgical simulation. The type and quality of training provided by these computer-based tools ranges widely, but the value of real-time VE computer simulation as a method of preparing individuals for real-world tasks is well established. Astronauts routinely use VE systems for various training tasks, including Space Shuttle landings, robot arm manipulations and extravehicular activities (space walks). Currently, there are no VE systems to train astronauts for basic and applied research experiments, which are an important part of many missions. The Virtual Glovebox (VGX) is a prototype VE system for real-time, physically-based simulation of the Life Sciences Glovebox, where astronauts will perform many complex tasks supporting research experiments aboard the International Space Station. The VGX consists of a physical display system utilizing dual LCD projectors and circular polarization to produce a desktop-sized 3D virtual workspace. Physically-based modeling tools (Arachi Inc.) provide real-time collision detection, rigid-body dynamics, physical properties and force-based controls for objects. The human-computer interface consists of two magnetic tracking devices (Ascension Inc.) attached to instrumented gloves (Immersion Inc.) which co-locate the user's hands with hand/forearm representations in the virtual workspace. Force feedback is possible in a work volume defined by a Phantom Desktop device (SensAble Inc.). Graphics are written in OpenGL. The system runs on a 2.2 GHz Pentium 4 PC. The prototype VGX provides astronauts and support personnel with a real-time, physically-based VE system to simulate basic research tasks both on Earth and in the microgravity of Space. The immersive virtual environment of the VGX also makes it a useful tool for virtual engineering applications, including CAD development, procedure design and simulation of human systems in a desktop-sized work volume.

  12. MO-DE-BRA-03: TOPAS-edu: A Window Into the Stochastic World Through the TOPAS Tool for Particle Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perl, J; Villagomez-Bernabe, B; Currell, F

    2015-06-15

    Purpose: The stochastic nature of the subatomic world presents a challenge for physics education. Even experienced physicists can be amazed at the varied behavior of the electrons, x-rays, protons, neutrons, ions and the many short-lived particles that make up the overall behavior of our accelerators, brachytherapy sources and medical imaging systems. The all-particle Monte Carlo particle transport tool, TOPAS Tool for Particle Simulation, originally developed for proton therapy research, has been repurposed into a physics teaching tool, TOPAS-edu. Methods: TOPAS-edu students set up simulated particle sources, collimators, scatterers, imagers and scoring setups by writing simple ASCII files (in the TOPAS Parameter Control System format). Students visualize geometry setups and particle trajectories in a variety of modes, from OpenGL graphics to VRML 3D viewers to GIF and PostScript image files. Results written to simple comma-separated-values files are imported by the student into their preferred data analysis tool. Students can vary random seeds or adjust parameters of physics processes to better understand the stochastic nature of subatomic physics. Results: TOPAS-edu has been successfully deployed as the centerpiece of a physics course for master's students at Queen's University Belfast. Tutorials developed there take students through a step-by-step course on the basics of particle transport and interaction, scattering, Bremsstrahlung, etc. At each step in the course, students build simulated experimental setups and then analyze the simulated results. Lessons build one upon another so that a student might end up with a full simulation of a medical accelerator, a water phantom or an imager. Conclusion: TOPAS-edu was well received by students. A second application of TOPAS-edu is currently in development at Zurich University of Applied Sciences, Switzerland. It is our eventual goal to make TOPAS-edu available free of charge to any non-profit organization, along with associated tutorial materials developed by the TOPAS-edu community. Work supported in part by the U.S. Department of Energy under contract number DE-AC02-76SF00515. B. Villagomez-Bernabe is supported by CONACyT (Mexican Council for Science and Technology) project 231844.
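    For readers unfamiliar with the TOPAS Parameter Control System, the following Python fragment writes a minimal parameter file of the kind students edit. The type-prefixed "s:/d:/i:" line format follows TOPAS conventions, but the specific parameter names here are illustrative and should be verified against the TOPAS documentation for the installed version:

```python
# Write a minimal TOPAS-style parameter file. The "type:Category/Name = value"
# line format follows the TOPAS Parameter Control System; the parameter names
# below are illustrative, not guaranteed to match a given TOPAS release.
params = """\
# World volume filled with air
s:Ge/World/Material = "G4_AIR"
d:Ge/World/HLX = 1. m
d:Ge/World/HLY = 1. m
d:Ge/World/HLZ = 1. m

# A simple proton source the student can modify
s:So/Demo/Type = "Beam"
s:So/Demo/BeamParticle = "proton"
d:So/Demo/BeamEnergy = 100. MeV
i:So/Demo/NumberOfHistoriesInRun = 1000

# Vary the seed to explore the stochastic nature of the transport
i:Ts/Seed = 42
"""

with open("demo_setup.txt", "w") as handle:
    handle.write(params)
```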

  13. Generation of 3D synthetic breast tissue

    NASA Astrophysics Data System (ADS)

    Elangovan, Premkumar; Dance, David R.; Young, Kenneth C.; Wells, Kevin

    2016-03-01

    Virtual clinical trials are an emergent approach for the rapid evaluation and comparison of various breast imaging technologies and techniques using computer-based modeling tools. A fundamental requirement of this approach for mammography is the use of realistic-looking breast anatomy in the studies to produce clinically relevant results. In this work, a biologically inspired approach has been used to simulate realistic synthetic breast phantom blocks for use in virtual clinical trials. A variety of high- and low-frequency features (including Cooper's ligaments, blood vessels and glandular tissue) have been extracted from clinical digital breast tomosynthesis images and used to simulate synthetic breast blocks. The appearance of the phantom blocks was validated by presenting a selection of simulated 2D and DBT images, interleaved with real images, to a team of experienced readers for rating using an ROC paradigm. The average areas under the curve for 2D and DBT images were 0.53 ± 0.04 and 0.55 ± 0.07, respectively, where the errors are the standard errors of the mean. These values indicate that the observers had difficulty in differentiating the real images from the simulated ones. The statistical properties of simulated images of the phantom blocks were evaluated by means of power spectrum analysis. The power spectrum curves for real and simulated images closely match and overlap, indicating good agreement.

  14. [Accuracy Check of Monte Carlo Simulation in Particle Therapy Using Gel Dosimeters].

    PubMed

    Furuta, Takuya

    2017-01-01

    Gel dosimeters are a three-dimensional imaging tool for dose distributions induced by radiation. They can be used to check the accuracy of Monte Carlo simulations in particle therapy, and one such application is reviewed in this article. An inhomogeneous biological sample with a gel dosimeter placed behind it was irradiated with a carbon beam. The dose distribution recorded in the gel dosimeter reflected the inhomogeneity of the biological sample. A Monte Carlo simulation was conducted by reconstructing the biological sample from its CT image, and the accuracy of the simulated particle transport was checked by comparing the simulated and measured dose distributions in the gel dosimeter.

  15. Optimising probe holder design for sentinel lymph node imaging using clinical photoacoustic system with Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Sivasubramanian, Kathyayini; Periyasamy, Vijitha; Wen, Kew Kok; Pramanik, Manojit

    2017-03-01

    Photoacoustic tomography is a hybrid imaging modality that combines optical and ultrasound imaging, and it is rapidly gaining attention in the field of medical imaging. The challenge is to translate it into a clinical setup. In this work, we report the development of a handheld clinical photoacoustic imaging system: a clinical ultrasound imaging system was modified to integrate photoacoustic imaging, so light delivery is built into the ultrasound probe. The angle of light delivery was optimized with respect to the depth of imaging, based on Monte Carlo simulation of light transport in tissues. Guided by the simulation results, probe holders were fabricated using 3D printing, and similar results were obtained experimentally using phantoms developed to mimic the sentinel lymph node imaging scenario. In vivo sentinel lymph node imaging was also performed with the same system, using the contrast agent methylene blue, to a depth of 1.5 cm. The results validate that Monte Carlo simulation can be used as a tool to optimize the probe holder design for the imaging needs at hand, eliminating the trial-and-error approach generally used for designing a probe holder.
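    For orientation, here is a minimal sketch of the kind of Monte Carlo photon transport that underlies such optimizations, assuming an infinite homogeneous medium, isotropic scattering and invented optical coefficients; dedicated tissue-optics codes add layered geometry and the forward-peaked Henyey-Greenstein phase function:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented optical properties (per mm); real values depend on tissue/wavelength.
mu_a, mu_s = 0.1, 10.0           # absorption / scattering coefficients
mu_t = mu_a + mu_s               # total interaction coefficient
survival = mu_s / mu_t           # probability an interaction is a scatter

def simulate_photon(max_steps=100000):
    """Random-walk one photon; return the depth (z) at which it is absorbed."""
    pos = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])    # launched straight into the tissue
    for _ in range(max_steps):
        step = -np.log(rng.random()) / mu_t  # free path length ~ Exp(mu_t)
        pos = pos + step * direction
        if rng.random() > survival:          # absorbed at this interaction
            return pos[2]
        # Isotropic scattering (real tissue is strongly forward-peaked).
        cos_t = 2.0 * rng.random() - 1.0
        phi = 2.0 * np.pi * rng.random()
        sin_t = np.sqrt(1.0 - cos_t**2)
        direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    return pos[2]

depths = [simulate_photon() for _ in range(2000)]
print(f"Mean absorption depth: {np.mean(depths):.2f} mm")
```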

  16. Processing infrared images of aircraft lapjoints

    NASA Technical Reports Server (NTRS)

    Syed, Hazari; Winfree, William P.; Cramer, K. E.

    1992-01-01

    Techniques for processing IR images of aging-aircraft lapjoint data are discussed. Attention is given to a technique for detecting disbonds in aircraft lapjoints which clearly delineates the disbonded region from the bonded regions. The technique is weak on unpainted aircraft skin surfaces, but this limitation can be overcome by using a self-adhering contact sheet. Neural network analysis of raw temperature data has been shown to be an effective tool for visualization of images. Numerical simulation results show the above processing technique to be effective in delineating disbonds.

  17. Dynamic simulation of the effect of soft toric contact lenses movement on retinal image quality.

    PubMed

    Niu, Yafei; Sarver, Edwin J; Stevenson, Scott B; Marsack, Jason D; Parker, Katrina E; Applegate, Raymond A

    2008-04-01

    To report the development of a tool designed to dynamically simulate the effect of soft toric contact lens movement on retinal image quality, initial findings in three eyes, and the next steps to be taken to improve the utility of the tool. Three eyes of two subjects wearing soft toric contact lenses were cyclopleged with 1% cyclopentolate and 2.5% phenylephrine. Four hundred wavefront aberration measurements over a 5-mm pupil were recorded during soft contact lens wear at 30 Hz using a complete ophthalmic analysis system aberrometer. Each wavefront error measurement was input into Visual Optics Laboratory (version 7.15, Sarver and Associates, Inc.) to generate a retinal simulation of a high-contrast logMAR visual acuity chart. The individual simulations were combined into a single dynamic movie using a custom MATLAB PsychToolbox program. Visual acuity was measured for each eye reading the movie with best cycloplegic spectacle correction through a 3-mm artificial pupil, to minimize the influence of the eyes' uncorrected aberrations. The simulated acuity was compared to values recorded while the subject read unaberrated charts with contact lenses through a 5-mm artificial pupil. For one study eye, average acuity was the same as in the natural contact lens viewing condition. For the other two study eyes, visual acuity for the best simulation was more than one line worse than under natural viewing conditions. Dynamic simulation of retinal image quality, although not yet perfect, is a promising technique for visually illustrating the optical effects on image quality caused by the movement of alignment-sensitive corrections.
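    The core of such a retinal-image simulation is standard Fourier optics: build a complex pupil function from the measured wavefront error, take the squared modulus of its Fourier transform to obtain the point-spread function (PSF), and convolve the acuity chart with the PSF. A minimal sketch with invented aberration coefficients (not the Visual Optics Laboratory implementation):

```python
import numpy as np

N = 256
wavelength_mm = 0.55e-3          # 550 nm, in mm
x = np.linspace(-1.0, 1.0, N)    # normalized pupil coordinates
X, Y = np.meshgrid(x, x)
rho2 = X**2 + Y**2
aperture = rho2 <= 1.0           # circular pupil of normalized radius 1

# Invented wavefront error: a little defocus plus astigmatism (in microns).
wfe_um = 0.15 * (2.0 * rho2 - 1.0) + 0.10 * (X**2 - Y**2)
phase = 2.0 * np.pi * (wfe_um * 1e-3) / wavelength_mm

# Pupil function -> PSF (squared modulus of the Fourier transform).
pupil = aperture * np.exp(1j * phase)
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
psf /= psf.sum()

# Blur a crude chart (one dark bar) by circular convolution with the PSF.
chart = np.ones((N, N))
chart[100:156, 60:200] = 0.0
retinal = np.real(np.fft.ifft2(np.fft.fft2(chart)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
```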

  18. Development of computational small animal models and their applications in preclinical imaging and therapy research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Tianwu; Zaidi, Habib, E-mail: habib.zaidi@hcuge.ch; Geneva Neuroscience Center, Geneva University, Geneva CH-1205

    The development of multimodality preclinical imaging techniques and the rapid growth of realistic computer simulation tools have promoted the construction and application of computational laboratory animal models in preclinical research. Since the early 1990s, over 120 realistic computational animal models have been reported in the literature and used as surrogates to characterize the anatomy of actual animals for the simulation of preclinical studies involving the use of bioluminescence tomography, fluorescence molecular tomography, positron emission tomography, single-photon emission computed tomography, microcomputed tomography, magnetic resonance imaging, and optical imaging. Other applications include electromagnetic field simulation, ionizing and nonionizing radiation dosimetry, and the development and evaluation of new methodologies for multimodality image coregistration, segmentation, and reconstruction of small animal images. This paper provides a comprehensive review of the history and fundamental technologies used for the development of computational small animal models, with a particular focus on their application in preclinical imaging as well as nonionizing and ionizing radiation dosimetry calculations. An overview is given of the overall process involved in the design of these models, including the fundamental elements used for the construction of different types of computational models, the identification of original anatomical data, the simulation tools used for solving various computational problems, and the applications of computational animal models in preclinical research. The authors also analyze the characteristics of the categories of computational models (stylized, voxel-based, and boundary representation) and discuss the technical challenges faced at the present time as well as future research needs.

  19. Model-based surgical planning and simulation of cranial base surgery.

    PubMed

    Abe, M; Tabuchi, K; Goto, M; Uchino, A

    1998-11-01

    Plastic skull models of seven individual patients were fabricated by stereolithography from three-dimensional data based on computed tomography bone images. Skull models were utilized for neurosurgical planning and simulation in the seven patients with cranial base lesions that were difficult to remove. Surgical approaches and areas of craniotomy were evaluated using the fabricated skull models. In preoperative simulations, hand-made models of the tumors, major vessels and nerves were placed in the skull models. Step-by-step simulation of surgical procedures was performed using actual surgical tools. The advantages of using skull models to plan and simulate cranial base surgery include a better understanding of anatomic relationships, preoperative evaluation of the proposed procedure, increased understanding by the patient and family, and improved educational experiences for residents and other medical staff. The disadvantages of using skull models include the time and cost of making the models. The skull models provide a more realistic tool that is easier to handle than computer-graphic images. Surgical simulation using models facilitates difficult cranial base surgery and may help reduce surgical complications.

  20. X-ray EM simulation tool for ptychography dataset construction

    NASA Astrophysics Data System (ADS)

    Stoevelaar, L. Pjotr; Gerini, Giampiero

    2018-03-01

    In this paper, we present an electromagnetic full-wave modeling framework as a supporting EM tool that provides data sets for X-ray ptychographic imaging. Modeling the entire scattering problem with Finite Element Method (FEM) tools is, in fact, a prohibitive task, because of the large area illuminated by the beam (due to the poor focusing power at these wavelengths) and the very small features to be imaged. To overcome this problem, the spectrum of the illumination beam is decomposed into a discrete set of plane waves. This allows the electromagnetic modeling volume to be reduced to the one enclosing the area to be imaged. The total scattered field is then reconstructed by superimposing the solutions for each plane-wave illumination.
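    The decomposition itself is a standard angular-spectrum operation: the FFT of the complex beam profile gives the amplitude of each plane-wave component, and only propagating components with non-negligible amplitude need to be passed to the FEM solver. A minimal sketch with invented sampling and wavelength values:

```python
import numpy as np

N = 512
dx = 5.0                 # sampling pitch (nm), invented
wavelength = 1.0         # X-ray wavelength (nm), invented
k0 = 2.0 * np.pi / wavelength

x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
beam = np.exp(-(X**2 + Y**2) / (2.0 * (50.0 * dx)**2))  # Gaussian probe profile

# FFT of the beam profile = amplitudes of its plane-wave components.
spectrum = np.fft.fftshift(np.fft.fft2(beam))
kx = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
KX, KY = np.meshgrid(kx, kx)

# Keep propagating components whose amplitude is non-negligible.
propagating = KX**2 + KY**2 < k0**2
mask = propagating & (np.abs(spectrum) > 1e-3 * np.abs(spectrum).max())
amplitudes = spectrum[mask]
kz = np.sqrt(k0**2 - KX[mask]**2 - KY[mask]**2)
directions = np.stack([KX[mask], KY[mask], kz], axis=1)
print(f"{len(amplitudes)} plane waves retained for the FEM solver")
```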

  1. An imaging-based stochastic model for simulation of tumour vasculature

    NASA Astrophysics Data System (ADS)

    Adhikarla, Vikram; Jeraj, Robert

    2012-10-01

    A mathematical model which reconstructs the structure of existing vasculature using patient-specific anatomical, functional and molecular imaging as input was developed. The vessel structure is modelled according to empirical vascular parameters, such as the mean vessel branching angle. The model is calibrated such that the resultant oxygen map modelled from the simulated microvasculature stochastically matches the input oxygen map to a high degree of accuracy (R² ≈ 1). The calibrated model was successfully applied to preclinical imaging data. Starting from the anatomical vasculature image (obtained from contrast-enhanced computed tomography), a representative map of the complete vasculature was stochastically simulated as determined by the oxygen map (obtained from hypoxia [64Cu]Cu-ATSM positron emission tomography). The simulated microscopic vasculature and the calculated oxygenation map successfully represent the imaged hypoxia distribution (R² = 0.94). The model elicits the parameters required to simulate vasculature consistent with imaging and provides a key mathematical relationship between the vessel volume and the tissue oxygen tension. Apart from providing an excellent framework for visualizing the gap between microscopic and macroscopic imaging, the model has the potential to be extended as a tool to study the dynamics between the tumour and the vasculature in a patient-specific manner, and it has an application in the simulation of anti-angiogenic therapies.

  2. Software Tools for Battery Design | Transportation Research | NREL

    Science.gov Websites

    Under NREL's Computer-Aided Engineering for Batteries effort, software tools for battery design help battery designers, developers, and manufacturers create affordable, high-performance lithium-ion (Li-ion) batteries for next-generation electric-drive vehicles (EDVs).

  3. Simulation for Teaching and Assessment of Nodule Perception on Chest Radiography in Nonradiology Health Care Trainees.

    PubMed

    Auffermann, William F; Henry, Travis S; Little, Brent P; Tigges, Stefan; Tridandapani, Srini

    2015-11-01

    Simulation has been used as an educational and assessment tool in several fields, generally involving training of physical skills. To date, simulation has found limited application in teaching and assessment of skills related to image perception and interpretation. The goal of this pilot study was to evaluate the feasibility of simulation as a tool for teaching and assessment of skills related to perception of nodules on chest radiography. This study received an exemption from the institutional review board. Subjects consisted of nonradiology health care trainees. Subjects underwent training and assessment of pulmonary nodule identification skills on chest radiographs at simulated radiology workstations. Subject performance was quantified by changes in area under the localization receiver operating characteristic curve. At the conclusion of the study, all subjects were given a questionnaire with five questions comparing learning at a simulated workstation with training using conventional materials. Statistical significance for questionnaire responses was tested using the Wilcoxon signed rank test. Subjects demonstrated statistically significant improvement in nodule identification after training at a simulated radiology workstation (change in area under the curve, 0.1079; P = .015). Subjects indicated that training on simulated radiology workstations was preferable to conventional training methods for all questions; P values for all questions were less than .01. Simulation may be a useful tool for teaching and assessment of skills related to medical image perception and interpretation. Further study is needed to determine which skills and trainee populations may be most amenable to training and assessment using simulation. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  4. Imaging Performance Analysis of Simbol-X with Simulations

    NASA Astrophysics Data System (ADS)

    Chauvin, M.; Roques, J. P.

    2009-05-01

    Simbol-X is an X-ray telescope operating in formation flight. This means that its optical performance will strongly depend on the drift of the two spacecraft and on its ability to measure these drifts for image reconstruction. We built a dynamical ray-tracing code to study the impact of these parameters on the optical performance of Simbol-X (see Chauvin et al., these proceedings). Using the simulation tool we have developed, we have conducted detailed analyses of the impact of different parameters on the imaging performance of the Simbol-X telescope.

  5. NVIDIA OptiX ray-tracing engine as a new tool for modelling medical imaging systems

    NASA Astrophysics Data System (ADS)

    Pietrzak, Jakub; Kacperski, Krzysztof; Cieślar, Marek

    2015-03-01

    The most accurate technique to model the path of X- and gamma radiation through a numerically defined object is Monte Carlo simulation, which follows single photons according to their interaction probabilities. A simplified and much faster approach, which just integrates total interaction probabilities along selected paths, is known as ray tracing. Both techniques are used in medical imaging for simulating real imaging systems and as projectors required in iterative tomographic reconstruction algorithms. These approaches are ready for massively parallel implementation, e.g. on Graphics Processing Units (GPUs), which can greatly accelerate the computation at a relatively low cost. In this paper we describe the application of the NVIDIA OptiX ray-tracing engine, popular in professional graphics and rendering applications, as a new powerful tool for X- and gamma ray tracing in medical imaging. It allows the implementation of a variety of physical interactions of rays with pixel-, mesh- or NURBS-based objects, and the recording of any required quantities, such as path integrals, interaction sites, deposited energies, and others. Using the OptiX engine we have implemented a code for rapid Monte Carlo simulations of Single Photon Emission Computed Tomography (SPECT) imaging, as well as a ray-tracing projector, which can be used in reconstruction algorithms. The engine generates efficient, scalable and optimized GPU code, ready to run on multi-GPU heterogeneous systems. We have compared the results of our simulations with the GATE package. With the OptiX engine the computation time of a Monte Carlo simulation can be reduced from days to minutes.
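    The ray-tracing "path integral" mentioned above amounts to accumulating attenuation coefficients along each ray (Beer-Lambert). A minimal CPU sketch using uniform step marching through a toy voxel phantom (production projectors use exact traversal methods such as Siddon's algorithm, and OptiX runs the equivalent per-ray program on the GPU):

```python
import numpy as np

def path_integral(volume, origin, direction, step=0.25, n_steps=800):
    """Accumulate mu * dl along a ray through a voxel volume (unit voxels)."""
    direction = direction / np.linalg.norm(direction)
    pos = origin.astype(float)
    total = 0.0
    for _ in range(n_steps):
        idx = np.floor(pos).astype(int)
        if np.all(idx >= 0) and np.all(idx < volume.shape):
            total += volume[tuple(idx)] * step
        pos = pos + step * direction
    return total

# Toy phantom: water-like cube (mu in 1/mm) with a denser insert.
vol = np.full((64, 64, 64), 0.02)
vol[24:40, 24:40, 24:40] = 0.05

mu_dl = path_integral(vol, np.array([32.0, 32.0, -10.0]),
                      np.array([0.0, 0.0, 1.0]))
print(f"Line integral = {mu_dl:.3f}, transmission = {np.exp(-mu_dl):.3f}")
```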

  6. Circuit design tool. User's manual, revision 2

    NASA Technical Reports Server (NTRS)

    Miyake, Keith M.; Smith, Donald E.

    1992-01-01

    The CAM chip design was produced in a UNIX software environment using a design tool that supports definition of digital electronic modules, composition of these modules into higher-level circuits, and event-driven simulation of these circuits. Our design tool provides an interface whose goals include straightforward but flexible primitive module definition and circuit composition, efficient simulation, and a debugging environment that facilitates design verification and alteration. The tool provides a set of primitive modules which can be composed into higher-level circuits. Each module is a C-language subroutine that uses a set of interface protocols understood by the design tool. Primitives can be altered simply by recoding their C-code image; in addition, new primitives can be added, allowing higher-level circuits to be described in C-code rather than as a composition of primitive modules -- this feature can greatly enhance the speed of simulation.

  7. Optimal design and uncertainty quantification in blood flow simulations for congenital heart disease

    NASA Astrophysics Data System (ADS)

    Marsden, Alison

    2009-11-01

    Recent work has demonstrated substantial progress in capabilities for patient-specific cardiovascular flow simulations. Recent advances include increasingly complex geometries, physiological flow conditions, and fluid-structure interaction. However, inputs to these simulations, including medical image data, catheter-derived pressures and material properties, can have significant uncertainties associated with them. For simulations to predict clinically useful and reliable output information, it is necessary to quantify the effects of input uncertainties on the outputs of interest. In addition, blood flow simulation tools can now be efficiently coupled to shape optimization algorithms for surgery design applications, and these tools should incorporate uncertainty information. We present a unified framework to systematically and efficiently account for uncertainties in simulations using adaptive stochastic collocation. In addition, we present a framework for derivative-free optimization of cardiovascular geometries, and we layer these tools to perform optimization under uncertainty. These methods are demonstrated using simulations and surgery optimization to improve hemodynamics in pediatric cardiology applications.

  8. Photomask quality evaluation using lithography simulation and precision SEM image contour data

    NASA Astrophysics Data System (ADS)

    Murakawa, Tsutomu; Fukuda, Naoki; Shida, Soichi; Iwai, Toshimichi; Matsumoto, Jun; Nakamura, Takayuki; Hagiwara, Kazuyuki; Matsushita, Shohei; Hara, Daisuke; Adamov, Anthony

    2012-11-01

    The current method of evaluating photomask quality uses spatial imaging by optical inspection tools. At the 1X-nm node this technique reaches a resolution limit, because small defects become difficult to extract. To simulate the influence of the mask error-enhancement factor (MEEF) for aggressive OPC at the 1X-nm node, wide-FOV contour data and tone information are derived from high-precision SEM images. For this purpose we have developed a new contour data extraction algorithm with sub-nanometer accuracy operating on a wide field-of-view (FOV) SEM image (for example, more than 10um x 10um square). We evaluated the MEEF influence of a high-end photomask pattern using the wide-FOV contour data of the "E3630 MVM-SEM™" and the lithography simulator "TrueMask™ DS" of D2S, Inc. As a result, we can detect "invisible" defects through their MEEF influence using the wide-FOV contour data and the lithography simulator.

  9. Near-field diffraction from amplitude diffraction gratings: theory, simulation and results

    NASA Astrophysics Data System (ADS)

    Abedin, Kazi Monowar; Rahman, S. M. Mujibur

    2017-08-01

    We describe a computer simulation method by which the complete near-field diffraction pattern of an amplitude diffraction grating can be generated. The technique uses the method of iterative Fresnel integrals to calculate and generate the diffraction images. The theoretical background, as well as the techniques used to perform the simulation, is described. The program is written in MATLAB and can be run on any ordinary PC. Examples of simulated diffraction images are presented and discussed. Images generated in the far field, where the pattern reduces to the Fraunhofer diffraction pattern, are also presented for a realistic grating and compared with the results predicted by the grating equation, which is applicable in the far field. The method can be used as a tool to teach the complex phenomenon of diffraction in classrooms.
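    A compact way to reproduce this kind of near-field pattern in Python is FFT-based propagation with the Fresnel transfer function, equivalent in spirit to the iterative Fresnel integral approach (this is not the authors' MATLAB code, and the parameters are invented):

```python
import numpy as np

N = 2048
dx = 1e-6               # 1 um sampling
wavelength = 633e-9     # HeNe wavelength
z = 0.005               # propagation distance (m), near the Talbot distance
period = 40e-6          # grating period

x = (np.arange(N) - N // 2) * dx
aperture = (np.mod(x, period) < period / 2).astype(float)  # 50% duty cycle

# Fresnel propagation via the transfer function H(fx) = exp(-i pi lambda z fx^2).
fx = np.fft.fftfreq(N, d=dx)
H = np.exp(-1j * np.pi * wavelength * z * fx**2)
field = np.fft.ifft(np.fft.fft(aperture) * H)
intensity = np.abs(field)**2

# Self-images of the grating recur at the Talbot distance z_T = 2 p^2 / lambda.
print(f"Talbot distance: {2 * period**2 / wavelength * 1e3:.1f} mm")
```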

  10. A GPU Simulation Tool for Training and Optimisation in 2D Digital X-Ray Imaging.

    PubMed

    Gallio, Elena; Rampado, Osvaldo; Gianaria, Elena; Bianchi, Silvio Diego; Ropolo, Roberto

    2015-01-01

    Conventional radiology is performed by means of digital detectors, with various types of technology and different performance in terms of efficiency and image quality. Following the arrival of a new digital detector in a radiology department, all the staff involved should adapt the procedure parameters to the properties of the detector, in order to achieve an optimal result in terms of correct diagnostic information and minimum radiation risk for the patient. The aim of this study was to develop and validate software capable of simulating a digital X-ray imaging system using graphics processing unit computing. All radiological image components were implemented in this application: an X-ray tube with primary beam, a virtual patient, noise, scatter radiation, a grid and a digital detector. Three different digital detectors (two digital radiography systems and a computed radiography system) were implemented. In order to validate the software, we carried out a quantitative comparison of simulated images of geometrical and anthropomorphic phantoms with acquired ones. In terms of average pixel values, the maximum differences were below 15%, while the noise values agreed to within 20%. The relative trends of contrast-to-noise ratio versus beam energy and intensity were well simulated. Total calculation times were below 3 seconds for clinical images with pixel sizes below 0.2 mm. The application proved to be efficient and realistic. The short calculation times and the accuracy of the results make this software a useful tool for training operators and for dose optimisation studies.

  11. Grayscale Optical Correlator Workbench

    NASA Technical Reports Server (NTRS)

    Hanan, Jay; Zhou, Hanying; Chao, Tien-Hsin

    2006-01-01

    Grayscale Optical Correlator Workbench (GOCWB) is a computer program for use in automatic target recognition (ATR). GOCWB performs ATR with an accurate simulation of a hardware grayscale optical correlator (GOC). This simulation is performed to test filters that are created in GOCWB. Thus, GOCWB can be used as a stand-alone ATR software tool or in combination with GOC hardware for building (target training), testing, and optimization of filters. The software is divided into three main parts, denoted filter, testing, and training. The training part is used for assembling training images as input to a filter. The filter part is used for combining training images into a filter and optimizing that filter. The testing part is used for testing new filters and for general simulation of GOC output. The current version of GOCWB relies on the mathematical software tools from MATLAB binaries for performing matrix operations and fast Fourier transforms. Optimization of filters is based on an algorithm, known as OT-MACH, in which variables specified by the user are parameterized and the best filter is selected on the basis of an average result for correct identification of targets in multiple test images.
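    The core operation such a correlator performs, whether optically or in simulation, is a Fourier-domain cross-correlation of a scene with a filter. A minimal sketch using a simple matched filter as a stand-in for the OT-MACH filter named above:

```python
import numpy as np

rng = np.random.default_rng(1)

target = np.zeros((16, 16))
target[4:12, 4:12] = 1.0                     # toy target template
scene = rng.normal(0.0, 0.2, (128, 128))     # noisy background
scene[60:76, 40:56] += target                # embed the target at (60, 40)

# Cross-correlation in the Fourier domain: scene spectrum times the
# conjugate of the (zero-padded) target spectrum.
F_target = np.fft.fft2(target, s=scene.shape)
F_scene = np.fft.fft2(scene)
correlation = np.real(np.fft.ifft2(F_scene * np.conj(F_target)))

peak = np.unravel_index(np.argmax(correlation), correlation.shape)
print(f"Correlation peak at {peak} (target embedded at (60, 40))")
```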

  12. Potential pitfalls of strain rate imaging: angle dependency

    NASA Technical Reports Server (NTRS)

    Castro, P. L.; Greenberg, N. L.; Drinko, J.; Garcia, M. J.; Thomas, J. D.

    2000-01-01

    Strain Rate Imaging (SRI) is a new echocardiographic technique that allows for the real-time determination of myocardial SR, which may be used for the early and accurate detection of coronary artery disease. We sought to study whether SR is affected by scan line alignment in a computer simulation and an in vivo experiment. Through the computer simulation and the in vivo experiment we generated and validated safe scanning sectors within the ultrasound scan sector, and we showed that while SRI will be an extremely valuable tool for detecting coronary artery disease, there are potential pitfalls for the unwary clinician. Only after accounting for these effects of angle dependency can clinicians utilize SRI's potential as a valuable tool in detecting coronary artery disease.

  13. Creation of an ensemble of simulated cardiac cases and a human observer study: tools for the development of numerical observers for SPECT myocardial perfusion imaging

    NASA Astrophysics Data System (ADS)

    O'Connor, J. Michael; Pretorius, P. Hendrik; Gifford, Howard C.; Licho, Robert; Joffe, Samuel; McGuiness, Matthew; Mehurg, Shannon; Zacharias, Michael; Brankov, Jovan G.

    2012-02-01

    Our previous Single Photon Emission Computed Tomography (SPECT) myocardial perfusion imaging (MPI) research explored the utility of numerical observers. We recently created two hundred and eighty simulated SPECT cardiac cases using the Dynamic MCAT (DMCAT) and SIMIND Monte Carlo tools. All simulated cases were then processed with two reconstruction methods: iterative ordered-subset expectation maximization (OSEM) and filtered back-projection (FBP). Observer study sets were assembled for both the OSEM and FBP methods. Five physicians performed an observer study on one hundred and seventy-nine images from the simulated cases. The observer task was to indicate detection of any myocardial perfusion defect using the American Society of Nuclear Cardiology (ASNC) 17-segment cardiac model and the ASNC five-scale rating guidelines. Human observer Receiver Operating Characteristic (ROC) studies established the guidelines for the subsequent evaluation of numerical model observer (NO) performance. Several NOs were formulated and their performance was compared with the human observer performance. One type of NO was based on evaluation of a cardiac polar map that had been pre-processed using a gradient-magnitude watershed segmentation algorithm. The second type of NO was also based on analysis of a cardiac polar map, but with the use of an a priori calculated average image derived from an ensemble of normal cases.

  14. Experimental evaluation of x-ray acoustic computed tomography for radiotherapy dosimetry applications.

    PubMed

    Hickling, Susannah; Lei, Hao; Hobson, Maritza; Léger, Pierre; Wang, Xueding; El Naqa, Issam

    2017-02-01

    The aim of this work was to experimentally demonstrate the feasibility of x-ray acoustic computed tomography (XACT) as a dosimetry tool in a clinical radiotherapy environment. The acoustic waves induced following a single pulse of linear accelerator irradiation in a water tank were detected with an immersion ultrasound transducer. By rotating the collimator and keeping the transducer stationary, acoustic signals at varying angles surrounding the field were detected and reconstructed to form an XACT image. Simulated XACT images were obtained using a previously developed simulation workflow. Profiles extracted from experimental and simulated XACT images were compared to profiles measured with an ion chamber. A variety of radiation field sizes and shapes were investigated. XACT images resembling the geometry of the delivered radiation field were obtained for fields ranging from simple squares to more complex shapes. When comparing profiles extracted from simulated and experimental XACT images of a 4 cm × 4 cm field, 97% of points were found to pass a 3%/3 mm gamma test. Agreement between simulated and experimental XACT images worsened when comparing fields with fine details. Profiles extracted from experimental XACT images were compared to profiles obtained through clinical ion chamber measurements, confirming that the intensity of XACT images is related to deposited radiation dose. Seventy-seven percent of the points in a profile extracted from an experimental XACT image of a 4 cm × 4 cm field passed a 7%/4 mm gamma test when compared to an ion chamber measured profile. In a complicated puzzle-piece shaped field, 86% of the points in an XACT extracted profile passed a 7%/4 mm gamma test. XACT images with intensity related to the spatial distribution of deposited dose in a water tank were formed for a variety of field sizes and shapes. XACT has the potential to be a useful tool for absolute, relative and in vivo dosimetry. © 2016 American Association of Physicists in Medicine.
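    The gamma test quoted above combines a dose-difference criterion with a distance-to-agreement criterion. A minimal 1D sketch of the pass-rate computation (illustrative profiles, not the study's data):

```python
import numpy as np

def gamma_pass_rate(x, dose_ref, dose_eval, dd=0.03, dta=3.0):
    """1D global gamma: dd = dose criterion (fraction of max), dta in mm."""
    d_max = dose_ref.max()
    passed = []
    for xi, di in zip(x, dose_ref):
        # gamma^2 = min over eval points of (dr/dta)^2 + (dD/(dd*Dmax))^2
        gamma_sq = ((x - xi) / dta) ** 2 + ((dose_eval - di) / (dd * d_max)) ** 2
        passed.append(np.sqrt(gamma_sq.min()) <= 1.0)
    return float(np.mean(passed))

x = np.linspace(-40.0, 40.0, 401)                  # position (mm)
ref = np.where(np.abs(x) < 20.0, 1.0, 0.02)        # idealized 4 cm field profile
evl = np.where(np.abs(x - 0.5) < 20.0, 0.98, 0.02) # shifted, rescaled copy

print(f"Gamma pass rate (3%/3 mm): {100.0 * gamma_pass_rate(x, ref, evl):.1f}%")
```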

  15. Design and validation of a mathematical breast phantom for contrast-enhanced digital mammography

    NASA Astrophysics Data System (ADS)

    Hill, Melissa L.; Mainprize, James G.; Jong, Roberta A.; Yaffe, Martin J.

    2011-03-01

    In contrast-enhanced digital mammography (CEDM) an iodinated contrast agent is employed to increase lesion contrast and to provide tissue functional information. Here, we present the details of a software phantom that can be used as a tool for the simulation of CEDM images, and compare the degree of anatomic noise present in images simulated using the phantom to that associated with breast parenchyma in clinical CEDM images. Such a phantom could be useful for multiparametric investigations including characterization of CEDM imaging performance and system optimization. The phantom has a realistic mammographic appearance based on a clustered lumpy background and models contrast agent uptake according to breast tissue physiology. Fifty unique phantoms were generated and used to simulate regions of interest (ROI) of pre-contrast images and logarithmically subtracted CEDM images using monoenergetic ray tracing. Power law exponents, β, were used as a measure of anatomic noise and were determined using a linear least-squares fit to log-log plots of the square of the modulus of radially averaged image power spectra versus spatial frequency. The power spectra for ROI selected from regions of normal parenchyma in 10 pairs of clinical CEDM pre-contrast and subtracted images were also measured for comparison with the simulated images. There was good agreement between the measured β in the simulated CEDM images and the clinical images. The values of β were consistently lower for the logarithmically subtracted CEDM images compared to the pre-contrast images, indicating that the subtraction process reduced anatomical noise.
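    The anatomic-noise measurement described here can be sketched compactly: radially average the 2D power spectrum and fit the exponent with a linear least-squares fit in log-log space. A minimal Python illustration on synthetic power-law noise (not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 256

# Synthesize ~1/f^3 power-law noise as a stand-in for breast parenchyma.
f = np.sqrt(np.add.outer(np.fft.fftfreq(N)**2, np.fft.fftfreq(N)**2))
f[0, 0] = 1.0 / N                                  # avoid division by zero at DC
white = np.fft.fft2(rng.normal(size=(N, N)))
img = np.real(np.fft.ifft2(white / f**1.5))        # amplitude ~ 1/f^1.5

# Radially averaged power spectrum.
ps = np.abs(np.fft.fftshift(np.fft.fft2(img)))**2
fy, fx = np.indices((N, N)) - N // 2
r = np.hypot(fx, fy).astype(int)
radial = np.bincount(r.ravel(), weights=ps.ravel()) / np.bincount(r.ravel())

# Linear least-squares fit in log-log space, skipping DC, below Nyquist.
freqs = np.arange(1, N // 2)
slope, _ = np.polyfit(np.log(freqs), np.log(radial[1:N // 2]), 1)
print(f"Fitted power-law exponent beta = {-slope:.2f}")    # expect ~3
```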

  16. Onboard utilization of ground control points for image correction. Volume 3: Ground control point simulation software design

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The software developed to simulate the ground control point navigation system is described. The Ground Control Point Simulation Program (GCPSIM) is designed as an analysis tool to predict the performance of the navigation system. The system consists of two star trackers, a global positioning system receiver, a gyro package, and a landmark tracker.

  17. Teaching strategies for using projected images to develop conceptual understanding: Exploring discussion practices in computer simulation and static image-based lessons

    NASA Astrophysics Data System (ADS)

    Price, Norman T.

    The availability and sophistication of visual display images, such as simulations, for use in science classrooms has increased exponentially; however, it can be difficult for teachers to use these images to encourage and engage active student thinking. There is a need to describe flexible discussion strategies that use visual media to engage active thinking. This mixed-methods study analyzes teacher behavior in lessons using visual media about the particulate model of matter that were taught by three experienced middle school teachers. Each teacher taught half of their students with lessons using static overheads, and taught the other half with lessons using a projected dynamic simulation. The quantitative analysis of pre-post data found significant gain differences between the two image-mode conditions, suggesting that the students who were assigned to the simulation condition learned more than students who were assigned to the overhead condition. Open coding was used to identify a set of eight image-based teaching strategies that teachers were using with visual displays. Fixed codes for this set of image-based discussion strategies were then developed and used to analyze video and transcripts of whole-class discussions from 12 lessons. The image-based discussion strategies were refined over time in a set of three in-depth 2x2 comparative case studies of two teachers teaching one lesson topic with two image display modes. The comparative case study data suggest that the simulation mode may have offered greater affordances than the overhead mode for planning and enacting discussions. The 12 discussions were also coded for overall teacher-student interaction patterns, such as presentation, IRE, and IRF. When teachers moved during a lesson from using no image to using either image mode, some teachers asked more questions when the image was displayed while others asked many fewer questions. The changes in teacher-student interaction patterns suggest that teachers vary in whether they treat the displayed image as a "tool-for-telling" or a "tool-for-asking." The study attempts to provide new descriptions of strategies teachers use to orchestrate image-based discussions designed to promote student engagement and reasoning in lessons with conceptual goals.

  18. GPU-Based Simulation of Ultrasound Imaging Artifacts for Cryosurgery Training.

    PubMed

    Keelan, Robert; Shimada, Kenji; Rabin, Yoed

    2017-02-01

    This study presents an efficient computational technique for the simulation of ultrasound imaging artifacts associated with cryosurgery, based on nonlinear ray tracing. This study is part of an ongoing effort to develop computerized training tools for cryosurgery, with prostate cryosurgery as a development model. The capability of performing virtual cryosurgical procedures on a variety of test cases is essential for effective surgical training. Simulated ultrasound imaging artifacts include reverberation and reflection of the cryoprobes in the unfrozen tissue, reflections caused by the freezing front, shadowing caused by the frozen region, and tissue property changes in repeated freeze-thaw cycles. The simulated artifacts appear to preserve the key features observed in a clinical setting. This study displays an example of how training may benefit from toggling between the undisturbed ultrasound image, the simulated temperature field, the simulated imaging artifacts, and an augmented hybrid presentation of the temperature field superimposed on the ultrasound image. The proposed method is demonstrated on a graphics processing unit at 100 frames per second, on a mid-range personal workstation, at two orders of magnitude faster than a typical cryoprocedure. This performance is based on computation with C++ accelerated massive parallelism and its interoperability with the DirectX rendering application programming interface.

  20. A Quantitative Three-Dimensional Image Analysis Tool for Maximal Acquisition of Spatial Heterogeneity Data.

    PubMed

    Allenby, Mark C; Misener, Ruth; Panoskaltsis, Nicki; Mantalaris, Athanasios

    2017-02-01

    Three-dimensional (3D) imaging techniques provide spatial insight into environmental and cellular interactions and are implemented in various fields, including tissue engineering, but have been restricted by limited quantification tools that misrepresent or underutilize the cellular phenomena captured. This study develops image postprocessing algorithms pairing complex Euclidean metrics with Monte Carlo simulations to quantitatively assess cell and microenvironment spatial distributions while utilizing, for the first time, the entire 3D image captured. Although current methods only analyze a central fraction of presented confocal microscopy images, the proposed algorithms can utilize 210% more cells to calculate 3D spatial distributions that can span a 23-fold longer distance. These algorithms seek to leverage the high sample cost of 3D tissue imaging techniques by extracting maximal quantitative data throughout the captured image.

  1. Comparison of simulated and measured spectra from an X-ray tube for the energies between 20 and 35 keV

    NASA Astrophysics Data System (ADS)

    Yücel, M.; Emirhan, E.; Bayrak, A.; Ozben, C. S.; Yücel, E. Barlas

    2015-11-01

    The design and production of a simple, low-cost X-ray imaging system that can be used for light industrial applications were targeted in the Nuclear Physics Laboratory of Istanbul Technical University. In this study, the production, transmission and detection of X-rays were simulated for the proposed imaging device. An OX/70-P dental tube was used, and the X-ray spectra simulated with Geant4 were validated by comparison with X-ray spectra measured between 20 and 35 keV. The relative detection efficiency of the detector was also determined, to confirm the physics processes used in the simulations. Various time-optimization techniques were applied to reduce the simulation time.

  2. Portfolio: a prototype workstation for development and evaluation of tools for analysis and management of digital portal images.

    PubMed

    Boxwala, A A; Chaney, E L; Fritsch, D S; Friedman, C P; Rosenman, J G

    1998-09-01

    The purpose of this investigation was to design and implement a prototype physician workstation, called PortFolio, as a platform for developing and evaluating, by means of controlled observer studies, user interfaces and interactive tools for analyzing and managing digital portal images. The first observer study was designed to measure physician acceptance of workstation technology, as an alternative to a view box, for inspection and analysis of portal images for detection of treatment setup errors. The observer study was conducted in a controlled experimental setting to evaluate physician acceptance of the prototype workstation technology exemplified by PortFolio. PortFolio incorporates a windows user interface, a compact kit of carefully selected image analysis tools, and an object-oriented data base infrastructure. The kit evaluated in the observer study included tools for contrast enhancement, registration, and multimodal image visualization. Acceptance was measured in the context of performing portal image analysis in a structured protocol designed to simulate clinical practice. The acceptability and usage patterns were measured from semistructured questionnaires and logs of user interactions. Radiation oncologists, the subjects for this study, perceived the tools in PortFolio to be acceptable clinical aids. Concerns were expressed regarding user efficiency, particularly with respect to the image registration tools. The results of our observer study indicate that workstation technology is acceptable to radiation oncologists as an alternative to a view box for clinical detection of setup errors from digital portal images. Improvements in implementation, including more tools and a greater degree of automation in the image analysis tasks, are needed to make PortFolio more clinically practical.

  3. Research and Analysis of Image Processing Technologies Based on DotNet Framework

    NASA Astrophysics Data System (ADS)

    Ya-Lin, Song; Chen-Xi, Bai

    Microsoft .NET is one of the most popular program development tools. This paper presents a detailed analysis of the advantages and disadvantages of several .NET image-processing techniques, applying the same algorithm in each programming experiment. The results show that the two most efficient methods are unsafe pointers and Direct3D; Direct3D is also suited to 3D simulation development, while the other techniques are useful in some fields but are too inefficient for real-time processing. The experimental results reported here should help projects involving image processing and simulation based on the .NET framework, and the approach has strong practical applicability.

  4. From printed color to image appearance: tool for advertising assessment

    NASA Astrophysics Data System (ADS)

    Bonanomi, Cristian; Marini, Daniele; Rizzi, Alessandro

    2012-07-01

    We present a methodology to calculate the color appearance of advertising billboards set in indoor and outdoor environments, printed on different types of paper support and viewed under different illuminations. The aim is to simulate the visual appearance of an image printed on a specific support, observed in a certain context and illuminated with a specific light source. Knowing in advance the visual rendering of an image under different conditions can avoid problems related to its visualization. The proposed method applies a sequence of transformations to convert a four-channel (CMYK) image into a spectral one, taking the paper support into account; it then simulates the chosen illumination and finally computes an estimate of the appearance.

  5. MO-FG-209-05: Towards a Feature-Based Anthropomorphic Model Observer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avanaki, A.

    2016-06-15

    This symposium will review recent advances in the simulation methods for evaluation of novel breast imaging systems – the subject of AAPM Task Group TG234. Our focus will be on the various approaches to development and validation of software anthropomorphic phantoms and their use in the statistical assessment of novel imaging systems using such phantoms along with computational models for the x-ray image formation process. Due to the dynamic development and complex design of modern medical imaging systems, the simulation of anatomical structures, image acquisition modalities, and the image perception and analysis offers substantial benefits of reduced cost, duration, and radiation exposure, as well as the known ground-truth and wide variability in simulated anatomies. For these reasons, Virtual Clinical Trials (VCTs) have been increasingly accepted as a viable tool for preclinical assessment of x-ray and other breast imaging methods. Activities of TG234 have encompassed the optimization of protocols for simulation studies, including phantom specifications, the simulated data representation, models of the imaging process, and statistical assessment of simulated images. The symposium will discuss the state-of-the-science of VCTs for novel breast imaging systems, emphasizing recent developments and future directions. Presentations will discuss virtual phantoms for intermodality breast imaging performance comparisons, extension of the breast anatomy simulation to the cellular level, optimized integration of the simulated imaging chain, and novel directions in observer-model design. Learning Objectives: Review novel results in developing and applying virtual phantoms for inter-modality breast imaging performance comparisons; Discuss the efforts to extend the computer simulation of breast anatomy and pathology to the cellular level; Summarize the state of the science in optimized integration of modules in the simulated imaging chain; Compare novel directions in the design of observer models for task-based validation of imaging systems. PB: Research funding support from the NIH, NSF, and Komen for the Cure; NIH-funded collaboration with Barco, Inc. and Hologic, Inc.; Consultant to Delaware State Univ. and NCCPM, UK. AA: Employed at Barco Healthcare. P. Bakic, NIH: (NIGMS P20 #GM103446, NCI R01 #CA154444); M. Das, NIH research grants.

  7. Feasibility assessment of the interactive use of a Monte Carlo algorithm in treatment planning for intraoperative electron radiation therapy

    NASA Astrophysics Data System (ADS)

    Guerra, Pedro; Udías, José M.; Herranz, Elena; Santos-Miranda, Juan Antonio; Herraiz, Joaquín L.; Valdivieso, Manlio F.; Rodríguez, Raúl; Calama, Juan A.; Pascau, Javier; Calvo, Felipe A.; Illana, Carlos; Ledesma-Carbayo, María J.; Santos, Andrés

    2014-12-01

    This work analysed the feasibility of using a fast, customized Monte Carlo (MC) method to perform accurate computation of dose distributions during pre- and intraplanning of intraoperative electron radiation therapy (IOERT) procedures. The MC method that was implemented, which has been integrated into a specific innovative simulation and planning tool, is able to simulate the fate of thousands of particles per second, and it was the aim of this work to determine the level of interactivity that could be achieved. The planning workflow enabled calibration of the imaging and treatment equipment, as well as manipulation of the surgical frame and insertion of the protection shields around the organs at risk and other beam modifiers. In this way, the multidisciplinary team involved in IOERT has all the tools necessary to perform complex MC dosage simulations adapted to their equipment in an efficient and transparent way. To assess the accuracy and reliability of this MC technique, dose distributions for a monoenergetic source were compared with those obtained using a general-purpose software package used widely in medical physics applications. Once accuracy of the underlying simulator was confirmed, a clinical accelerator was modelled and experimental measurements in water were conducted. A comparison was made with the output from the simulator to identify the conditions under which accurate dose estimations could be obtained in less than 3 min, which is the threshold imposed to allow for interactive use of the tool in treatment planning. Finally, a clinically relevant scenario, namely early-stage breast cancer treatment, was simulated with pre- and intraoperative volumes to verify that it was feasible to use the MC tool intraoperatively and to adjust dose delivery based on the simulation output, without compromising accuracy. The workflow provided a satisfactory model of the treatment head and the imaging system, enabling proper configuration of the treatment planning system and providing good accuracy in the dosage simulation.

  8. Oxygen octahedra picker: A software tool to extract quantitative information from STEM images.

    PubMed

    Wang, Yi; Salzberger, Ute; Sigle, Wilfried; Eren Suyolcu, Y; van Aken, Peter A

    2016-09-01

    In perovskite-oxide-based materials and heterostructures there are often strong correlations between oxygen octahedral distortions and functionality. Thus, an atomistic understanding of the octahedral distortion, which requires accurate measurements of atomic column positions, will greatly help in engineering their properties. Here, we report the development of a software tool to extract quantitative information about the lattice and about BO6 octahedral distortions from STEM images. Center-of-mass and 2D Gaussian fitting methods are implemented to locate the positions of individual atom columns. The precision of atomic column distance measurements is evaluated on both simulated and experimental images. The application of the software tool is demonstrated using practical examples. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
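    The 2D Gaussian fitting step can be sketched in a few lines: fit an elementary Gaussian peak model to a small patch around each atom column to obtain a sub-pixel column position. A minimal illustration on synthetic data (not the published tool's code):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, offset):
    """Elementary isotropic 2D Gaussian peak model."""
    x, y = coords
    return amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2)) + offset

# Synthetic 15x15 patch: one atom column at (7.3, 6.8) plus noise.
rng = np.random.default_rng(3)
y, x = np.mgrid[0:15, 0:15]
patch = gauss2d((x, y), 100.0, 7.3, 6.8, 1.6, 10.0) + rng.normal(0.0, 2.0, x.shape)

p0 = [patch.max() - patch.min(), 7.0, 7.0, 2.0, patch.min()]   # initial guess
popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), patch.ravel(), p0=p0)
print(f"Fitted column position: ({popt[1]:.2f}, {popt[2]:.2f})")
```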

  9. XCAT/DRASIM: a realistic CT/human-model simulation package

    NASA Astrophysics Data System (ADS)

    Fung, George S. K.; Stierstorfer, Karl; Segars, W. Paul; Taguchi, Katsuyuki; Flohr, Thomas G.; Tsui, Benjamin M. W.

    2011-03-01

    The aim of this research is to develop a complete CT/human-model simulation package by integrating the 4D eXtended CArdiac-Torso (XCAT) phantom, a computer-generated NURBS-surface-based phantom that provides a realistic model of human anatomy and of respiratory and cardiac motions, and the DRASIM (Siemens Healthcare) CT-data simulation program. Unlike other CT simulation tools, which are based on simple mathematical primitives or voxelized phantoms, this new simulation package has the advantage of utilizing a realistic model of human anatomy and physiological motions without voxelization and with accurate modeling of the characteristics of clinical Siemens CT systems. First, we incorporated the 4D XCAT anatomy and motion models into DRASIM by implementing a new library which consists of functions to read in the NURBS surfaces of anatomical objects and their overlapping order and material properties in the XCAT phantom. Second, we incorporated an efficient ray-tracing algorithm for line-integral calculation in DRASIM by computing the intersection points of the rays cast from the x-ray source to the detector elements through the NURBS surfaces of the multiple XCAT anatomical objects along the ray paths. Third, we evaluated the integrated simulation package by performing a number of sample simulations of multiple x-ray projections from different views, followed by image reconstruction. The initial simulation results were found to be promising by qualitative evaluation. In conclusion, we have developed a unique CT/human-model simulation package which has great potential as a tool in the design and optimization of CT scanners and in the development of scanning protocols and image reconstruction methods for improving CT image quality and reducing radiation dose.

  10. BlochSolver: A GPU-optimized fast 3D MRI simulator for experimentally compatible pulse sequences

    NASA Astrophysics Data System (ADS)

    Kose, Ryoichi; Kose, Katsumi

    2017-08-01

    A magnetic resonance imaging (MRI) simulator, which reproduces MRI experiments using computers, has been developed using two graphic-processor-unit (GPU) boards (GTX 1080). The MRI simulator was developed to run according to pulse sequences used in experiments. Experiments and simulations were performed to demonstrate the usefulness of the MRI simulator for three types of pulse sequences, namely, three-dimensional (3D) gradient-echo, 3D radio-frequency spoiled gradient-echo, and gradient-echo multislice with practical matrix sizes. The results demonstrated that the calculation speed using two GPU boards was typically about 7 TFLOPS and about 14 times faster than the calculation speed using CPUs (two 18-core Xeons). We also found that MR images acquired by experiment could be reproduced using an appropriate number of subvoxels, and that 3D isotropic and two-dimensional multislice imaging experiments for practical matrix sizes could be simulated using the MRI simulator. Therefore, we concluded that such powerful MRI simulators are expected to become an indispensable tool for MRI research and development.
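
    What such a simulator evaluates for every subvoxel at every sequence step is essentially the Bloch equations. The sketch below runs free precession with relaxation for a set of off-resonance isochromats after an ideal 90° pulse, with assumed tissue constants; it omits gradients, RF pulse shapes and the GPU parallelism of the actual simulator.

      # Minimal Bloch-equation free-precession sketch (illustrative).
      import numpy as np

      T1, T2 = 1000e-3, 80e-3                   # s; assumed tissue values
      dt, n_steps = 0.5e-3, 160                 # 80 ms of free precession
      off_res = np.linspace(-40, 40, 201) * 2 * np.pi   # offsets, rad/s

      mx = np.zeros_like(off_res)               # after an ideal 90x pulse,
      my = np.ones_like(off_res)                # magnetization lies along +y
      mz = np.zeros_like(off_res)
      signal = []

      e1, e2 = np.exp(-dt / T1), np.exp(-dt / T2)
      for _ in range(n_steps):
          phi = off_res * dt                    # precession this step
          mx, my = (mx * np.cos(phi) - my * np.sin(phi),
                    mx * np.sin(phi) + my * np.cos(phi))
          mx, my = mx * e2, my * e2             # T2 decay
          mz = 1.0 + (mz - 1.0) * e1            # T1 recovery to equilibrium
          signal.append(abs(np.mean(mx + 1j * my)))

      print("net signal at t = 40 ms: %.3f" % signal[79])

    A full simulator applies exactly this kind of rotate-and-relax update per subvoxel, driven by the experimental pulse sequence, which is why the workload parallelizes so well on GPUs.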

  11. Hyperspectral imaging simulation of object under sea-sky background

    NASA Astrophysics Data System (ADS)

    Wang, Biao; Lin, Jia-xuan; Gao, Wei; Yue, Hui

    2016-10-01

    Remote sensing image simulation plays an important role in spaceborne/airborne payload demonstration and algorithm development. Hyperspectral imaging is valuable in marine monitoring and in search and rescue. To meet the demand for spectral imaging of objects in complex sea scenes, a physics-based method for simulating the spectral image of an object in a sea scene is proposed. By developing an imaging simulation model that accounts for the object, background, atmospheric conditions and sensor, it is possible to examine the influence of wind speed, atmospheric conditions and other environmental factors on spectral image quality in complex sea scenes. Firstly, the sea scattering model is established based on the Phillips sea spectral model, rough-surface scattering theory and the volume scattering characteristics of water. Measured bidirectional reflectance distribution function (BRDF) data of the objects are fitted to a statistical model. MODTRAN software is used to obtain the solar illumination on the sea, the sky brightness, the atmospheric transmittance from sea to sensor and the atmospheric backscattered radiance, and a Monte Carlo ray-tracing method is used to calculate the composite scattering of the object and sea surface and the resulting spectral image. Finally, the object spectrum is obtained by applying the spatial transformation and radiometric degradation and adding noise. The model connects the spectral image with the environmental parameters, the object parameters and the sensor parameters, providing a tool for payload demonstration and algorithm development.

  12. Creation and Validation of a Simulator for Neonatal Brain Ultrasonography: A Pilot Study.

    PubMed

    Tsai, Andy; Barnewolt, Carol E; Prahbu, Sanjay P; Yonekura, Reimi; Hosmer, Andrew; Schulz, Noah E; Weinstock, Peter H

    2017-01-01

    Historically, skills training in performing brain ultrasonography has been limited to hours of scanning infants for lack of adequate synthetic models or alternatives. The aim of this study was to create a simulator and determine its utility as an educational tool in teaching the skills that can be used in performing brain ultrasonography on infants. A brain ultrasonography simulator was created using a combination of multi-modality imaging, three-dimensional printing, material and acoustic engineering, and sculpting and molding. Radiology residents participated prior to their pediatric rotation. The study included (1) an initial questionnaire and resident creation of three coronal images using the simulator; (2) brain ultrasonography lecture; (3) hands-on simulator practice; and (4) a follow-up questionnaire and re-creation of the same three coronal images on the simulator. A blinded radiologist scored the quality of the pre- and post-training images using metrics including symmetry of the images and inclusion of predetermined landmarks. Wilcoxon rank-sum test was used to compare pre- and post-training questionnaire rankings and image quality scores. Ten residents participated in the study. Analysis of pre- and post-training rankings showed improvements in technical knowledge and confidence, and reduction in anxiety in performing brain ultrasonography. Objective measures of image quality likewise improved. Mean reported value score for simulator training was high across participants who reported perceived improvements in scanning skills and enjoyment from simulator use, with interest in additional practice on the simulator and recommendations for its use. This pilot study supports the use of a simulator in teaching radiology residents the skills that can be used to perform brain ultrasonography. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  13. Simulation tools for two-dimensional experiments in x-ray computed tomography using the FORBILD head phantom

    PubMed Central

    Yu, Zhicong; Noo, Frédéric; Dennerlein, Frank; Wunderlich, Adam; Lauritsch, Günter; Hornegger, Joachim

    2012-01-01

    Mathematical phantoms are essential for the development and early-stage evaluation of image reconstruction algorithms in x-ray computed tomography (CT). This note offers tools for computer simulations using a two-dimensional (2D) phantom that models the central axial slice through the FORBILD head phantom. Introduced in 1999, in response to a need for a more robust test, the FORBILD head phantom is now seen by many as the gold standard. However, the simple Shepp-Logan phantom is still heavily used by researchers working on 2D image reconstruction. Universal acceptance of the FORBILD head phantom may have been prevented by its significantly higher complexity: software that allows computer simulations with the Shepp-Logan phantom is not readily applicable to the FORBILD head phantom. The tools offered here address this problem. They are designed for use with Matlab®, as well as open-source variants, such as FreeMat and Octave, which are all widely used in both academia and industry. To get started, the interested user can simply copy and paste the codes from this PDF document into Matlab® M-files. PMID:22713335
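
    Projection simulation with such analytic phantoms reduces to exact chord lengths of rays through ellipses. The Python sketch below (an analogue of, not an excerpt from, the published Matlab® tools) computes one parallel-beam projection of a toy two-ellipse phantom; the ellipse parameters are illustrative, not the FORBILD values, and rotated ellipses would only require rotating the ray first.

      # Exact chord length of a line through an axis-aligned ellipse.
      import numpy as np

      def ellipse_chord(p0, d, cx, cy, a, b):
          """Chord of the line p0 + t*d (|d| = 1) through the ellipse
          ((x-cx)/a)^2 + ((y-cy)/b)^2 = 1."""
          q = np.array([(p0[0] - cx) / a, (p0[1] - cy) / b])
          v = np.array([d[0] / a, d[1] / b])
          A, B, C = v @ v, 2.0 * (q @ v), q @ q - 1.0
          disc = B * B - 4.0 * A * C
          return np.sqrt(disc) / A if disc > 0.0 else 0.0

      theta = np.deg2rad(30.0)
      n = np.array([np.cos(theta), np.sin(theta)])    # detector axis
      d = np.array([-np.sin(theta), np.cos(theta)])   # ray direction
      ellipses = [(0.0, 0.0, 90.0, 70.0, 1.0),        # cx, cy, a, b, density
                  (20.0, 10.0, 15.0, 10.0, 0.5)]

      for s in (-40.0, 0.0, 40.0):                    # detector offsets (mm)
          val = sum(rho * ellipse_chord(s * n, d, cx, cy, a, b)
                    for cx, cy, a, b, rho in ellipses)
          print("s = %+5.1f mm -> projection = %.2f" % (s, val))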

  14. Simulation tools for two-dimensional experiments in x-ray computed tomography using the FORBILD head phantom.

    PubMed

    Yu, Zhicong; Noo, Frédéric; Dennerlein, Frank; Wunderlich, Adam; Lauritsch, Günter; Hornegger, Joachim

    2012-07-07

    Mathematical phantoms are essential for the development and early stage evaluation of image reconstruction algorithms in x-ray computed tomography (CT). This note offers tools for computer simulations using a two-dimensional (2D) phantom that models the central axial slice through the FORBILD head phantom. Introduced in 1999, in response to a need for a more robust test, the FORBILD head phantom is now seen by many as the gold standard. However, the simple Shepp-Logan phantom is still heavily used by researchers working on 2D image reconstruction. Universal acceptance of the FORBILD head phantom may have been prevented by its significantly higher complexity: software that allows computer simulations with the Shepp-Logan phantom is not readily applicable to the FORBILD head phantom. The tools offered here address this problem. They are designed for use with Matlab®, as well as open-source variants, such as FreeMat and Octave, which are all widely used in both academia and industry. To get started, the interested user can simply copy and paste the codes from this PDF document into Matlab® M-files.

  15. Modeling the Transfer Function for the Dark Energy Survey

    DOE PAGES

    Chang, C.

    2015-03-04

    We present a forward-modeling simulation framework designed to model the data products from the Dark Energy Survey (DES). This forward-model process can be thought of as a transfer function—a mapping from cosmological/astronomical signals to the final data products used by the scientists. Using output from cosmological simulations (the Blind Cosmology Challenge), we generate simulated images (the Ultra Fast Image Simulator) and catalogs representative of the DES data. In this work we demonstrate the framework by simulating the 244 deg² coadd images and catalogs in five bands for the DES Science Verification data. The simulation output is compared with the corresponding data to show that major characteristics of the images and catalogs can be captured. We also point out several directions for future improvements. Two practical examples—star-galaxy classification and proximity effects on object detection—are then used to illustrate how one can use the simulations to address systematics issues in data analysis. With a clear understanding of the simplifications in our model, we show that one can use the simulations side-by-side with data products to interpret the measurements. This forward-modeling approach is generally applicable to other upcoming and future surveys. It provides a powerful tool for systematics studies that is sufficiently realistic and highly controllable.

  16. Assessment of contrast enhanced respiration managed cone-beam CT for image guided radiotherapy of intrahepatic tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Nikolaj K. G., E-mail: nkyj@regionsjaelland.dk; Stewart, Errol; Imaging Research Lab, Robarts Research Institute, London, Ontario N6A 5B7

    2014-05-15

    Purpose: Contrast enhancement and respiration management are widely used during image acquisition for radiotherapy treatment planning of liver tumors, along with respiration management at the treatment unit. However, neither respiration management nor intravenous contrast is commonly used during cone-beam CT (CBCT) image acquisition for alignment prior to radiotherapy. In this study, the authors investigate the potential gains of injecting an iodinated contrast agent in combination with respiration management during CBCT acquisition for liver tumor radiotherapy. Methods: Five rabbits with implanted liver tumors were subjected to CBCT with and without motion management and contrast injection. The acquired CBCT images were registered to the planning CT to determine alignment accuracy and dosimetric impact. The authors developed a simulation tool for simulating contrast-enhanced CBCT images from dynamic contrast-enhanced CT imaging (DCE-CT) to determine optimal contrast injection protocols. The tool was validated against contrast-enhanced CBCT of the rabbit subjects and was used for five human patients diagnosed with hepatocellular carcinoma. Results: In the rabbit experiment, when neither motion management nor contrast was used, tumor centroid misalignment between planning image and CBCT was 9.2 mm. This was reduced to 2.8 mm when both techniques were employed. Tumors were not visualized in clinical CBCT images of human subjects. Simulated contrast-enhanced CBCT was found to improve tumor contrast in all subjects. Different patients were found to require different contrast injections to maximize tumor contrast. Conclusions: Based on the authors' animal study, respiration-managed contrast-enhanced CBCT improves IGRT significantly. Contrast-enhanced CBCT benefits from patient-specific tracer kinetics determined from DCE-CT.

  17. Design and development of a non-rigid phantom for the quantitative evaluation of DIR-based mapping of simulated pulmonary ventilation.

    PubMed

    Miyakawa, Shin; Tachibana, Hidenobu; Moriya, Shunsuke; Kurosawa, Tomoyuki; Nishio, Teiji; Sato, Masanori

    2018-05-28

    The validation of deformable image registration (DIR)-based pulmonary ventilation mapping is time-consuming and prone to inaccuracies, and is also affected by deformation parameters. In this study, we developed a non-rigid phantom as a quality assurance (QA) tool that simulates ventilation to evaluate DIR-based images quantitatively. The phantom consists of an acrylic cylinder filled with polyurethane foam designed to simulate pulmonic alveoli. A polyurethane membrane is attached to the inferior end of the phantom to simulate the diaphragm. In addition, tracheobronchial-tree-shaped polyurethane tubes are inserted through the foam and converge outside the phantom to simulate the trachea. Solid polyurethane is also used to model arteries, which closely follow the model airways. Two three-dimensional CT scans were performed during exhalation and inhalation phases using xenon (Xe) gas as the inhaled contrast agent. The exhalation 3D-CT image was deformed to the inhalation 3D-CT image using our in-house program based on the NiftyReg open-source package. The target registration error (TRE) between the two images was calculated for 16 landmarks located in the simulated lung volume. The DIR-based ventilation image was generated using Jacobian determinant (JD) metrics. Subsequently, differences in the Hounsfield unit (HU) values between the two images were measured, and the correlation coefficient between the JD and HU differences was calculated. In addition, three 4D-CT scans were performed to evaluate the reproducibility of the phantom motion and Xe gas distribution. The phantom exhibited a variety of displacements for each landmark (range: 1-20 mm). The reproducibility analysis indicated that the location differences were < 1 mm for all landmarks, and the HU variation in the Xe gas distribution was close to zero. The mean TRE in the evaluation of spatial accuracy of the DIR software was 1.47 ± 0.71 mm (maximum: 2.6 mm). The relationship between the JD and HU differences showed a strong correlation (R = -0.71) for the DIR software. The phantom implemented new features, namely deformation and simulated ventilation. To assess the accuracy of DIR-based mapping of simulated pulmonary ventilation, the phantom allows for simulation of Xe gas wash-in and wash-out. The phantom may be an effective QA tool, because the DIR algorithm can be quickly changed and its accuracy evaluated with a high degree of precision. This article is protected by copyright. All rights reserved.
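
    The JD metric itself is compact: for a DIR displacement field u, the local volume change is det(I + ∇u), with values above 1 indicating expansion. The sketch below evaluates it on a synthetic field; the grid size, the voxel units and the field itself are assumptions for illustration, not the study's data.

      # Jacobian-determinant map from a 3D displacement field.
      import numpy as np

      shape = (64, 64, 64)
      z, y, x = np.meshgrid(*[np.linspace(-1, 1, s) for s in shape],
                            indexing="ij")
      uz = 0.05 * (1 + z)           # synthetic expansion, strongest at z = 1
      uy = np.zeros(shape)
      ux = np.zeros(shape)

      # grads[i][j] = d(u_i)/d(axis_j), spacing in voxels.
      grads = [np.gradient(u) for u in (uz, uy, ux)]
      J = np.zeros(shape + (3, 3))
      for i in range(3):
          for j in range(3):
              J[..., i, j] = (i == j) + grads[i][j]

      jd = np.linalg.det(J)         # > 1: expansion; < 1: contraction
      print("mean JD: %.4f, max JD: %.4f" % (jd.mean(), jd.max()))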

  18. TestSTORM: Simulator for optimizing sample labeling and image acquisition in localization based super-resolution microscopy

    PubMed Central

    Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E.; Kaminski, Clemens F.; Szabó, Gábor; Erdélyi, Miklós

    2014-01-01

    Localization-based super-resolution microscopy image quality depends on several factors, such as dye choice and labeling strategy, microscope quality, user-defined parameters such as frame rate and number of frames, and the image-processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive, so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from four different structures with specific patterns, dye and acquisition parameters. Example results are shown, and the results for the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software development. PMID:24688813
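
    In the same spirit, a toy localization-microscopy stack can be generated in a few lines: emitters labeling a ring blink stochastically and are rendered as Gaussian PSF spots with Poisson noise. Every labeling and camera parameter below is invented for the example and does not reflect TestSTORM's actual models.

      # Toy blinking-emitter frame generator (illustrative parameters).
      import numpy as np

      rng = np.random.default_rng(42)
      FOV, PIX = 64, 100.0                  # frame size (px), nm per pixel
      SIGMA = 130.0 / PIX                   # PSF sigma in pixels (~130 nm)
      N_EMITTERS, P_ON, PHOTONS, BG = 120, 0.02, 800, 10

      ang = rng.uniform(0, 2 * np.pi, N_EMITTERS)   # label a 1.5 um ring
      ex = FOV / 2 + 15.0 * np.cos(ang)
      ey = FOV / 2 + 15.0 * np.sin(ang)
      yy, xx = np.mgrid[0:FOV, 0:FOV]

      def render_frame():
          frame = np.full((FOV, FOV), float(BG))
          on = rng.random(N_EMITTERS) < P_ON        # stochastic blinking
          for x0, y0 in zip(ex[on], ey[on]):
              psf = np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * SIGMA**2))
              frame += PHOTONS * psf / psf.sum()    # normalize to photons
          return rng.poisson(frame)                 # shot noise

      stack = np.stack([render_frame() for _ in range(50)])
      print("stack:", stack.shape, "mean counts: %.1f" % stack.mean())

    Feeding such a stack to a localization algorithm and comparing the reconstruction against the known ground-truth ring is exactly the kind of closed-loop test the simulator enables.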

  19. STSE: Spatio-Temporal Simulation Environment Dedicated to Biology.

    PubMed

    Stoma, Szymon; Fröhlich, Martina; Gerber, Susanne; Klipp, Edda

    2011-04-28

    Recently, the availability of high-resolution microscopy together with advancements in the development of biomarkers as reporters of biomolecular interactions has increased the importance of imaging methods in molecular cell biology. These techniques enable the investigation of cellular characteristics like volume, size and geometry, as well as the volume and geometry of intracellular compartments and the amount of existing proteins, in a spatially resolved manner. Such detailed investigations have opened up many new areas of research in the study of spatial, complex and dynamic cellular systems. One of the crucial challenges for the study of such systems is the design of a well structured and optimized workflow to provide systematic and efficient hypothesis verification. Computer science can efficiently address this task by providing software that facilitates handling, analysis, and evaluation of biological data to the benefit of experimenters and modelers. The Spatio-Temporal Simulation Environment (STSE) is a set of open-source tools provided to conduct spatio-temporal simulations in discrete structures based on microscopy images. The framework contains modules to digitize, represent, analyze, and mathematically model spatial distributions of biochemical species. Graphical user interface (GUI) tools provided with the software enable meshing of the simulation space based on the Voronoi concept. In addition, it supports automatic acquisition of spatial information for the mesh from the images based on pixel luminosity (e.g. corresponding to molecular levels from microscopy images). STSE is freely available either as a stand-alone version or included in the Linux live distribution Systems Biology Operational Software (SB.OS) and can be downloaded from http://www.stse-software.org/. The Python source code as well as a comprehensive user manual and video tutorials are also offered to the research community. We discuss the main concepts of the STSE design and workflow. We demonstrate its usefulness using the example of a signaling cascade leading to the formation of a morphological gradient of Fus3 within the cytoplasm of the mating yeast cell Saccharomyces cerevisiae. STSE is an efficient and powerful novel platform, designed for computational handling and evaluation of microscopic images. It allows for an uninterrupted workflow including digitization, representation, analysis, and mathematical modeling. By providing the means to relate the simulation to the image data, it allows for systematic, image-driven model validation or rejection. STSE can be scripted and extended using the Python language. STSE should be considered an API together with workflow guidelines and a collection of GUI tools, rather than a stand-alone application. The priority of the project is to provide an easy and intuitive way of extending and customizing software using the Python language.
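
    The image-to-mesh idea can be approximated compactly: partition the pixel grid into Voronoi cells around seed points (here via a nearest-seed query) and assign each cell the mean luminosity of the pixels it covers. The sketch below is only a loose analogue of the STSE workflow, using SciPy and a synthetic image.

      # Voronoi-style meshing of an image with per-cell mean luminosity.
      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(5)
      img = rng.random((100, 100))           # stand-in for a microscopy image
      seeds = rng.uniform(0, 100, size=(40, 2))   # mesh generator points

      yy, xx = np.mgrid[0:100, 0:100]
      pix = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
      _, cell = cKDTree(seeds).query(pix)    # nearest seed = Voronoi cell id

      counts = np.bincount(cell, minlength=len(seeds))
      sums = np.bincount(cell, weights=img.ravel(), minlength=len(seeds))
      levels = sums / np.maximum(counts, 1)  # mean luminosity per cell
      print("cell 0 mean luminosity: %.3f" % levels[0])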

  20. On the simulation and mitigation of anisoplanatic optical turbulence for long range imaging

    NASA Astrophysics Data System (ADS)

    Hardie, Russell C.; LeMaster, Daniel A.

    2017-05-01

    We describe a numerical wave propagation method for simulating long range imaging of an extended scene under anisoplanatic conditions. Our approach computes an array of point spread functions (PSFs) for a 2D grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. To validate the simulation we compare simulated outputs with the theoretical anisoplanatic tilt correlation and differential tilt variance, in addition to comparing the long- and short-exposure PSFs and the isoplanatic angle. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. The simulation tool is also used here to quantitatively evaluate a recently proposed block-matching and Wiener filtering (BMWF) method for turbulence mitigation. In this method, a block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged and processed with a Wiener filter for restoration. A novel aspect of the proposed BMWF method is that the PSF model used for restoration takes into account the level of geometric correction achieved during image registration. This way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. The BMWF method is relatively simple computationally and yet has excellent performance in comparison to state-of-the-art benchmark methods.
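
    The degradation model can be mimicked at small scale: blur the ideal image with each PSF of a coarse grid, then blend the blurred copies with spatially varying weights. In the sketch below a 2x2 grid of Gaussian blurs stands in for the wave-optics PSF array; a real simulation would use a much denser grid of turbulence PSFs.

      # Spatially varying blur via PSF-grid blending (toy Gaussian PSFs).
      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(3)
      ideal = rng.random((128, 128))        # stand-in for the ideal image

      sigmas = np.array([[1.0, 2.5],        # one blur width per grid corner
                         [1.8, 3.5]])
      blurred = {(i, j): gaussian_filter(ideal, sigmas[i, j])
                 for i in range(2) for j in range(2)}

      wy = np.linspace(0, 1, 128)[:, None]  # vertical blend weights
      wx = np.linspace(0, 1, 128)[None, :]  # horizontal blend weights
      out = ((1 - wy) * (1 - wx) * blurred[0, 0]
             + (1 - wy) * wx * blurred[0, 1]
             + wy * (1 - wx) * blurred[1, 0]
             + wy * wx * blurred[1, 1])
      print("degraded image range: %.3f .. %.3f" % (out.min(), out.max()))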

  1. Radio-frequency energy quantification in magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Alon, Leeor

    Mapping of radio frequency (RF) energy deposition has been challenging for 50+ years, especially when scanning patients in the magnetic resonance imaging (MRI) environment. As a result, electromagnetic simulation software is often used for estimating the specific absorption rate (SAR), the rate of RF energy deposition in tissue. This thesis presents the challenges associated with aligning information provided by electromagnetic simulations and MRI experiments. Given the limitations of simulations, experimental methods for the quantification of SAR were established. A system for quantification of the total RF energy deposition was developed for parallel transmit MRI (a system that uses multiple antennas to excite and image the body). The system is capable of monitoring and predicting channel-by-channel RF energy deposition and whole-body SAR, and of tracking potential hardware failures that occur in the transmit chain and may cause the deposition of excessive energy into patients. Similarly, we demonstrated that local RF power deposition can be mapped and predicted for parallel transmit systems based on a series of MRI temperature mapping acquisitions. As part of this work, we developed tools for optimal reconstruction of temperature maps from MRI acquisitions. The tools developed for temperature mapping paved the way for utilizing MRI as a diagnostic tool for evaluating the safety of RF/microwave-emitting devices. Quantification of the RF energy was demonstrated for both MRI-compatible and non-MRI-compatible devices (such as cell phones), while having the advantage of being noninvasive and of providing millimeter resolution and high accuracy.
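
    Two back-of-the-envelope relations underlie this kind of work: local SAR from the electric field, SAR = σ|E|²/(2ρ) for a peak field amplitude, and SAR from the initial slope of the temperature-time curve, SAR ≈ c·dT/dt, which is what MR temperature mapping exploits. The sketch below evaluates both with assumed muscle-like tissue constants.

      # SAR from fields and from heating curves (assumed tissue values).
      SIGMA = 0.55      # electrical conductivity, S/m
      RHO = 1050.0      # density, kg/m^3
      C = 3600.0        # specific heat capacity, J/(kg*K)

      def sar_from_field(e_peak):
          """Local SAR (W/kg) from peak E-field amplitude (V/m)."""
          return SIGMA * e_peak**2 / (2.0 * RHO)

      def sar_from_heating(dT, dt):
          """SAR (W/kg) from an initial temperature rise dT (K) over dt (s),
          neglecting perfusion and thermal diffusion."""
          return C * dT / dt

      print("SAR at |E| = 60 V/m: %.2f W/kg" % sar_from_field(60.0))
      print("SAR for 0.1 K rise in 60 s: %.1f W/kg" % sar_from_heating(0.1, 60.0))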

  2. Dosimetry applications in GATE Monte Carlo toolkit.

    PubMed

    Papadimitroulas, Panagiotis

    2017-09-01

    Monte Carlo (MC) simulations are a well-established method for studying physical processes in medical physics. The purpose of this review is to present GATE dosimetry applications for simulated diagnostic and therapeutic protocols. There is a significant need for accurate quantification of the absorbed dose in several specific applications, such as preclinical and pediatric studies. GATE is an open-source MC toolkit for simulating imaging, radiotherapy (RT) and dosimetry applications in a user-friendly environment, which is well validated and widely accepted by the scientific community. In RT applications, during treatment planning, it is essential to accurately assess the deposited energy and the absorbed dose per tissue/organ of interest, as well as the local statistical uncertainty. Several types of realistic dosimetric applications are described, including molecular imaging, radio-immunotherapy, radiotherapy and brachytherapy. GATE has been used efficiently in several applications, such as dose point kernels, S-values and brachytherapy parameters, and has been compared against various MC codes that have been considered standard tools for decades. Furthermore, the presented studies show reliable modeling of particle beams when comparing experimental with simulated data. Examples of different dosimetric protocols are reported for individualized dosimetry and for simulations combining imaging and therapy dose monitoring, with the use of modern computational phantoms. Personalization of medical protocols can be achieved by combining GATE MC simulations with anthropomorphic computational models and clinical anatomical data. This is a review study covering several dosimetric applications of GATE and the different tools used for modeling realistic clinical acquisitions with accurate dose assessment. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  3. IFU simulator: a powerful alignment and performance tool for MUSE instrument

    NASA Astrophysics Data System (ADS)

    Laurent, Florence; Boudon, Didier; Daguisé, Eric; Dubois, Jean-Pierre; Jarno, Aurélien; Kosmalski, Johan; Piqueras, Laure; Remillieux, Alban; Renault, Edgard

    2014-07-01

    MUSE (Multi Unit Spectroscopic Explorer) is a second-generation Very Large Telescope (VLT) integral field spectrograph (1x1 arcmin² field of view) developed for the European Southern Observatory (ESO), operating in the visible wavelength range (0.465-0.93 μm). A consortium of seven institutes is currently commissioning MUSE at the Very Large Telescope for the Preliminary Acceptance in Chile, scheduled for September 2014. MUSE is composed of several subsystems, each under the responsibility of one institute. The Fore Optics derotates and anamorphoses the image at the focal plane. A Splitting and Relay Optics feeds the 24 identical Integral Field Units (IFU), which are mounted within a large monolithic instrument mechanical structure. Each IFU incorporates an image slicer, a fully refractive spectrograph with VPH grating, and a detector system connected to a global vacuum and cryogenic system. During 2012 and 2013, all MUSE subsystems were integrated, aligned and tested at the P.I. institute in Lyon. After successful PAE in September 2013, the MUSE instrument was shipped to the Very Large Telescope in Chile, where it was aligned and tested in the ESO integration hall at Paranal. MUSE was then transferred monolithically, without dismounting, onto the VLT, where first light was achieved. This talk describes the IFU Simulator, which is the main alignment and performance tool for the MUSE instrument. The IFU Simulator mimics the optomechanical interface between the MUSE pre-optics and the 24 IFUs. The optomechanical design is presented, followed by the alignment method of this innovative tool for identifying the pupil and image planes, and the internal test results. The success of the MUSE alignment using the IFU Simulator is demonstrated by the excellent results obtained for MUSE positioning, image quality and throughput. MUSE commissioning at the VLT is planned for September 2014.

  4. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance.

    PubMed

    Dong, Han; Sharma, Diksha; Badano, Aldo

    2014-12-01

    Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridmantis, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load between dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webmantis and visualmantis, to facilitate the setup of computational experiments via hybridmantis. The visualization tools visualmantis and webmantis enable the user to control simulation properties through a user interface. In the case of webmantis, control via a web browser allows access through mobile devices such as smartphones or tablets. webmantis acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. The output consists of point response functions, pulse-height spectra, and optical transport statistics generated by hybridmantis. Users can download the output images and statistics as a zip file for future reference. In addition, webmantis provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns, allowing the user to trace the history of the optical photons. The visualization tools visualmantis and webmantis provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers, while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.

  5. Recent advances in 3D computed tomography techniques for simulation and navigation in hepatobiliary pancreatic surgery.

    PubMed

    Uchida, Masafumi

    2014-04-01

    A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations or multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US), the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation imaging. If the 2D source image is poor, no amount of 3D image manipulation in software will produce a quality 3D image. In this exhibit, recent advances in CT imaging technique and 3D visualization of hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction techniques, contrast-enhanced techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging. © 2014 Japanese Society of Hepato-Biliary-Pancreatic Surgery.

  6. Three-Dimensional Imaging in Rhinoplasty: A Comparison of the Simulated versus Actual Result.

    PubMed

    Persing, Sarah; Timberlake, Andrew; Madari, Sarika; Steinbacher, Derek

    2018-05-22

    Computer imaging has become increasingly popular for rhinoplasty. Three-dimensional (3D) analysis permits a more comprehensive view from multiple vantage points. However, the predictability and concordance between the simulated and actual result have not been morphometrically studied. The purpose of this study was to aesthetically and quantitatively compare the simulated to actual rhinoplasty result. A retrospective review of 3D images (VECTRA, Canfield) for rhinoplasty patients was performed. Images (preop, simulated, and actual) were randomized. A blinded panel of physicians rated the images (1 = poor, 5 = excellent). The image series considered "best" was also recorded. A quantitative assessment of nasolabial angle and tip projection was compared. Paired and two-sample t tests were performed for statistical analysis (P < 0.05 as significant). Forty patients were included. 67.5% of preoperative images were rated as poor (mean = 1.7). The simulation received a mean score of 2.9 (good in 60% of cases). 82.5% of actual cases were rated good to excellent (mean 3.4) (P < 0.001). Overall, the panel significantly preferred the actual postoperative result in 77.5% of cases compared to the simulation in 22.5% of cases (P < 0.001). The actual nasal tip was more projected compared to the simulations for both males and females. There was no significant difference in nasal tip rotation between simulated and postoperative groups. 3D simulation is a powerful communication and planning tool in rhinoplasty. In this study, the actual result was deemed more aesthetic than the simulated image. Surgeon experience is important to translate the plan and achieve favorable postoperative results. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  7. Modeling laser speckle imaging of perfusion in the skin (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Regan, Caitlin; Hayakawa, Carole K.; Choi, Bernard

    2016-02-01

    Laser speckle imaging (LSI) enables visualization of relative blood flow and perfusion in the skin. It is frequently applied to monitor treatment of vascular malformations such as port wine stain birthmarks, and to measure changes in perfusion due to peripheral vascular disease. We developed a computational Monte Carlo simulation of laser speckle contrast imaging to quantify how tissue optical properties, blood vessel depths and speeds, and tissue perfusion affect speckle contrast values originating from coherent excitation. The simulated tissue geometry consisted of multiple layers to simulate the skin, or incorporated an inclusion such as a vessel or tumor at different depths. Our simulation used a 30x30 mm uniform flat light source to optically excite the region of interest in our sample, to better mimic wide-field imaging. We used our model to simulate how dynamically scattered photons from a buried blood vessel affect speckle contrast at different lateral distances (0-1 mm) away from the vessel, and how these speckle contrast changes vary with depth (0-1 mm) and flow speed (0-10 mm/s). We applied the model to simulate perfusion in the skin, and observed how different optical properties, such as epidermal melanin concentration (1%-50%), affected speckle contrast. We simulated perfusion during a systolic forearm occlusion and found that contrast decreased by 35% (exposure time = 10 ms). Monte Carlo simulations of laser speckle contrast give us a tool to quantify what regions of the skin are probed with laser speckle imaging, and to measure how the tissue optical properties and blood flow affect the resulting images.
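
    The quantity such a model predicts is local speckle contrast, K = σ/μ over a small sliding window, which approaches 1 for a static, fully developed speckle pattern and drops toward 0 as flow blurs the speckle within the exposure. The sketch below computes a contrast map for a synthetic raw frame; the window size and statistics are illustrative.

      # Local speckle contrast K = sigma/mean over a sliding window.
      import numpy as np
      from scipy.ndimage import uniform_filter

      rng = np.random.default_rng(7)
      raw = rng.exponential(scale=100.0, size=(256, 256))  # static speckle

      W = 7                                  # 7x7 analysis window
      mean = uniform_filter(raw, W)
      sq_mean = uniform_filter(raw**2, W)
      var = np.clip(sq_mean - mean**2, 0.0, None)
      K = np.sqrt(var) / mean                # ~1 static, -> 0 with flow
      print("median speckle contrast: %.2f" % np.median(K))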

  8. The Java Image Science Toolkit (JIST) for rapid prototyping and publishing of neuroimaging software.

    PubMed

    Lucas, Blake C; Bogovic, John A; Carass, Aaron; Bazin, Pierre-Louis; Prince, Jerry L; Pham, Dzung L; Landman, Bennett A

    2010-03-01

    Non-invasive neuroimaging techniques enable extraordinarily sensitive and specific in vivo study of the structure, functional response and connectivity of biological mechanisms. With these advanced methods comes a heavy reliance on computer-based processing, analysis and interpretation. While the neuroimaging community has produced many excellent academic and commercial tool packages, new tools are often required to interpret new modalities and paradigms. Developing custom tools and ensuring interoperability with existing tools is a significant hurdle. To address these limitations, we present a new framework for algorithm development that implicitly ensures tool interoperability, generates graphical user interfaces, provides advanced batch processing tools, and, most importantly, requires minimal additional programming or computational overhead. Java-based rapid prototyping with this system is an efficient and practical approach to evaluate new algorithms since the proposed system ensures that rapidly constructed prototypes are actually fully-functional processing modules with support for multiple GUIs, a broad range of file formats, and distributed computation. Herein, we demonstrate MRI image processing with the proposed system for cortical surface extraction in large cross-sectional cohorts, provide a system for fully automated diffusion tensor image analysis, and illustrate how the system can be used as a simulation framework for the development of a new image analysis method. The system is released as open source under the Lesser GNU Public License (LGPL) through the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC).

  9. The Java Image Science Toolkit (JIST) for Rapid Prototyping and Publishing of Neuroimaging Software

    PubMed Central

    Lucas, Blake C.; Bogovic, John A.; Carass, Aaron; Bazin, Pierre-Louis; Prince, Jerry L.; Pham, Dzung

    2010-01-01

    Non-invasive neuroimaging techniques enable extraordinarily sensitive and specific in vivo study of the structure, functional response and connectivity of biological mechanisms. With these advanced methods comes a heavy reliance on computer-based processing, analysis and interpretation. While the neuroimaging community has produced many excellent academic and commercial tool packages, new tools are often required to interpret new modalities and paradigms. Developing custom tools and ensuring interoperability with existing tools is a significant hurdle. To address these limitations, we present a new framework for algorithm development that implicitly ensures tool interoperability, generates graphical user interfaces, provides advanced batch processing tools, and, most importantly, requires minimal additional programming or computational overhead. Java-based rapid prototyping with this system is an efficient and practical approach to evaluate new algorithms since the proposed system ensures that rapidly constructed prototypes are actually fully-functional processing modules with support for multiple GUIs, a broad range of file formats, and distributed computation. Herein, we demonstrate MRI image processing with the proposed system for cortical surface extraction in large cross-sectional cohorts, provide a system for fully automated diffusion tensor image analysis, and illustrate how the system can be used as a simulation framework for the development of a new image analysis method. The system is released as open source under the Lesser GNU Public License (LGPL) through the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC). PMID:20077162

  10. Monte Carlo simulations in Nuclear Medicine

    NASA Astrophysics Data System (ADS)

    Loudos, George K.

    2007-11-01

    Molecular imaging technologies provide unique abilities to localise signs of disease before symptoms appear, assist in drug testing, optimize and personalize therapy, and assess the efficacy of treatment regimes for different types of cancer. Monte Carlo simulation packages are used as an important tool for the optimal design of detector systems. In addition, they have demonstrated potential to improve image quality and acquisition protocols. Many general-purpose codes (MCNP, Geant4, etc.) and dedicated codes (SimSET, etc.) have been developed, aiming to provide accurate and fast results. Special emphasis will be given to the GATE toolkit. The GATE code, currently under development by the OpenGATE collaboration, is the most accurate and promising code for performing realistic simulations. The purpose of this article is to introduce the non-expert reader to the current status of MC simulations in nuclear medicine, briefly provide examples of currently simulated systems, and present future challenges, including the simulation of clinical studies and dosimetry applications.

  11. NeuroManager: a workflow analysis based simulation management engine for computational neuroscience

    PubMed Central

    Stockton, David B.; Santamaria, Fidel

    2015-01-01

    We developed NeuroManager, an object-oriented simulation management software engine for computational neuroscience. NeuroManager automates the workflow of simulation job submissions when using heterogeneous computational resources, simulators, and simulation tasks. The object-oriented approach (1) provides flexibility to adapt to a variety of neuroscience simulators, (2) simplifies the use of heterogeneous computational resources, from desktops to supercomputer clusters, and (3) improves tracking of simulator/simulation evolution. We implemented NeuroManager in MATLAB, a widely used engineering and scientific language, for its signal and image processing tools, prevalence in electrophysiology analysis, and increasing use in college biology education. To design and develop NeuroManager we analyzed the workflow of simulation submission for a variety of simulators, operating systems, and computational resources, including the handling of input parameters, data, models, results, and analyses. This resulted in 22 stages of simulation submission workflow. The software incorporates progress notification, automatic organization, labeling, and time-stamping of data and results, and integrated access to MATLAB's analysis and visualization tools. NeuroManager provides users with the tools to automate daily tasks, and assists principal investigators in tracking and recreating the evolution of research projects performed by multiple people. Overall, NeuroManager provides the infrastructure needed to improve workflow, manage multiple simultaneous simulations, and maintain provenance of the potentially large amounts of data produced during the course of a research project. PMID:26528175

  12. NeuroManager: a workflow analysis based simulation management engine for computational neuroscience.

    PubMed

    Stockton, David B; Santamaria, Fidel

    2015-01-01

    We developed NeuroManager, an object-oriented simulation management software engine for computational neuroscience. NeuroManager automates the workflow of simulation job submissions when using heterogeneous computational resources, simulators, and simulation tasks. The object-oriented approach (1) provides flexibility to adapt to a variety of neuroscience simulators, (2) simplifies the use of heterogeneous computational resources, from desktops to supercomputer clusters, and (3) improves tracking of simulator/simulation evolution. We implemented NeuroManager in MATLAB, a widely used engineering and scientific language, for its signal and image processing tools, prevalence in electrophysiology analysis, and increasing use in college biology education. To design and develop NeuroManager we analyzed the workflow of simulation submission for a variety of simulators, operating systems, and computational resources, including the handling of input parameters, data, models, results, and analyses. This resulted in 22 stages of simulation submission workflow. The software incorporates progress notification, automatic organization, labeling, and time-stamping of data and results, and integrated access to MATLAB's analysis and visualization tools. NeuroManager provides users with the tools to automate daily tasks, and assists principal investigators in tracking and recreating the evolution of research projects performed by multiple people. Overall, NeuroManager provides the infrastructure needed to improve workflow, manage multiple simultaneous simulations, and maintain provenance of the potentially large amounts of data produced during the course of a research project.

  13. Radio Frequency Ablation Registration, Segmentation, and Fusion Tool

    PubMed Central

    McCreedy, Evan S.; Cheng, Ruida; Hemler, Paul F.; Viswanathan, Anand; Wood, Bradford J.; McAuliffe, Matthew J.

    2008-01-01

    The Radio Frequency Ablation Segmentation Tool (RFAST) is a software application developed using NIH's Medical Image Processing Analysis and Visualization (MIPAV) API for the specific purpose of assisting physicians in the planning of radio frequency ablation (RFA) procedures. The RFAST application sequentially leads the physician through the steps necessary to register, fuse, segment, visualize and plan the RFA treatment. Three-dimensional volume visualization of the CT dataset with segmented 3D surface models enables the physician to interactively position the ablation probe to simulate burns and to semi-manually simulate sphere packing in an attempt to optimize probe placement. PMID:16871716

  14. Dressing the Coronal Magnetic Extrapolations of Active Regions with a Parameterized Thermal Structure

    NASA Astrophysics Data System (ADS)

    Nita, Gelu M.; Viall, Nicholeen M.; Klimchuk, James A.; Loukitcheva, Maria A.; Gary, Dale E.; Kuznetsov, Alexey A.; Fleishman, Gregory D.

    2018-01-01

    The study of time-dependent solar active region (AR) morphology and its relation to eruptive events requires analysis of imaging data obtained in multiple wavelength domains with differing spatial and time resolution, ideally in combination with 3D physical models. To facilitate this goal, we have undertaken a major enhancement of our IDL-based simulation tool, GX_Simulator, previously developed for modeling microwave and X-ray emission from flaring loops, to allow it to simulate quiescent emission from solar ARs. The framework includes new tools for building the atmospheric model and enhanced routines for calculating emission that include new wavelengths. In this paper, we use our upgraded tool to model and analyze an AR and compare the synthetic emission maps with observations. We conclude that the modeled magneto-thermal structure is a reasonably good approximation of the real one.

  15. Robust multi-site MR data processing: iterative optimization of bias correction, tissue classification, and registration.

    PubMed

    Young Kim, Eun; Johnson, Hans J

    2013-01-01

    A robust multi-modal tool for automated registration, bias correction, and tissue classification has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework between bias correction, registration, and tissue classification, inspired by previous work. The primary contributions are robustness improvements from the incorporation of the following four elements: (1) use of multi-modal and repeated scans, (2) incorporation of highly deformable registration, (3) use of an extended set of tissue definitions, and (4) use of multi-modal-aware intensity-context priors. The benefits of these enhancements were investigated through a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessed through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale data processing with great data variation, and offers a flexible interface. In this paper, we describe enhancements to joint registration, bias correction, and tissue classification that improve the generalizability and robustness for processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human-subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.

  16. Multifunctional microbubbles and nanobubbles for photoacoustic and ultrasound imaging

    PubMed Central

    Kim, Chulhong; Qin, Ruogu; Xu, Jeff S.; Wang, Lihong V.; Xu, Ronald

    2010-01-01

    We develop a novel dual-modal contrast agent—encapsulated-ink poly(lactic-co-glycolic acid) (PLGA) microbubbles and nanobubbles—for photoacoustic and ultrasound imaging. Soft gelatin phantoms with embedded tumor simulators of encapsulated-ink PLGA microbubbles and nanobubbles in various concentrations are clearly shown in both photoacoustic and ultrasound images. In addition, using photoacoustic imaging, we successfully image the samples positioned below 1.8-cm-thick chicken breast tissues. Potentially, simultaneous photoacoustic and ultrasound imaging enhanced by encapsulated-dye PLGA microbubbles or nanobubbles can be a valuable tool for intraoperative assessment of tumor boundaries and therapeutic margins. PMID:20210423

  17. A novel medical image data-based multi-physics simulation platform for computational life sciences.

    PubMed

    Neufeld, Esra; Szczerba, Dominik; Chavannes, Nicolas; Kuster, Niels

    2013-04-06

    Simulating and modelling complex biological systems in computational life sciences requires specialized software tools that can perform medical image data-based modelling, jointly visualize the data and computational results, and handle large, complex, realistic and often noisy anatomical models. The required novel solvers must provide the power to model the physics, biology and physiology of living tissue within the full complexity of the human anatomy (e.g. neuronal activity, perfusion and ultrasound propagation). A multi-physics simulation platform satisfying these requirements has been developed for applications including device development and optimization, safety assessment, basic research, and treatment planning. This simulation platform consists of detailed, parametrized anatomical models, a segmentation and meshing tool, a wide range of solvers and optimizers, a framework for the rapid development of specialized and parallelized finite element method solvers, a visualization toolkit-based visualization engine, a Python scripting interface for customized applications, a coupling framework, and more. Core components are cross-platform compatible and use open formats. Several examples of applications are presented: hyperthermia cancer treatment planning, tumour growth modelling, evaluating the magneto-haemodynamic effect as a biomarker and physics-based morphing of anatomical models.

  18. Comparison of different phantoms used in digital diagnostic imaging

    NASA Astrophysics Data System (ADS)

    Bor, Dogan; Unal, Elif; Uslu, Anil

    2015-09-01

    The extremity, chest, skull and lumbar spine regions were physically simulated using uniform PMMA slabs of different thicknesses, both alone and together with aluminum plates and air gaps (ANSI phantoms). The variation of entrance surface air kerma and scatter fraction with x-ray beam quality was investigated for these phantoms, and the results were compared with those measured from anthropomorphic phantoms. A flat-panel digital radiographic system was used for all the experiments. Considerable variation in entrance surface air kerma was found for the same region across the different designs, with the highest doses measured for the PMMA slabs. A low-contrast test tool and a contrast-detail test object (CDRAD) were used together with each organ simulation of the PMMA slabs and ANSI phantoms in order to assess clinical image quality. Digital images of these phantom combinations and of the anthropomorphic phantoms were acquired in raw and clinically processed formats. The variation of image quality with kVp and post-processing was evaluated using the numerical metrics of these test tools and contrast values measured from the anthropomorphic phantoms. Our results indicated that the design of some phantoms may not be adequate to reveal the expected performance of post-processing algorithms.

  19. An integrated system for dissolution studies and magnetic resonance imaging of controlled release, polymer-based dosage forms-a tool for quantitative assessment of hydrogel formation processes.

    PubMed

    Kulinowski, Piotr; Dorozyński, Przemysław; Jachowicz, Renata; Weglarz, Władysław P

    2008-11-04

    Controlled release (CR) dosage forms are often based on polymeric matrices, e.g., sustained-release tablets and capsules. It is crucial to visualise and quantify hydrogel formation processes during standard dissolution studies. A method for imaging CR, polymer-based dosage forms during in vitro dissolution studies is presented. Imaging was performed non-invasively by means of magnetic resonance imaging (MRI). The study was designed to simulate in vivo conditions with regard to temperature, volume, state and composition of the dissolution media. Two formulations of hydrodynamically balanced systems (HBS) were chosen as model CR dosage forms. HBS release the active substance in the stomach while floating on the surface of the gastric content. Time evolutions of the diffusion region, hydrogel formation region and "dry core" region were obtained during a dissolution study of L-dopa as a model drug in two simulated gastric fluids (i.e., in fed and fasted states). This method appears to be a very promising tool for examining the properties of new CR, polymer-based dosage formulations, or for comparing generic and originator dosage forms before carrying out bioequivalence studies.

  20. Integration of SimSET photon history generator in GATE for efficient Monte Carlo simulations of pinhole SPECT.

    PubMed

    Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J S; Tsui, Benjamin M W

    2008-07-01

    The authors developed and validated an efficient Monte Carlo simulation (MCS) workflow to facilitate small animal pinhole SPECT imaging research. This workflow seamlessly integrates two existing MCS tools: simulation system for emission tomography (SimSET) and GEANT4 application for emission tomography (GATE). Specifically, we retained the strength of GATE in describing complex collimator/detector configurations to meet the anticipated needs for studying advanced pinhole collimation (e.g., multipinhole) geometry, while inserting the fast SimSET photon history generator (PHG) to circumvent the relatively slow GEANT4 MCS code used by GATE in simulating photon interactions inside voxelized phantoms. For validation, data generated from this new SimSET-GATE workflow were compared with those from GATE-only simulations as well as experimental measurements obtained using a commercial small animal pinhole SPECT system. Our results showed excellent agreement (e.g., in system point response functions and energy spectra) between SimSET-GATE and GATE-only simulations, and, more importantly, a significant computational speedup (up to approximately 10-fold) provided by the new workflow. Satisfactory agreement between MCS results and experimental data was also observed. In conclusion, the authors have successfully integrated the SimSET photon history generator in GATE for fast and realistic pinhole SPECT simulations, which can facilitate research in, for example, the development and application of quantitative pinhole and multipinhole SPECT for small animal imaging. This integrated simulation tool can also be adapted for studying other preclinical and clinical SPECT techniques.

  1. Classification and printability of EUV mask defects from SEM images

    NASA Astrophysics Data System (ADS)

    Cho, Wonil; Price, Daniel; Morgan, Paul A.; Rost, Daniel; Satake, Masaki; Tolani, Vikram L.

    2017-10-01

    EUV lithography is starting to show more promise for patterning some critical layers at the 5nm technology node and beyond. However, there are still many key technical obstacles to overcome before bringing EUV lithography into high volume manufacturing (HVM). One of the greatest obstacles is manufacturing defect-free masks. For pattern defect inspection in the mask shop, cutting-edge 193nm optical inspection tools have been used so far, owing to the lack of e-beam mask inspection (EBMI) and EUV actinic pattern inspection (API) tools. The main issue with current 193nm inspection tools is their limited resolution at the mask dimensions targeted for EUV patterning. The theoretical resolution limit for 193nm mask inspection tools is about 60nm half-pitch on masks, which means that main feature sizes on EUV masks will be well beyond the practical resolution of 193nm inspection tools. Nevertheless, 193nm inspection tools with various illumination conditions that maximize defect sensitivity and/or main-pattern modulation are being explored for initial EUV defect detection. Because of the generally low signal-to-noise ratio of 193nm inspection imaging at EUV patterning dimensions, these inspections often result in hundreds or thousands of detections, which then need to be accurately reviewed and dispositioned. Manually reviewing each defect is difficult due to poor resolution, and the lack of a reliable aerial dispositioning system makes it very challenging to disposition for printability. In this paper, we present the use of SEM images of EUV masks for higher-resolution review and disposition of defects. In this approach, most of the defects detected by the 193nm inspection tools are first imaged on a mask SEM tool. These images, together with the corresponding post-OPC design clips, are provided to KLA-Tencor's Reticle Decision Center (RDC) platform, which provides ADC (Automated Defect Classification) and S2A (SEM-to-Aerial printability) analysis of every defect. First, a defect-free reference mask SEM is rendered from the post-OPC design, and the defect signature is detected from the defect-reference difference image. These signatures help assess the true nature of the defect as evident in e-beam imaging; for example, excess or missing absorber, line-edge roughness, or contamination. Next, defect and reference contours are extracted from the grayscale SEM images and fed into the simulation engine with an EUV scanner model to generate the corresponding EUV defect and reference aerial images. These are then analyzed for printability and dispositioned using an Aerial Image Analyzer (AIA) application, which automatically measures and determines the amount of CD error. Thus, by integrating the EUV ADC and S2A applications, every defect detection is characterized for type and printability, which is essential not only for determining which defects to repair, but also for monitoring the performance of EUV mask process tools. The accuracy of the S2A print modeling has been verified against other commercially available simulators, and will also be verified against actual wafer print results. With EUV lithography progressing towards volume manufacturing at the 5nm technology node, and with EBMI inspectors likely on the horizon, the EUV ADC-S2A system will continue to serve an essential role in dispositioning defects from e-beam imaging.
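
    The defect-signature step can be caricatured in a few lines: align the defect SEM image to a rendered reference and threshold the difference image. The sketch below uses synthetic images and a brute-force integer-shift alignment; the actual RDC platform is, needless to say, far more elaborate.

      # Defect detection from a defect-reference SEM difference image.
      import numpy as np

      rng = np.random.default_rng(11)
      ref = np.zeros((64, 64))
      ref[:, 20:28] = 1.0                       # absorber line
      ref += rng.normal(0, 0.05, ref.shape)     # SEM noise
      defect = ref.copy()
      defect[30:34, 28:31] += 1.0               # excess-absorber defect

      best = None                               # +/-2 px shift search
      for dy in range(-2, 3):
          for dx in range(-2, 3):
              shifted = np.roll(np.roll(defect, dy, axis=0), dx, axis=1)
              score = np.sum((shifted - ref)**2)
              if best is None or score < best[0]:
                  best = (score, shifted)

      diff = np.abs(best[1] - ref)
      print("defect pixels flagged:", int((diff > 0.5).sum()))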

  2. The capability of lithography simulation based on MVM-SEM® system

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Shingo; Fujii, Nobuaki; Kanno, Koichi; Imai, Hidemichi; Hayano, Katsuya; Miyashita, Hiroyuki; Shida, Soichi; Murakawa, Tsutomu; Kuribara, Masayuki; Matsumoto, Jun; Nakamura, Takayuki; Matsushita, Shohei; Hara, Daisuke; Pang, Linyong

    2015-10-01

    Lithography at the 1X-nm technology node uses SMO-ILT, NTD, or even more complex patterns. In mask defect inspection, defect verification therefore becomes more difficult, because many nuisance defects are detected in aggressive mask features. One key technology in mask manufacturing is defect verification using an aerial image simulator or other printability simulation. AIMS™ technology correlates excellently with the wafer and is the standard tool for defect verification; however, it is impractical for verifying a hundred defects or more. We previously reported a defect verification capability based on lithography simulation with an SEM system, whose architecture and software correlate excellently for simple line-and-space patterns [1]. In this paper, we use a next-generation SEM system combined with a lithography simulation tool for SMO-ILT, NTD and other complex-pattern lithography. Furthermore, we will use three-dimensional (3D) lithography simulation based on the Multi Vision Metrology SEM system. Finally, we confirm the performance of the 2D and 3D lithography simulation based on the SEM system for photomask verification.

  3. Comprehensive Modeling and Visualization of Cardiac Anatomy and Physiology from CT Imaging and Computer Simulations

    PubMed Central

    Sun, Peng; Zhou, Haoyin; Ha, Seongmin; Hartaigh, Bríain ó; Truong, Quynh A.; Min, James K.

    2016-01-01

    In clinical cardiology, both anatomy and physiology are needed to diagnose cardiac pathologies. CT imaging and computer simulations provide valuable and complementary data for this purpose. However, it remains challenging to gain useful information from the large amount of high-dimensional, diverse data. Current tools are not adequately integrated to visualize anatomic and physiologic data from a complete yet focused perspective. We introduce a new computer-aided diagnosis framework, which allows for comprehensive modeling and visualization of cardiac anatomy and physiology from CT imaging data and computer simulations, with a primary focus on ischemic heart disease. The following visual information is presented: (1) Anatomy from CT imaging: geometric modeling and visualization of cardiac anatomy, including the four heart chambers, left and right ventricular outflow tracts, and coronary arteries; (2) Function from CT imaging: motion modeling, strain calculation, and visualization of the four heart chambers; (3) Physiology from CT imaging: quantification and visualization of myocardial perfusion and contextual integration with coronary artery anatomy; (4) Physiology from computer simulation: computation and visualization of hemodynamics (e.g., coronary blood velocity, pressure, shear stress, and fluid forces on the vessel wall). Feedback from cardiologists has confirmed the practical utility of integrating these features for the purpose of computer-aided diagnosis of ischemic heart disease. PMID:26863663

  4. RAY-UI: A powerful and extensible user interface for RAY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumgärtel, P., E-mail: peter.baumgaertel@helmholtz-berlin.de; Erko, A.; Schäfers, F.

    2016-07-27

    The RAY-UI project started as a proof-of-concept for an interactive and graphical user interface (UI) for the well-known ray tracing software RAY [1]. In the meantime, it has evolved into a powerful enhanced version of RAY that will serve as the platform for future development and improvement of associated tools. The software as of today supports nearly all sophisticated simulation features of RAY. Furthermore, it delivers very significant usability and work efficiency improvements. Beamline elements can be quickly added or removed in the interactive sequence view. Parameters of any selected element can be accessed directly and in arbitrary order. With a single click, parameter changes can be tested and new simulation results can be obtained. All analysis results can be explored interactively right after ray tracing by means of powerful integrated image viewing and graphing tools. Unlimited image planes can be positioned anywhere in the beamline, and bundles of image planes can be created for moving the plane along the beam to identify the focus position with live updates of the simulated results. In addition to showing the features and workflow of RAY-UI, we will give an overview of the underlying software architecture as well as examples for use and an outlook for future developments.
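
    The "bundle of image planes" focus search lends itself to a compact illustration. The following sketch (not RAY-UI's actual API) intersects a fan of traced rays with a series of candidate planes along the beam axis and picks the plane with minimal RMS spot radius; the geometry and ray counts are arbitrary.

    ```python
    import numpy as np

    def rms_spot(P, D, z):
        """Intersect rays (P + t*D) with the plane at z; return RMS spot radius."""
        t = (z - P[:, 2]) / D[:, 2]           # parameter where each ray hits the plane
        xy = P[:, :2] + t[:, None] * D[:, :2]
        c = xy.mean(axis=0)                   # centroid of the spot
        return np.sqrt(((xy - c) ** 2).sum(axis=1).mean())

    def find_focus(P, D, z_min, z_max, n=200):
        zs = np.linspace(z_min, z_max, n)     # the "bundle" of candidate planes
        spots = [rms_spot(P, D, z) for z in zs]
        return zs[int(np.argmin(spots))]

    # Example: a converging fan of rays focused near z = 100
    rng = np.random.default_rng(0)
    P = np.column_stack([rng.normal(0, 1, 500), rng.normal(0, 1, 500), np.zeros(500)])
    D = np.column_stack([-P[:, 0] / 100, -P[:, 1] / 100, np.ones(500)])
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    print(find_focus(P, D, 50, 150))          # ~100
    ```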

  5. EnGeoMAP - geological applications within the EnMAP hyperspectral satellite science program

    NASA Astrophysics Data System (ADS)

    Boesche, N. K.; Mielke, C.; Rogass, C.; Guanter, L.

    2016-12-01

    Hyperspectral investigations from the near field to space contribute substantially to geological exploration and to the mining monitoring of raw material and mineral deposits. Due to their spectral characteristics, large mineral occurrences and minefields can be identified from space and the spatial distribution of distinct proxy minerals mapped. In the frame of the EnMAP hyperspectral satellite science program a mineral and elemental mapping tool was developed - the EnGeoMAP. It contains a basic mineral mapping approach and a rare earth element mapping approach. This study shows the performance of EnGeoMAP based on simulated EnMAP data of the rare earth element bearing Mountain Pass Carbonatite Complex, USA, and of the Rodalquilar and Lomilla Calderas, Spain, which host economically relevant gold-silver, lead-zinc-silver-gold and alunite deposits. The Mountain Pass image data were simulated on the basis of AVIRIS Next Generation images, while the Rodalquilar data are based on HyMap images. The EnGeoMAP - Base approach was applied to both images, while the Mountain Pass image data were additionally analysed using the EnGeoMAP - REE software tool. The results are mineral and elemental maps that serve as proxies for the regional lithology and deposit types. The validation of the maps is based on chemical analyses of field samples. Current airborne sensors meet the spatial and spectral requirements for detailed mineral mapping, and future hyperspectral spaceborne missions will additionally provide large coverage. For those hyperspectral missions, EnGeoMAP is a rapid data analysis tool that is provided to spectral geologists working in mineral exploration.

  6. Analysis of two dimensional signals via curvelet transform

    NASA Astrophysics Data System (ADS)

    Lech, W.; Wójcik, W.; Kotyra, A.; Popiel, P.; Duk, M.

    2007-04-01

    This paper describes an application of the curvelet transform to the analysis of interferometric images. Compared to the two-dimensional wavelet transform, the curvelet transform has higher time-frequency resolution. The article includes numerical experiments executed on a random interferometric image. In nonlinear approximation, the curvelet transform captures the image with fewer coefficients than the wavelet transform guarantees. Additionally, denoising simulations show that the curvelet transform can be a very good tool for removing noise from images.
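
    The nonlinear-approximation comparison can be illustrated on the wavelet side with PyWavelets; a packaged curvelet transform is less standard in Python, but the curvelet side would follow the same keep-the-largest-k-coefficients pattern. All parameter choices below are illustrative.

    ```python
    import numpy as np
    import pywt

    def wavelet_topk_error(img, k, wavelet="db4", level=4):
        """Reconstruct from the k largest-magnitude coefficients; return relative L2 error."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.sort(np.abs(arr).ravel())[-k]          # k-th largest magnitude
        arr_k = np.where(np.abs(arr) >= thresh, arr, 0.0)  # hard-threshold the rest
        rec = pywt.waverec2(
            pywt.array_to_coeffs(arr_k, slices, output_format="wavedec2"), wavelet)
        rec = rec[:img.shape[0], :img.shape[1]]            # trim possible padding
        return np.linalg.norm(img - rec) / np.linalg.norm(img)

    # Smooth synthetic test image: few coefficients carry most of the energy.
    x = np.linspace(0, 1, 128)
    img = np.outer(np.sin(4 * np.pi * x), np.cos(2 * np.pi * x))
    print(wavelet_topk_error(img, k=500))   # small error from ~3% of coefficients
    ```

    A transform that concentrates energy in fewer coefficients, as curvelets do for curved edges, reaches a given error with a smaller k.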

  7. Performance modeling & simulation of complex systems (A systems engineering design & analysis approach)

    NASA Technical Reports Server (NTRS)

    Hall, Laverne

    1995-01-01

    Modeling of the Multi-mission Image Processing System (MIPS) will be described as an example of the use of a modeling tool to design a distributed system that supports multiple application scenarios. This paper examines: (a) modeling tool selection, capabilities, and operation (namely NETWORK 2.5 by CACI), (b) pointers for building or constructing a model and how the MIPS model was developed, (c) the importance of benchmarking or testing the performance of equipment/subsystems being considered for incorporation into the design/architecture, (d) the essential step of model validation and/or calibration using the benchmark results, (e) sample simulation results from the MIPS model, and (f) how modeling and simulation analysis affected the MIPS design process by having a supportive and informative impact.

  8. Real-time simulation and visualization of volumetric brain deformation for image-guided neurosurgery

    NASA Astrophysics Data System (ADS)

    Ferrant, Matthieu; Nabavi, Arya; Macq, Benoit M. M.; Kikinis, Ron; Warfield, Simon K.

    2001-05-01

    During neurosurgery, the challenge for the neurosurgeon is to remove as much of a tumor as possible without destroying healthy tissue. This can be difficult because healthy and diseased tissue can have the same visual appearance. To this end, and because the surgeon cannot see underneath the brain surface, image-guided neurosurgery systems are increasingly being used. However, during surgery, deformation of the brain occurs (due to brain shift and tumor resection), causing errors in the surgical planning with respect to preoperative imaging. In our previous work, we developed software for capturing the deformation of the brain during neurosurgery. The software also allows preoperative data to be updated according to the intraoperative imaging so as to reflect the shape changes of the brain during surgery. Our goal in this paper was to rapidly visualize and characterize this deformation over the course of surgery with appropriate tools. Therefore, we developed tools allowing the doctor to visualize (in 2D and 3D) the deformations, as well as the stress tensors characterizing the deformation, along with the updated preoperative and intraoperative imaging during the course of surgery. Such tools significantly add to the value of intraoperative imaging and hence could improve surgical outcomes.

  9. Subresolution Displacements in Finite Difference Simulations of Ultrasound Propagation and Imaging.

    PubMed

    Pinton, Gianmarco F

    2017-03-01

    Time domain finite difference simulations are used extensively to simulate wave propagation. They approximate the wave field on a discrete domain with a grid spacing that is typically on the order of a tenth of a wavelength. The smallest displacements that can be modeled by this type of simulation are thus limited to discrete values that are integer multiples of the grid spacing. This paper presents a method to represent continuous and subresolution displacements by varying the impedance of individual elements in a multielement scatterer. It is demonstrated that this method removes the limitations imposed by the discrete grid spacing by generating a continuum of displacements as measured by the backscattered signal. The method is first validated on an ideal perfect correlation case with a single scatterer. It is subsequently applied to a more complex case with a field of scatterers that model an acoustic radiation force-induced displacement used in ultrasound elasticity imaging. A custom finite difference simulation tool is used to simulate propagation from ultrasound imaging pulses in the scatterer field. These simulated transmit-receive events are then beamformed into images, which are tracked with a correlation-based algorithm to determine the displacement. A linear predictive model is developed to analytically describe the relationship between element impedance and backscattered phase shift. The error between model and simulation is λ/1364, where λ is the acoustical wavelength. An iterative method is also presented that reduces the simulation error to λ/5556 over one iteration. The proposed technique therefore offers a computationally efficient method to model continuous subresolution displacements of a scattering medium in ultrasound imaging. This method has applications that include ultrasound elastography, blood flow, and motion tracking. This method also extends generally to finite difference simulations of wave propagation, such as electromagnetic or seismic waves.
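
    The correlation-based displacement tracking mentioned above is commonly implemented as cross-correlation with parabolic sub-sample interpolation of the peak. The sketch below shows that standard estimator (not necessarily the paper's exact algorithm) recovering a known fractional delay.

    ```python
    import numpy as np

    def subsample_shift(ref, shifted):
        """Return the displacement of `shifted` relative to `ref` in samples."""
        n = len(ref)
        xc = np.correlate(shifted - shifted.mean(), ref - ref.mean(), mode="full")
        k = int(np.argmax(xc))                        # integer-lag peak
        y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]      # neighbors of the peak
        delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)  # parabolic vertex offset
        return (k - (n - 1)) + delta

    # Check on a known fractional delay of a smooth modulated pulse:
    t = np.linspace(-1, 1, 401)                       # sample spacing dt = 0.005
    pulse = lambda d: np.exp(-((t - d) ** 2) / 0.01) * np.cos(40 * (t - d))
    print(subsample_shift(pulse(0.0), pulse(0.0123))) # ~2.46 samples (0.0123/dt)
    ```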

  10. Aberration measurement technique based on an analytical linear model of a through-focus aerial image.

    PubMed

    Yan, Guanyong; Wang, Xiangzhao; Li, Sikun; Yang, Jishuo; Xu, Dongbo; Erdmann, Andreas

    2014-03-10

    We propose an in situ aberration measurement technique based on an analytical linear model of through-focus aerial images. The aberrations are retrieved from aerial images of six isolated space patterns, which have the same width but different orientations. The imaging formulas of the space patterns are investigated and simplified, and then an analytical linear relationship between the aerial image intensity distributions and the Zernike coefficients is established. The linear relationship is composed of linear fitting matrices and rotation matrices, which can be calculated numerically in advance and utilized to retrieve Zernike coefficients. Numerical simulations using the lithography simulators PROLITH and Dr.LiTHO demonstrate that the proposed method can measure wavefront aberrations up to Z(37). Experiments on a real lithography tool confirm that our method can monitor lens aberration offset with an accuracy of 0.7 nm.
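
    The analytical linear relationship can be exercised with a toy least-squares retrieval: if the aerial image intensity depends approximately linearly on the Zernike coefficients, I = I0 + A z, then z follows from a least-squares solve. Here A and I0 are random stand-ins for the precomputed fitting matrices, not lithography simulator output.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_pix, n_zern = 600, 33                  # intensity samples, Zernike terms
    A = rng.normal(size=(n_pix, n_zern))     # stand-in for the sensitivity matrix
    I0 = rng.normal(size=n_pix)              # stand-in for the aberration-free image
    z_true = 1e-3 * rng.normal(size=n_zern)  # small aberrations

    # "Measured" through-focus intensities plus a little noise:
    I = I0 + A @ z_true + 1e-5 * rng.normal(size=n_pix)
    z_est, *_ = np.linalg.lstsq(A, I - I0, rcond=None)
    print(np.max(np.abs(z_est - z_true)))    # small retrieval error
    ```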

  11. Monte Carlo modeling of light-tissue interactions in narrow band imaging.

    PubMed

    Le, Du V N; Wang, Quanzeng; Ramella-Roman, Jessica C; Pfefer, T Joshua

    2013-01-01

    Light-tissue interactions that influence vascular contrast enhancement in narrow band imaging (NBI) have not been the subject of extensive theoretical study. In order to elucidate relevant mechanisms in a systematic and quantitative manner we have developed and validated a Monte Carlo model of NBI and used it to study the effect of device and tissue parameters, specifically, imaging wavelength (415 versus 540 nm) and vessel diameter and depth. Simulations provided quantitative predictions of contrast, including up to 125% improvement in small, superficial vessel contrast for 415 over 540 nm. Our findings indicated that absorption, rather than scattering (the mechanism often cited in prior studies), was the dominant factor behind spectral variations in vessel depth-selectivity. Narrow-band images of a tissue-simulating phantom showed good agreement in terms of trends and quantitative values. Numerical modeling represents a powerful tool for elucidating the factors that affect the performance of spectral imaging approaches such as NBI.

  12. Measuring multielectron beam imaging fidelity with a signal-to-noise ratio analysis

    NASA Astrophysics Data System (ADS)

    Mukhtar, Maseeh; Bunday, Benjamin D.; Quoi, Kathy; Malloy, Matt; Thiel, Brad

    2016-07-01

    Java Monte Carlo Simulator for Secondary Electrons (JMONSEL) simulations are used to generate the expected imaging responses of chosen test cases of patterns and defects, with the ability to vary parameters for beam energy, spot size, pixel size, and/or defect material and form factor. The patterns are representative of the design rules for an aggressively scaled FinFET-type design. With these simulated images and the resulting shot noise, a signal-to-noise framework is developed, which relates to defect detection probabilities. Additionally, with this infrastructure, the effects of detection-chain noise and frequency-dependent system response can be assessed, allowing the best recipe parameters to be targeted for multielectron beam inspection validation experiments. Ultimately, these results should lead to insights into how such parameters will impact tool design, including the doses necessary for defect detection and estimates of the scanning speeds needed to achieve high throughput for high-volume manufacturing.
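
    The link between shot-noise-limited SNR and detection probability can be sketched with a simple Gaussian threshold model; this is an illustrative assumption, not the JMONSEL framework itself. With a decision threshold midway between the "no defect" and "defect" mean responses, P(detect) = Phi(SNR/2) and P(false alarm) = Phi(-SNR/2).

    ```python
    import math

    def detection_probability(signal_diff, noise_sigma):
        """(P_detect, P_false_alarm) for a midpoint threshold under Gaussian noise."""
        snr = signal_diff / noise_sigma
        phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
        return phi(snr / 2.0), phi(-snr / 2.0)

    # Shot noise scales as sqrt(dose), so quadrupling the electron dose
    # doubles the SNR:
    for dose in (1.0, 2.0, 4.0):
        p_d, p_fa = detection_probability(10.0 * dose, 2.0 * math.sqrt(dose))
        print(f"dose x{dose:.0f}: P_detect={p_d:.4f}, P_fa={p_fa:.2e}")
    ```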

  13. Computer simulation of turbulent jet structure radiography

    NASA Astrophysics Data System (ADS)

    Kodimer, Kory A.; Parnell, Lynn A.; Nelson, Robert S.; Papin, Patrick J.

    1992-12-01

    Liquid metal combustion chambers are under consideration as power sources for propulsion devices used in undersea vehicles. Characteristics of the reactive jet are studied to gain information about the internal combustion phenomena, including temporal and spatial variation of the jet flame, and the effects of phase changes on both the combustion and imaging processes. A ray tracing program which employs simplified Monte Carlo methods has been developed for use as a predictive tool for radiographic imaging of closed liquid metal combustors. A complex focal spot is characterized by either a monochromatic or polychromatic emission spectrum. For the simplest case, the x-ray detection system is modeled by an integrating planar detector having 100% efficiency. Several simple geometrical shapes are used to simulate jet structures contained within the combustor, such as cylinders, paraboloids, and ellipsoids. The results of the simulation and real time radiographic images are presented and discussed.

  14. Phase contrast imaging simulation and measurements using polychromatic sources with small source-object distances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golosio, Bruno; Carpinelli, Massimo; Masala, Giovanni Luca

    Phase contrast imaging is a technique widely used in synchrotron facilities for nondestructive analysis. Such a technique can also be implemented through microfocus x-ray tube systems. Recently, a relatively new type of compact, quasimonochromatic x-ray source based on Compton backscattering has been proposed for phase contrast imaging applications. In order to plan a phase contrast imaging system setup, to evaluate the system performance, and to choose the experimental parameters that optimize the image quality, it is important to have reliable software for phase contrast imaging simulation. Several software tools have been developed and tested against experimental measurements at synchrotron facilities devoted to phase contrast imaging. However, many approximations that are valid in such conditions (e.g., large source-object distance, small transverse size of the object, plane wave approximation, monochromatic beam, and Gaussian-shaped source focal spot) are not generally suitable for x-ray tubes and other compact systems. In this work we describe a general method for the simulation of phase contrast imaging using polychromatic sources based on a spherical wave description of the beam and on a double-Gaussian model of the source focal spot, we discuss the validity of some possible approximations, and we test the simulations against experimental measurements using a microfocus x-ray tube on three types of polymers (nylon, poly-ethylene-terephthalate, and poly-methyl-methacrylate) at varying source-object distance. It will be shown that, as long as all experimental conditions are described accurately in the simulations, the described method yields results that are in good agreement with experimental measurements.
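
    A minimal monochromatic plane-wave propagation step, via the angular-spectrum method, illustrates the kind of wave-optics kernel such simulations build on; the paper's spherical-wave, polychromatic, double-Gaussian-source treatment is omitted here, and all physical parameters below are assumed, order-of-magnitude values.

    ```python
    import numpy as np

    def angular_spectrum_propagate(field, wavelength, dx, z):
        """Propagate a complex 2-D field a distance z (units consistent with dx)."""
        ny, nx = field.shape
        FX, FY = np.meshgrid(np.fft.fftfreq(nx, dx), np.fft.fftfreq(ny, dx))
        arg = 1.0 / wavelength**2 - FX**2 - FY**2
        kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent cutoff
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

    # Weakly refracting nylon-like fiber as a pure phase object (assumed values):
    wavelength, dx = 5e-11, 1e-6                   # ~25 keV photons, 1 um pixels [m]
    x = (np.arange(1024) - 512) * dx
    fiber = np.sqrt(np.maximum(0.0, (200e-6) ** 2 - x**2))  # projected thickness
    phase = -2 * np.pi / wavelength * 1e-7 * fiber          # refractive decrement ~1e-7
    field0 = np.exp(1j * phase)[None, :] * np.ones((4, 1))
    I = np.abs(angular_spectrum_propagate(field0, wavelength, dx, 0.5)) ** 2
    # I now shows the bright/dark fringes at the fiber edges characteristic of
    # propagation-based phase contrast.
    print(I.min(), I.max())
    ```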

  15. Exploitation of realistic computational anthropomorphic phantoms for the optimization of nuclear imaging acquisition and processing protocols.

    PubMed

    Loudos, George K; Papadimitroulas, Panagiotis G; Kagadis, George C

    2014-01-01

    Monte Carlo (MC) simulations play a crucial role in nuclear medical imaging since they can provide the ground truth for clinical acquisitions by integrating and quantifying all physical parameters that affect image quality. Over the last decade a number of realistic computational anthropomorphic models have been developed to serve imaging, as well as other biomedical engineering applications. The combination of MC techniques with realistic computational phantoms can provide a powerful tool for pre- and post-processing in imaging, data analysis and dosimetry. This work aims to create a global database of simulated Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) exams; the methodology, as well as its first elements, are presented. Simulations are performed using the well-validated GATE open-source toolkit, standard anthropomorphic phantoms, and activity distributions of various radiopharmaceuticals derived from the literature. The resulting images, projections and sinograms of each study are provided in the database and can be further exploited to evaluate processing and reconstruction algorithms. Patient studies with different characteristics are included in the database, and different computational phantoms were tested for the same acquisitions. These include the XCAT, Zubal and the Virtual Family, some of which are used for the first time in nuclear imaging. The database will be freely available, and our current work is towards its extension by simulating additional clinical pathologies.

  16. The Development and Comparison of Molecular Dynamics Simulation and Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Chen, Jundong

    2018-03-01

    Molecular dynamics is an integrated technology that combines physics, mathematics and chemistry. The molecular dynamics method is a computer simulation technique and a powerful tool for studying condensed matter systems. The technique not only yields the trajectories of the atoms, but also allows observation of the microscopic details of atomic motion. By studying the numerical integration algorithms used in molecular dynamics simulation, we can analyze the microstructure and the motion of particles and relate them to the macroscopic properties of the material, and we can also study the relationship between interactions and macroscopic properties more conveniently. The Monte Carlo simulation, similar to molecular dynamics, is a tool for studying the nature of micro-molecules and particles. In this paper, the theoretical background of computer numerical simulation is introduced, and the specific methods of numerical integration are summarized, including the Verlet, leap-frog and velocity Verlet methods. At the same time, the method and principle of Monte Carlo simulation are introduced. Finally, similarities and differences between Monte Carlo simulation and molecular dynamics simulation are discussed.
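
    As a concrete example of one of the integrators summarized above, here is the velocity Verlet method applied to a one-dimensional harmonic oscillator (a minimal sketch; m = k = 1 are assumed). The same half-kick/drift/half-kick steps apply unchanged to many-particle force fields.

    ```python
    import numpy as np

    def velocity_verlet(x, v, force, dt, n_steps):
        a = force(x)
        traj = [x]
        for _ in range(n_steps):
            v = v + 0.5 * dt * a          # half kick
            x = x + dt * v                # drift
            a = force(x)                  # recompute forces at new positions
            v = v + 0.5 * dt * a          # second half kick
            traj.append(x)
        return np.array(traj), v

    xs, v = velocity_verlet(x=1.0, v=0.0, force=lambda x: -x, dt=0.01, n_steps=1000)
    # The energy 0.5*v**2 + 0.5*x**2 stays near 0.5, illustrating the long-term
    # stability that makes the method a workhorse of molecular dynamics.
    print(0.5 * v**2 + 0.5 * xs[-1]**2)
    ```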

  17. [Animal experimentation, computer simulation and surgical research].

    PubMed

    Carpentier, Alain

    2009-11-01

    We live in a digital world. In medicine, computers provide new tools for data collection, imaging, and treatment. During the research and development of complex technologies and devices such as artificial hearts, computer simulation can provide more reliable information than experimentation on large animals. In these specific settings, animal experimentation should serve more to validate computer models of complex devices than to demonstrate their reliability.

  18. Scatterometry or imaging overlay: a comparative study

    NASA Astrophysics Data System (ADS)

    Hsu, Simon C. C.; Pai, Yuan Chi; Chen, Charlie; Yu, Chun Chi; Hsing, Henry; Wu, Hsing-Chien; Kuo, Kelly T. L.; Amir, Nuriel

    2015-03-01

    Most fabrication facilities today use imaging overlay measurement methods, which have been the industry's reliable workhorse for decades. In the last few years, third-generation Scatterometry Overlay (SCOL™) or Diffraction Based Overlay (DBO-1) technology was developed, alongside another DBO technology (DBO-2). This development raised the question of where DBO technology should be implemented for overlay measurements. Scatterometry has been adopted for high volume production in only a few cases, always with imaging as a backup, but scatterometry overlay is considered by many to be the technology of the future. In this paper we compare imaging overlay and DBO technologies by means of measurements and simulations. We outline issues and sensitivities for both technologies, providing guidelines for the best implementation of each. For several of the presented cases, data from two different DBO technologies are compared as well: the first with pupil data access (DBO-1) and the other without pupil data access (DBO-2). Key indicators of overlay measurement quality include layer coverage, accuracy, TMU, process robustness and robustness to process changes. Measurement data from real cases across the industry are compared, and the conclusions are also backed by simulations. Accuracy is benchmarked against reference overlay and self-consistency, showing good results for imaging and DBO-1 technology. Process sensitivity and metrology robustness are mostly simulated with MTD (Metrology Target Designer), comparing the same process variations for both technologies. The experimental data presented in this study were taken on ten advanced-node layers and three production-node layers, covering all phases of the IC fabrication process (FEOL, MEOL and BEOL). The metrology tool used for most of the study is KLA-Tencor's Archer 500LCM system (scatterometry-based and imaging-based measurement technologies on the same tool); another type of tool was used for the DBO-2 measurements. Finally, we conclude that both imaging overlay technology and DBO-1 technology are fully successful and have a valid roadmap for the next few design nodes, with some use cases better suited to one or the other measurement technology. Having both imaging and DBO technology options available in parallel allows overlay engineers to mix and match overlay measurement strategies, providing backup when encountering difficulties with one of the technologies and benefiting from the best of both technologies for every use case.

  19. Functional assessment of coronary artery disease by intravascular ultrasound and computational fluid dynamics simulation.

    PubMed

    Carrizo, Sebastián; Xie, Xinzhou; Peinado-Peinado, Rafael; Sánchez-Recalde, Angel; Jiménez-Valero, Santiago; Galeote-Garcia, Guillermo; Moreno, Raúl

    2014-10-01

    Clinical trials have shown that functional assessment of coronary stenosis by fractional flow reserve (FFR) improves clinical outcomes. Intravascular ultrasound (IVUS) complements conventional angiography and is a powerful tool for assessing atherosclerotic plaques and guiding percutaneous coronary intervention (PCI). Computational fluid dynamics (CFD) simulation represents a novel method for the functional assessment of coronary flow; a CFD simulation can be computed from the data normally acquired during IVUS imaging. A case of coronary heart disease studied with FFR and IVUS, before and after PCI, is presented. A three-dimensional model was constructed based on IVUS images, to which CFD was applied. A discussion of the literature concerning the clinical utility of CFD simulation is provided.

  20. Mobile phone-based evaluation of latent tuberculosis infection: Proof of concept for an integrated image capture and analysis system.

    PubMed

    Naraghi, Safa; Mutsvangwa, Tinashe; Goliath, René; Rangaka, Molebogeng X; Douglas, Tania S

    2018-05-08

    The tuberculin skin test is the most widely used method for detecting latent tuberculosis infection in adults and active tuberculosis in children. We present the development of a mobile-phone-based screening tool for measuring the tuberculin skin test induration. The tool makes use of a mobile application developed on the Android platform to capture images of an induration, photogrammetric reconstruction using Agisoft PhotoScan to reconstruct the induration in 3D, and 3D measurement of the induration with the aid of functions from the Python programming language. The system enables capture of images by the person being screened for latent tuberculosis infection. Measurement precision was tested using a 3D-printed induration. Real-world use of the tool was simulated by applying it to a set of mock skin indurations created by a make-up artist, and the performance of the tool was evaluated. The usability of the application was assessed with the aid of a questionnaire completed by participants. The tool was found to measure the 3D-printed induration with greater precision than the current ruler-and-pen method, as indicated by the lower standard deviation produced (0.3 mm versus 1.1 mm in the literature). There was high correlation between manual and algorithmic measurement of the mock skin indurations. The height of the skin induration and the definition of its margins were found to influence the accuracy of 3D reconstruction, and therefore the measurement error, under simulated real-world conditions. Based on assessment of the user experience in capturing images, a simplified user interface would benefit wide-spread implementation. The mobile application shows good agreement with direct measurement. It provides an alternative method for measuring tuberculin skin test indurations and may remove the need for an in-person follow-up visit after test administration, thus improving latent tuberculosis infection screening throughput.

  1. Simulations of Aperture Synthesis Imaging Radar for the EISCAT_3D Project

    NASA Astrophysics Data System (ADS)

    La Hoz, C.; Belyey, V.

    2012-12-01

    EISCAT_3D is a project to build the next generation of incoherent scatter radars endowed with multiple 3-dimensional capabilities that will replace the current EISCAT radars in Northern Scandinavia. Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in 3-dimensions that includes sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Natural Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. To demonstrate the feasibility of the antenna configurations and the imaging inversion algorithms a simulation of synthetic incoherent scattering data has been performed. The simulation algorithm incorporates the ability to control the background plasma parameters with non-homogeneous, non-stationary components over an extended 3-dimensional space. Control over the positions of a number of separated receiving antennas, their signal-to-noise-ratios and arriving phases allows realistic simulation of a multi-baseline interferometric imaging radar system. The resulting simulated data is fed into various inversion algorithms. This simulation package is a powerful tool to evaluate various antenna configurations and inversion algorithms. Results applied to realistic design alternatives of EISCAT_3D will be described.
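
    The forward model at the heart of aperture synthesis imaging can be sketched in a few lines: each receiver pair (baseline) samples one spatial-frequency component of the scattered brightness. The geometry, wavelength, and source parameters below are toy values, not EISCAT_3D parameters.

    ```python
    import numpy as np

    lam = 1.3                                            # wavelength [m] (toy value)
    ants = np.array([[0.0, 0.0], [30.0, 0.0],
                     [0.0, 45.0], [60.0, 25.0]])         # receiver positions [m]
    src_dir = np.array([[0.001, 0.002], [-0.003, 0.001]])  # direction cosines (l, m)
    src_amp = np.array([1.0, 0.6])                       # scatterer brightness

    # Visibility on baseline (j, k): V = sum_s A_s * exp(-2*pi*i*(u*l_s + v*m_s))
    for j in range(len(ants)):
        for k in range(j + 1, len(ants)):
            u, v = (ants[k] - ants[j]) / lam             # baseline in wavelengths
            V = np.sum(src_amp * np.exp(-2j * np.pi * (u * src_dir[:, 0]
                                                       + v * src_dir[:, 1])))
            print(f"baseline {j}-{k}: |V|={abs(V):.3f}, phase={np.angle(V):+.3f} rad")
    # Imaging inversion then amounts to estimating the brightness distribution
    # from this sparse set of Fourier samples.
    ```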

  2. Simulation of Image Performance Characteristics of the Landsat Data Continuity Mission (LDCM) Thermal Infrared Sensor (TIRS)

    NASA Technical Reports Server (NTRS)

    Schott, John; Gerace, Aaron; Brown, Scott; Gartley, Michael; Montanaro, Matthew; Reuter, Dennis C.

    2012-01-01

    The next Landsat satellite, which is scheduled for launch in early 2013, will carry two instruments: the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS). Significant design changes over previous Landsat instruments have been made to these sensors to potentially enhance the quality of Landsat image data. TIRS, which is the focus of this study, is a dual-band instrument that uses a push-broom style architecture to collect data. To help understand the impact of design trades during instrument build, an effort was initiated to model TIRS imagery. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool was used to produce synthetic "on-orbit" TIRS data with detailed radiometric, geometric, and digital image characteristics. This work presents several studies that used DIRSIG simulated TIRS data to test the impact of engineering performance data on image quality in an effort to determine if the image data meet specifications or, in the event that they do not, to determine if the resulting image data are still acceptable.

  3. Reduction of artifacts in computer simulation of breast Cooper's ligaments

    NASA Astrophysics Data System (ADS)

    Pokrajac, David D.; Kuperavage, Adam; Maidment, Andrew D. A.; Bakic, Predrag R.

    2016-03-01

    Anthropomorphic software breast phantoms have been introduced as a tool for quantitative validation of breast imaging systems. The efficacy of the validation results depends on the realism of the phantom images. The recursive partitioning algorithm based upon octree simulation has been demonstrated to be versatile and capable of efficiently generating large numbers of phantoms to support virtual clinical trials of breast imaging. Previously, we observed specific artifacts (here labeled "dents") on the boundaries of simulated Cooper's ligaments. In this work, we demonstrate that these dents result from the approximate determination of the closest simulated ligament to an examined subvolume (i.e., octree node) of the phantom. We propose a modification of the algorithm that determines the closest ligament by considering a pre-specified number of neighboring ligaments, selected based upon the functions that govern the shape of the ligaments simulated in the subvolume. We have qualitatively and quantitatively demonstrated that the modified algorithm can eliminate or reduce dent artifacts in software phantoms. In a proof-of-concept example, we simulated a 450 ml phantom with 333 compartments at 100 micrometer resolution. After the proposed modification, we corrected 148,105 dents, with an average size of 5.27 voxels (5.27 nl). We have also qualitatively analyzed the corresponding improvement in the appearance of simulated mammographic images. The proposed algorithm reduces the linear and star-like artifacts in simulated phantom projections that can be attributed to dents. Analysis of a larger number of phantoms is ongoing.
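
    The neighbor-aware assignment described above can be illustrated with a k-nearest-seed query; the snippet below uses plain (optionally weighted) Euclidean distances as stand-ins for the ligament shape functions, and random seed points in place of a real phantom.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(6)
    seeds = rng.uniform(0, 1, size=(333, 3))     # stand-in compartment seed points
    tree = cKDTree(seeds)

    def compartment_of(point, k=8, weights=None):
        """Assign `point` to a compartment, ranking its k nearest seeds."""
        d, idx = tree.query(point, k=k)
        # A full phantom would evaluate the ligament shape functions for all k
        # candidates; weighted distances mimic compartments of different sizes.
        w = weights[idx] if weights is not None else 1.0
        return idx[np.argmin(d * w)]

    print(compartment_of(np.array([0.5, 0.5, 0.5])))
    ```

    Checking several candidate seeds instead of a single approximate nearest one is what removes the boundary misassignments that appear as dents.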

  4. Measurements and simulations analysing the noise behaviour of grating-based X-ray phase-contrast imaging

    NASA Astrophysics Data System (ADS)

    Weber, T.; Bartl, P.; Durst, J.; Haas, W.; Michel, T.; Ritter, A.; Anton, G.

    2011-08-01

    In the last decades, phase-contrast imaging using a Talbot-Lau grating interferometer has become possible even with a low-brilliance X-ray source. With its potential for increasing soft-tissue contrast, this method is on its way into medical imaging. For this purpose, knowledge of the underlying physics of the technique is necessary. With this paper, we would like to contribute to the understanding of grating-based phase-contrast imaging by presenting results of measurements and simulations regarding the noise behaviour of the differential phases. The measurements were done using a microfocus X-ray tube with a hybrid, photon-counting, semiconductor Medipix2 detector. The additional simulations were performed with our in-house developed phase-contrast simulation tool "SPHINX", combining both wave and particle contributions of the simulated photons. The results obtained by both of these methods show the same behaviour: increasing the number of photons leads to a linear decrease of the standard deviation of the phase. The number of phase steps used has no influence on the standard deviation if the total number of photons is held constant. Furthermore, the probability density function (pdf) of the reconstructed differential phases was analysed. It turned out that the so-called von Mises distribution is the physically correct pdf, which was also confirmed by measurements. This information advances the understanding of grating-based phase-contrast imaging and can be used to improve image quality.
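
    A toy phase-stepping simulation reproduces the reported noise behaviour: the standard deviation of the retrieved differential phase shrinks as the photon count grows (as 1/sqrt(N) in this simple model) and is insensitive to the number of phase steps at a fixed total photon count. Visibility and phase values below are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def retrieved_phase_std(n_photons_total, n_steps, phi=0.7, vis=0.3, n_trials=2000):
        x = 2 * np.pi * np.arange(n_steps) / n_steps
        mean_curve = (n_photons_total / n_steps) * (1 + vis * np.cos(x + phi))
        counts = rng.poisson(mean_curve, size=(n_trials, n_steps))  # shot noise
        coeff = (counts * np.exp(-1j * x)).sum(axis=1)  # first Fourier component
        return np.angle(coeff).std()                    # spread of retrieved phase

    for N in (1e3, 4e3, 16e3):
        print(N, retrieved_phase_std(N, n_steps=8))     # std halves per 4x photons
    print(retrieved_phase_std(4e3, n_steps=16))         # ~same as 8 steps at equal N
    ```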

  5. Usefulness of ERTS-1 satellite imagery as a data-gathering tool by resource managers in the Bureau of Land Management. [Arizona, California, Oregon, and Alaska

    NASA Technical Reports Server (NTRS)

    Bentley, R. G.

    1974-01-01

    ERTS-1 satellite imagery can be an effective data-gathering tool for resource managers. Techniques are developed which allow managers to visually analyze simulated color infrared composite images to map perennial and ephemeral (annual) plant communities. Tentative results indicate that ephemeral plant growth and development and potential to produce forage can be monitored.

  6. Quantitative Segmentation of Fluorescence Microscopy Images of Heterogeneous Tissue: Application to the Detection of Residual Disease in Tumor Margins

    PubMed Central

    Mueller, Jenna L.; Harmany, Zachary T.; Mito, Jeffrey K.; Kennedy, Stephanie A.; Kim, Yongbaek; Dodd, Leslie; Geradts, Joseph; Kirsch, David G.; Willett, Rebecca M.; Brown, J. Quincy; Ramanujam, Nimmi

    2013-01-01

    Purpose To develop a robust tool for quantitative in situ pathology that allows visualization of heterogeneous tissue morphology and segmentation and quantification of image features. Materials and Methods Tissue excised from a genetically engineered mouse model of sarcoma was imaged using a subcellular resolution microendoscope after topical application of a fluorescent anatomical contrast agent: acriflavine. An algorithm based on sparse component analysis (SCA) and the circle transform (CT) was developed for image segmentation and quantification of distinct tissue types. The accuracy of our approach was quantified through simulations of tumor and muscle images. Specifically, tumor, muscle, and tumor+muscle tissue images were simulated because these tissue types were most commonly observed in sarcoma margins. Simulations were based on tissue characteristics observed in pathology slides. The potential clinical utility of our approach was evaluated by imaging excised margins and the tumor bed in a cohort of mice after surgical resection of sarcoma. Results Simulation experiments revealed that SCA+CT achieved the lowest errors for larger nuclear sizes and for higher contrast ratios (nuclei intensity/background intensity). For imaging of tumor margins, SCA+CT effectively isolated nuclei from tumor, muscle, adipose, and tumor+muscle tissue types. Differences in density were correctly identified with SCA+CT in a cohort of ex vivo and in vivo images, thus illustrating the diagnostic potential of our approach. Conclusion The combination of a subcellular-resolution microendoscope, acriflavine staining, and SCA+CT can be used to accurately isolate nuclei and quantify their density in anatomical images of heterogeneous tissue. PMID:23824589

  7. Quantitative Segmentation of Fluorescence Microscopy Images of Heterogeneous Tissue: Application to the Detection of Residual Disease in Tumor Margins.

    PubMed

    Mueller, Jenna L; Harmany, Zachary T; Mito, Jeffrey K; Kennedy, Stephanie A; Kim, Yongbaek; Dodd, Leslie; Geradts, Joseph; Kirsch, David G; Willett, Rebecca M; Brown, J Quincy; Ramanujam, Nimmi

    2013-01-01

    To develop a robust tool for quantitative in situ pathology that allows visualization of heterogeneous tissue morphology and segmentation and quantification of image features. Tissue excised from a genetically engineered mouse model of sarcoma was imaged using a subcellular resolution microendoscope after topical application of a fluorescent anatomical contrast agent: acriflavine. An algorithm based on sparse component analysis (SCA) and the circle transform (CT) was developed for image segmentation and quantification of distinct tissue types. The accuracy of our approach was quantified through simulations of tumor and muscle images. Specifically, tumor, muscle, and tumor+muscle tissue images were simulated because these tissue types were most commonly observed in sarcoma margins. Simulations were based on tissue characteristics observed in pathology slides. The potential clinical utility of our approach was evaluated by imaging excised margins and the tumor bed in a cohort of mice after surgical resection of sarcoma. Simulation experiments revealed that SCA+CT achieved the lowest errors for larger nuclear sizes and for higher contrast ratios (nuclei intensity/background intensity). For imaging of tumor margins, SCA+CT effectively isolated nuclei from tumor, muscle, adipose, and tumor+muscle tissue types. Differences in density were correctly identified with SCA+CT in a cohort of ex vivo and in vivo images, thus illustrating the diagnostic potential of our approach. The combination of a subcellular-resolution microendoscope, acriflavine staining, and SCA+CT can be used to accurately isolate nuclei and quantify their density in anatomical images of heterogeneous tissue.

  8. Virtual reality simulators for gastrointestinal endoscopy training

    PubMed Central

    Triantafyllou, Konstantinos; Lazaridis, Lazaros Dimitrios; Dimitriadis, George D

    2014-01-01

    The use of simulators as educational tools for medical procedures is spreading rapidly, and many efforts have been made to implement them in gastrointestinal endoscopy training. Endoscopy simulation training has been suggested as a way to ensure patient safety while positively influencing the trainees' learning curve. Virtual simulators are the most promising tool among all available types of simulators. These integrated modalities offer a human-like endoscopy experience by combining virtual images of the gastrointestinal tract with haptic realism using a customized endoscope. From their first steps in the 1980s until today, research involving virtual endoscopic simulators can be divided into two categories: investigation of the impact of virtual simulator training on acquiring endoscopy skills, and measurement of competence. Emphasis should also be given to the financial impact of their implementation in endoscopy, including the cost of these state-of-the-art simulators and the potential economic benefits from their usage. Advances in technology will contribute to the upgrade of existing models and the development of new ones, while further research should be carried out to discover new fields of application. PMID:24527175

  9. Simulations For Investigating the Contrast Mechanism of Biological Cells with High Frequency Scanning Acoustic Microscopy

    NASA Astrophysics Data System (ADS)

    Juntarapaso, Yada

    Scanning acoustic microscopy (SAM) is one of the most powerful techniques for nondestructive evaluation and a promising tool for characterizing the elastic properties of biological tissues and cells. Exploring single cells is important because there is a connection between single-cell biomechanics and human cancer. SAM has been accepted and extensively utilized for acoustic cellular and tissue imaging, including measurements of the mechanical and elastic properties of biological specimens. It offers superb advantages in that it is non-invasive, can measure the mechanical properties of biological cells or tissues, and requires no fixation or chemical staining. The first objective of this research is to develop a program for simulating the images and contrast mechanism obtained by high-frequency SAM. Computer simulation algorithms based on MATLAB® were built for simulating the images and contrast mechanisms. The mechanical properties of HeLa and MCF-7 cells were computed from measurements of the output signal amplitude as a function of distance from the focal plane of the acoustic lens, known as V(z). Algorithms for simulating V(z) responses involved the calculation of the reflectance function and were created based on ray theory and wave theory. The second objective is to design transducer arrays for SAM. Theoretical simulations of high-frequency ultrasound array designs, based on the Field II© program, were performed to enhance image resolution and volumetric imaging capabilities. Phased-array beamforming and dynamic apodization and focusing were employed in the simulations. The new transducer array design will advance the state of the art in SAM performance through electronic scanning, potentially providing 4-D images of the specimen.
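
    The V(z) relation underlying this kind of analysis can be evaluated numerically from the standard integral V(z) = ∫ P²(θ) R(θ) exp(2ikz cosθ) sinθ cosθ dθ. The pupil and reflectance below are illustrative stand-ins; a real analysis would derive R(θ) from the acoustic properties of the cell and substrate.

    ```python
    import numpy as np

    k = 2 * np.pi / 1.5e-6                 # wavenumber at ~1 GHz in water [1/m]
    theta = np.linspace(0, np.radians(50), 2000)
    P2 = np.cos(theta) ** 2                # illustrative lens pupil function
    R = np.where(theta < np.radians(30), 0.6, 0.9) * np.exp(1j * 0.3 * theta)

    def V(z):
        integrand = (P2 * R * np.exp(2j * k * z * np.cos(theta))
                     * np.sin(theta) * np.cos(theta))
        return np.trapz(integrand, theta)

    zs = np.linspace(-40e-6, 0, 400)       # defocus toward the lens [m]
    Vz = np.abs(np.array([V(z) for z in zs]))
    print(Vz.min(), Vz.max())
    # The period of the dips in |V(z)| encodes the leaky Rayleigh wave velocity,
    # which is how elastic properties are extracted from the measurement.
    ```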

  10. A High Fidelity Approach to Data Simulation for Space Situational Awareness Missions

    NASA Astrophysics Data System (ADS)

    Hagerty, S.; Ellis, H., Jr.

    2016-09-01

    Space Situational Awareness (SSA) is vital to maintaining our Space Superiority. A high fidelity, time-based simulation tool, PROXOR™ (Proximity Operations and Rendering), supports SSA by generating realistic mission scenarios including sensor frame data with corresponding truth. This is a unique and critical tool for supporting mission architecture studies, new capability (algorithm) development, current/future capability performance analysis, and mission performance prediction. PROXOR™ provides a flexible architecture for sensor and resident space object (RSO) orbital motion and attitude control that simulates SSA, rendezvous and proximity operations scenarios. The major elements of interest are based on the ability to accurately simulate all aspects of the RSO model, viewing geometry, imaging optics, sensor detector, and environmental conditions. These capabilities enhance the realism of mission scenario models and generated mission image data. As an input, PROXOR™ uses a library of 3-D satellite models containing 10+ satellites, including low-earth orbit (e.g., DMSP) and geostationary (e.g., Intelsat) spacecraft, where the spacecraft surface properties are those of actual materials and include Phong and Maxwell-Beard bidirectional reflectance distribution function (BRDF) coefficients for accurate radiometric modeling. We calculate the inertial attitude, the changing solar and Earth illumination angles of the satellite, and the viewing angles from the sensor as we propagate the RSO in its orbit. The synthetic satellite image is rendered at high resolution and aggregated to the focal plane resolution resulting in accurate radiometry even when the RSO is a point source. The sensor model includes optical effects from the imaging system [point spread function (PSF) includes aberrations, obscurations, support structures, defocus], detector effects (CCD blooming, left/right bias, fixed pattern noise, image persistence, shot noise, read noise, and quantization noise), and environmental effects (radiation hits with selectable angular distributions and 4-layer atmospheric turbulence model for ground based sensors). We have developed an accurate flash Light Detection and Ranging (LIDAR) model that supports reconstruction of 3-dimensional information on the RSO. PROXOR™ contains many important imaging effects such as intra-frame smear, realized by oversampling the image in time and capturing target motion and jitter during the integration time.

  11. Implementing a prototyping network for injection moulded imaging lenses in Finland

    NASA Astrophysics Data System (ADS)

    Keränen, K.; Mäkinen, J.-T.; Pääkkönen, E. J.; Koponen, M.; Karttunen, M.; Hiltunen, J.; Karioja, P.

    2005-10-01

    A network for prototyping imaging lenses using injection moulding was established in Finland. The network consists of several academic and industrial partners capable of designing, processing and characterising imaging lenses produced by injection moulding technology. In order to validate the operation of the network, a demonstrator lens was produced. The process steps included lens specification, design and modelling, material selection, mould tooling, moulding process simulation, injection moulding and characterisation. A magnifying imaging singlet lens to be used as an add-on in a camera phone was selected as the demonstrator. The design of the add-on lens proved to be somewhat challenging, but a double-aspheric singlet lens design nearly fulfilling the requirement specification was produced. In the material selection task, the overall characteristics profile of polymethyl methacrylate (PMMA) was found to fit the pilot case best: it is a low-cost material with good moulding properties, and it was therefore selected as the material for the pilot lens. The lens mould design was performed using I-DEAS and tested using MoldFlow 3D injection moulding simulation software. The simulations predicted the achievable lens quality for a two-cavity mould design. The first cavity was tooled directly into the mould plate, and the second cavity was made by tooling separate insert pieces for the mould. The mould material was steel and the inserts were made from Moldmax copper alloy. Parts were tooled with high-speed milling machines, and the insert pieces were hand-polished after tooling. Prototype lenses were injection moulded using two PMMA grades, 6N and 7N, and different process parameters were explored in the injection moulding test runs. The prototypes were characterised by measuring the mechanical dimensions, surface profile, roughness and MTF of the lenses. Characterisation showed that the lens surface RMS roughness was 30-50 nm and the profile deviated 5 μm from the design at a distance of 0.3 mm from the lens vertex. These manufacturing defects caused the measured MTF values to be lower than designed. The overall lens quality, however, was adequate to demonstrate the concept successfully. Through the implementation of the demonstrator lens we could effectively test the different stages of the manufacturing process, obtain information about process component weight and risk factors, and validate the overall performance of the network.

  12. Impact of Image Noise on Gamma Index Calculation

    NASA Astrophysics Data System (ADS)

    Chen, M.; Mo, X.; Parnell, D.; Olivera, G.; Galmarini, D.; Lu, W.

    2014-03-01

    Purpose: The Gamma Index defines an asymmetric metric between the evaluated image and the reference image. It provides a quantitative comparison that can be used to indicate sample-wise pass/fail on the agreement of the two images. The Gamma passing/failing rate has become an important clinical evaluation tool. However, the presence of noise in the evaluated and/or reference images may change the Gamma Index, hence the passing/failing rate, and, further, clinical decisions. In this work, we systematically studied the impact of image noise on the Gamma Index calculation. Methods: We used both analytic formulation and numerical calculations in our study. The numerical calculations included simulations and clinical images. Three different noise scenarios were studied in the simulations: noise in the reference images only, in the evaluated images only, and in both. Both white and spatially correlated noise of various magnitudes was simulated. For clinical images of various noise levels, the Gamma Index of measurement against calculation, of calculation against measurement, and of measurement against measurement was evaluated. Results: Numerical calculations for both the simulation and clinical data agreed with the analytic formulations, and the clinical data agreed with the simulations. For the Gamma Index of measurement against calculation, the distribution has an increased mean and an increased standard deviation as the noise increases. On the contrary, for the Gamma Index of calculation against measurement, the distribution has a decreased mean and a stabilized standard deviation as the noise increases. White noise has a greater impact on the Gamma Index than spatially correlated noise. Conclusions: Noise has a significant impact on the Gamma Index calculation, and the impact is asymmetric. The Gamma Index should be reported along with the noise levels in both reference and evaluated images. Reporting the Gamma Index with the roles of the reference and evaluated images switched, or some composite metric, would be good practice.
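
    For reference, a minimal one-dimensional version of the Gamma Index computation is sketched below; real implementations work in 2D/3D with interpolation. The dose-difference and distance-to-agreement criteria are typical, assumed values, and the asymmetry studied in the paper is visible simply by swapping the two images.

    ```python
    import numpy as np

    def gamma_index(ref_dose, eval_dose, x, dd=0.03, dta=3.0):
        """Per-point gamma of `eval_dose` against `ref_dose` at positions x [mm]."""
        gammas = np.empty_like(ref_dose)
        for i, (xi, di) in enumerate(zip(x, ref_dose)):
            dist2 = ((x - xi) / dta) ** 2                      # distance term
            dose2 = ((eval_dose - di) / (dd * ref_dose.max())) ** 2  # dose term
            gammas[i] = np.sqrt(np.min(dist2 + dose2))
        return gammas          # a point passes where gamma <= 1

    x = np.linspace(0, 100, 201)                  # positions [mm]
    ref = np.exp(-((x - 50) / 20) ** 2)           # reference profile
    ev = (np.exp(-((x - 51) / 20) ** 2)
          + 0.01 * np.random.default_rng(3).normal(size=x.size))  # shifted + noisy
    g = gamma_index(ref, ev, x)
    print(f"pass rate: {np.mean(g <= 1.0):.1%}")
    # Swapping ref and ev changes the result, which is exactly the asymmetry
    # (and the noise sensitivity) analyzed in the paper.
    ```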

  13. Software phantom with realistic speckle modeling for validation of image analysis methods in echocardiography

    NASA Astrophysics Data System (ADS)

    Law, Yuen C.; Tenbrinck, Daniel; Jiang, Xiaoyi; Kuhlen, Torsten

    2014-03-01

    Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, render the majority of standard image analysis methods suboptimal. Furthermore, validation of adapted computer vision methods proves difficult due to missing ground truth information. There is no widely accepted software phantom in the community, and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy of the resulting textures.
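
    The convolution speckle model that such phantoms typically build on can be sketched directly: diffuse scatterers convolved with a point spread function yield an envelope with Rayleigh statistics in fully developed speckle. The PSF parameters below are illustrative; tissue-specific speckle models would modulate scatterer density and amplitude.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(4)
    nz, nx = 512, 256
    scatterers = rng.normal(size=(nz, nx))          # diffuse scatterer amplitudes

    z = np.arange(-32, 33)[:, None] * 0.05          # axial coordinate [mm]
    xl = np.arange(-16, 17)[None, :] * 0.2          # lateral coordinate [mm]
    f0 = 5.0                                        # pulse frequency [cycles/mm]
    envelope2d = np.exp(-z**2 / 0.2 - xl**2 / 2.0)  # Gaussian PSF envelope
    rf = fftconvolve(scatterers, envelope2d * np.cos(2 * np.pi * f0 * z), mode="same")
    quad = fftconvolve(scatterers, envelope2d * np.sin(2 * np.pi * f0 * z), mode="same")
    envelope = np.abs(rf + 1j * quad)               # quadrature (B-mode) envelope

    # Fully developed speckle has the Rayleigh signature mean/std ~ 1.91:
    print(envelope.mean() / envelope.std())
    ```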

  14. Computerized assessment of body image in anorexia nervosa and bulimia nervosa: comparison with standardized body image assessment tool.

    PubMed

    Caspi, Asaf; Amiaz, Revital; Davidson, Noa; Czerniak, Efrat; Gur, Eitan; Kiryati, Nahum; Harari, Daniel; Furst, Miriam; Stein, Daniel

    2017-02-01

    Body image disturbances are a prominent feature of eating disorders (EDs). Our aim was to test and evaluate a computerized assessment of body image (CABI), to compare the body image disturbances in different ED types, and to assess the factors affecting body image. The body image of 22 individuals undergoing inpatient treatment with restricting anorexia nervosa (AN-R), 22 with binge/purge AN (AN-B/P), 20 with bulimia nervosa (BN), and 41 healthy controls was assessed using the Contour Drawing Rating Scale (CDRS), the CABI, which simulated the participants' self-image at different levels of weight change, and the Eating Disorder Inventory-2-Body Dissatisfaction (EDI-2-BD) scale. Severity of depression and anxiety was also assessed. Significant differences were found among the three scales assessing body image, although most of their dimensions differentiated between patients with EDs and controls. Our findings support the use of the CABI in the comparison of body image disturbances in patients with EDs vs. controls. Moreover, the use of different assessment tools allows for a better understanding of the differences in body image disturbances in different ED types.

  15. A Wigner-based ray-tracing method for imaging simulations

    NASA Astrophysics Data System (ADS)

    Mout, B. M.; Wick, M.; Bociort, F.; Urbach, H. P.

    2015-09-01

    The Wigner Distribution Function (WDF) forms an alternative representation of the optical field. It can be a valuable tool for understanding and classifying optical systems. Furthermore, it possesses properties that make it suitable for optical simulations: both the intensity and the angular spectrum can be easily obtained from the WDF and the WDF remains constant along the paths of paraxial geometrical rays. In this study we use these properties by implementing a numerical Wigner-Based Ray-Tracing method (WBRT) to simulate diffraction effects at apertures in free-space and in imaging systems. Both paraxial and non-paraxial systems are considered and the results are compared with numerical implementations of the Rayleigh-Sommerfeld and Fresnel diffraction integrals to investigate the limits of the applicability of this approach. The results of the different methods are in good agreement when simulating free-space diffraction or calculating point spread functions (PSFs) for aberration-free imaging systems, even at numerical apertures exceeding the paraxial regime. For imaging systems with aberrations, the PSFs of WBRT diverge from the results using diffraction integrals. For larger aberrations WBRT predicts negative intensities, suggesting that this model is unable to deal with aberrations.

  16. Simulating Lattice Image of Suspended Graphene Taken by Helium Ion Microscopy

    NASA Astrophysics Data System (ADS)

    Miyamoto, Yoshiyuki; Zhang, Hong; Rubio, Angel

    2013-03-01

    Atomic-scale imaging at the nanoscale helps characterize the properties of graphene, and high-resolution transmission electron microscopy (HRTEM) has so far delivered outstanding performance for this purpose. In practice, however, a tool requiring no sample pre-treatment is in demand. Helium ion microscopy (HIM), first reported by Ward et al. in 2006, has been applied to monitoring graphene in device structures (Lemme et al., 2009). Motivated by recent HIM explorations, we examined the possibility of taking lattice images of suspended graphene by HIM. In a HIM measurement, the intensity of secondary electron emission is recorded as a profile of the scanned He+ beam. We mimicked this situation by performing electron-ion dynamics with first-principles simulations within time-dependent density functional theory. He+ ion collisions with a single graphene sheet at several impact points were simulated, and we found that the amount of secondary electron emission from graphene reflected the valence charge distribution of the graphene sheet. Therefore HIM using an atomically thin He beam should be able to provide the lattice image, and we propose that an experimental technique for generating ultra-thin He+ ion beams (Rezeq et al., 2006) should be combined with HIM. All calculations were performed using the Earth Simulator.

  17. Design and Construction of a Positron Emission Tomography (PET) Unit and Medical Applications with GEANT Detector Simulation Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagoz, Muge

    1998-01-01

    In order to investigate the possibility of constructing a sample PET coincidence unit in our HEP laboratory, a setup with two face-to-face PMTs and two 2×8 CsI(Tl) scintillator matrices has been constructed. In this setup, 1-D projections of a point-like 22Na positron source at different angles have been measured, and from these projections a 2-D image has been formed. Monte Carlo studies of this setup have been implemented using the detector simulation tool in the CERN program library, GEANT. Again with GEANT, a sample human body was created to study the effects of proton therapy. Utilization of the simulation as a pre-therapy tool is also investigated.
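
    The reconstruction step described here, forming a 2-D image from 1-D projections taken at several angles, can be sketched with an off-the-shelf filtered backprojection. The toy below is not the authors' code (it assumes scikit-image ≥ 0.19 for the filter_name argument); it recovers a point-like source from its sinogram.

    ```python
    import numpy as np
    from skimage.transform import radon, iradon

    # Form 1-D projections of a point-like source at several angles,
    # then recover a 2-D image by filtered backprojection.
    image = np.zeros((128, 128))
    image[64, 70] = 1.0                      # "point-like 22Na source"

    angles = np.linspace(0.0, 180.0, 36, endpoint=False)
    sinogram = radon(image, theta=angles)    # 1-D projections vs. angle
    recon = iradon(sinogram, theta=angles, filter_name="ramp")

    peak = np.unravel_index(np.argmax(recon), recon.shape)
    print(f"reconstructed source position: {peak}")  # close to (64, 70)
    ```

    In the actual setup the projections come from coincidence counts in the CsI(Tl) matrices rather than from a line-integral transform, but the inversion principle is the same.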

  18. Estimating occupancy and abundance using aerial images with imperfect detection

    USGS Publications Warehouse

    Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Bower, Michael R.

    2017-01-01

    Species distribution and abundance are critical population characteristics for efficient management, conservation, and ecological insight. Point process models are a powerful tool for modelling distribution and abundance, and can incorporate many data types, including count data, presence-absence data, and presence-only data. Aerial photographic images are a natural tool for collecting data to fit point process models, but aerial images do not always capture all animals that are present at a site. Methods for estimating detection probability for aerial surveys usually include collecting auxiliary data to estimate the proportion of time animals are available to be detected. We developed an approach for fitting point process models using an N-mixture model framework to estimate detection probability for aerial occupancy and abundance surveys. Our method uses multiple aerial images taken of animals at the same spatial location to provide temporal replication of sample sites. The intersection of the images provides multiple counts of individuals at different times. We examined this approach using both simulated and real data of sea otters (Enhydra lutris kenyoni) in Glacier Bay National Park, southeastern Alaska. Using our proposed methods, we estimated detection probability of sea otters to be 0.76, the same as visual aerial surveys that have been used in the past. Further, simulations demonstrated that our approach is a promising tool for estimating occupancy, abundance, and detection probability from aerial photographic surveys. Our methods can be readily extended to data collected using unmanned aerial vehicles, as technology and regulations permit. The generality of our methods for other aerial surveys depends on how well surveys can be designed to meet the assumptions of N-mixture models.
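
    The N-mixture idea can be made concrete with a small synthetic example: latent abundances N_i are Poisson, each of J overlapping images yields a binomial count of the N_i animals, and the likelihood marginalizes over the unobserved N_i. The sketch below is illustrative only; parameter values are arbitrary, with p set near the paper's 0.76.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import poisson, binom

    rng = np.random.default_rng(1)

    # Simulate an N-mixture design: N_i ~ Poisson(lam) animals per site,
    # J repeated counts y_ij ~ Binomial(N_i, p) from overlapping images.
    n_sites, n_reps, lam_true, p_true = 200, 3, 4.0, 0.76
    N = rng.poisson(lam_true, n_sites)
    y = rng.binomial(N[:, None], p_true, (n_sites, n_reps))

    def neg_log_lik(params, y, K=60):
        lam = np.exp(params[0])                      # keep lambda positive
        p = 1.0 / (1.0 + np.exp(-params[1]))         # keep p in (0, 1)
        Ns = np.arange(K + 1)                        # marginalize latent N
        prior = poisson.pmf(Ns, lam)
        ll = 0.0
        for yi in y:                                 # per-site likelihood
            like = prior * np.prod(binom.pmf(yi[:, None], Ns[None, :], p), axis=0)
            ll += np.log(like.sum() + 1e-300)
        return -ll

    fit = minimize(neg_log_lik, x0=[np.log(3.0), 0.0], args=(y,),
                   method="Nelder-Mead")
    lam_hat = np.exp(fit.x[0])
    p_hat = 1.0 / (1.0 + np.exp(-fit.x[1]))
    print(f"lambda = {lam_hat:.2f} (true 4.0), p = {p_hat:.2f} (true 0.76)")
    ```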

  19. CytoSpectre: a tool for spectral analysis of oriented structures on cellular and subcellular levels.

    PubMed

    Kartasalo, Kimmo; Pölönen, Risto-Pekka; Ojala, Marisa; Rasku, Jyrki; Lekkala, Jukka; Aalto-Setälä, Katriina; Kallio, Pasi

    2015-10-26

    Orientation and the degree of isotropy are important in many biological systems, such as the sarcomeres of cardiomyocytes and other fibrillar structures of the cytoskeleton. Image-based analysis of such structures is often limited to qualitative evaluation by human experts, hampering the throughput, repeatability and reliability of the analyses. Software tools are not readily available for this purpose, and the existing methods typically rely at least partly on manual operation. We developed CytoSpectre, an automated tool based on spectral analysis, allowing quantification of both orientation and size distributions of structures in microscopy images. CytoSpectre utilizes the Fourier transform to estimate the power spectrum of an image and, based on the spectrum, computes parameter values describing, among other things, the mean orientation, isotropy and size of target structures. The analysis can be further tuned to focus on targets of particular size at cellular or subcellular scales. The software can be operated via a graphical user interface without any programming expertise. We analyzed the performance of CytoSpectre by extensive simulations using artificial images, by benchmarking against FibrilTool, and by comparisons with manual measurements performed for real images by a panel of human experts. The software was found to be tolerant against noise and blurring and superior to FibrilTool when analyzing realistic targets with degraded image quality. The analysis of real images indicated generally good agreement between computational and manual results, while also revealing notable expert-to-expert variation. Moreover, the experiment showed that CytoSpectre can handle images obtained from different cell types using different microscopy techniques. Finally, we studied the effect of mechanical stretching on cardiomyocytes to demonstrate the software in an actual experiment and observed changes in cellular orientation in response to stretching. CytoSpectre, a versatile, easy-to-use software tool for spectral analysis of microscopy images, was developed. The tool is compatible with most 2D images and can be used to analyze targets at different scales. We expect the tool to be useful in diverse applications dealing with structures whose orientation and size distributions are of interest. While designed for the biological field, the software could also be useful in non-biological applications.
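
    The spectral principle behind such a tool can be illustrated in a few lines: elongated structures concentrate power-spectral energy along the direction perpendicular to their orientation, so the dominant orientation can be read off the 2-D FFT. The sketch below mirrors that general idea only, not CytoSpectre's actual algorithm; the windowing, radial mask, and doubled-angle circular mean are illustrative choices.

    ```python
    import numpy as np

    def dominant_orientation(img):
        """Estimate the dominant structure orientation (degrees, image space)
        from the 2-D power spectrum; spectral energy lies perpendicular to
        elongated structures."""
        win = np.outer(np.hanning(img.shape[0]), np.hanning(img.shape[1]))
        F = np.fft.fftshift(np.fft.fft2((img - img.mean()) * win))
        P = np.abs(F) ** 2
        cy, cx = np.array(P.shape) // 2
        ys, xs = np.indices(P.shape)
        angles = np.arctan2(ys - cy, xs - cx)       # angle of each frequency bin
        r = np.hypot(ys - cy, xs - cx)
        mask = (r > 2) & (r < min(cy, cx))          # drop DC and corners
        # Circular mean over doubled angles handles the 180-degree ambiguity.
        c = np.sum(P[mask] * np.exp(2j * angles[mask]))
        spectral_angle = 0.5 * np.angle(c)
        return np.degrees(spectral_angle + np.pi / 2) % 180.0

    # Synthetic test: stripes oriented at 30 degrees.
    y, x = np.indices((256, 256))
    beta = np.radians(30 + 90)                      # wavevector direction
    stripes = np.sin(2 * np.pi * (x * np.cos(beta) + y * np.sin(beta)) / 12.0)
    print(f"estimated orientation: {dominant_orientation(stripes):.1f} deg")
    ```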

  20. Democratization of Nanoscale Imaging and Sensing Tools Using Photonics

    DTIC Science & Technology

    2015-06-12

    [Only fragmentary text is available for this record: figure captions describing angle-dependent scattering patterns recorded on a cell-phone image sensor, with the one-dimensional radial scattering profile fitted by Mie theory; the measurements are reported to closely match the predictions of the authors' theory and simulations.]

  1. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov

    2014-12-15

    Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of the point response, pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. Users can download the output images and statistics as a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns, allowing the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers, while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.

  2. Development of an Image-based Multi-Scale Finite Element Approach to Predict Fatigue Damage in Asphalt Mixtures

    NASA Astrophysics Data System (ADS)

    Arshadi, Amir

    Image-based simulation of complex materials is a very important tool for understanding their mechanical behavior and an effective tool for the successful design of composite materials. In this thesis an image-based multi-scale finite element approach is developed to predict the mechanical properties of asphalt mixtures. In this approach the "up-scaling" and homogenization of each scale to the next is critically designed to improve accuracy. In addition to this multi-scale efficiency, the study introduces an approach for considering particle contacts at each of the scales in which mineral particles exist. One of the most important pavement distresses, seriously affecting pavement performance, is fatigue cracking. As this cracking generally takes place in the binder phase of the asphalt mixture, binder fatigue behavior is assumed to be one of the main factors influencing overall pavement fatigue performance. It is also known that aggregate gradation, mixture volumetric properties, and filler type and concentration can affect damage initiation and progression in asphalt mixtures. This study was conducted to develop a tool to characterize the damage properties of asphalt mixtures at all scales. In the present study the viscoelastic continuum damage model is implemented in the well-known finite element software ABAQUS via a user material subroutine (UMAT) in order to simulate the state of damage in the binder phase under repeated uniaxial sinusoidal loading. The inputs are based on experimentally derived measurements of the binder properties. For the mastic and mortar scales, artificial 2-dimensional images were generated and used to characterize the properties of those scales. Finally, 2D scanned images of asphalt mixtures are used to study asphalt mixture fatigue behavior under loading. In order to validate the proposed model, indirect tensile fatigue tests were conducted on asphalt mixture samples and the results compared with the simulation. The comparison shows that the model developed in this study is capable of predicting the effect of asphalt binder properties and aggregate micro-structure on the mechanical behavior of asphalt concrete under loading.

  3. The Precision Formation Flying Integrated Analysis Tool (PFFIAT)

    NASA Technical Reports Server (NTRS)

    Stoneking, Eric; Lyon, Richard G.; Sears, Edie; Lu, Victor

    2004-01-01

    Several space missions presently in the concept phase (e.g. Stellar Imager, Submillimeter Probe of Evolutionary Cosmic Structure, Terrestrial Planet Finder) plan to use multiple spacecraft flying in precise formation to synthesize unprecedentedly large aperture optical systems. These architectures present challenges to the attitude and position determination and control system; optical performance is directly coupled to spacecraft pointing, with typical control requirements being on the scale of milliarcseconds and nanometers. To investigate control strategies, rejection of environmental disturbances, and sensor and actuator requirements, a capability is needed to model both the dynamical and optical behavior of such a distributed telescope system. This paper describes work ongoing at NASA Goddard Space Flight Center toward the integration of a set of optical analysis tools (Optical System Characterization and Analysis Research software, or OSCAR) with the Formation Flying Test Bed (FFTB). The resulting system is called the Precision Formation Flying Integrated Analysis Tool (PFFIAT), and it provides the capability to simulate closed-loop control of optical systems composed of elements mounted on multiple spacecraft. The attitude and translation spacecraft dynamics are simulated in the FFTB, including effects of the space environment (e.g. solar radiation pressure, differential orbital motion). The resulting optical configuration is then processed by OSCAR to determine an optical image. From this image, wavefront sensing (e.g. phase retrieval) techniques are being developed to derive attitude and position errors. These error signals will be fed back to the spacecraft control systems, completing the control loop. A simple case study is presented to demonstrate the present capabilities of the tool.

  4. The Precision Formation Flying Integrated Analysis Tool (PFFIAT)

    NASA Technical Reports Server (NTRS)

    Stoneking, Eric; Lyon, Richard G.; Sears, Edie; Lu, Victor

    2004-01-01

    Several space missions presently in the concept phase (e.g. Stellar Imager, Submillimeter Probe of Evolutionary Cosmic Structure, Terrestrial Planet Finder) plan to use multiple spacecraft flying in precise formation to synthesize unprecedentedly large aperture optical systems. These architectures present challenges to the attitude and position determination and control system; optical performance is directly coupled to spacecraft pointing, with typical control requirements being on the scale of milliarcseconds and nanometers. To investigate control strategies, rejection of environmental disturbances, and sensor and actuator requirements, a capability is needed to model both the dynamical and optical behavior of such a distributed telescope system. This paper describes work ongoing at NASA Goddard Space Flight Center toward the integration of a set of optical analysis tools (Optical System Characterization and Analysis Research software, or OSCAR) with the Formation Flying Test Bed (FFTB). The resulting system is called the Precision Formation Flying Integrated Analysis Tool (PFFIAT), and it provides the capability to simulate closed-loop control of optical systems composed of elements mounted on multiple spacecraft. The attitude and translation spacecraft dynamics are simulated in the FFTB, including effects of the space environment (e.g. solar radiation pressure, differential orbital motion). The resulting optical configuration is then processed by OSCAR to determine an optical image. From this image, wavefront sensing (e.g. phase retrieval) techniques are being developed to derive attitude and position errors. These error signals will be fed back to the spacecraft control systems, completing the control loop. A simple case study is presented to demonstrate the present capabilities of the tool.

  5. Optimization and Simulation of SLM Process for High Density H13 Tool Steel Parts

    NASA Astrophysics Data System (ADS)

    Laakso, Petri; Riipinen, Tuomas; Laukkanen, Anssi; Andersson, Tom; Jokinen, Antero; Revuelta, Alejandro; Ruusuvuori, Kimmo

    This paper demonstrates the successful printing and optimization of processing parameters of high-strength H13 tool steel by Selective Laser Melting (SLM). A D-optimal Design of Experiments (DOE) approach is used for parameter optimization of laser power, scanning speed and hatch width. With 50 test samples (1×1×1 cm) we establish parameter windows for these three parameters in relation to part density. The fitted numerical model is found to be in good agreement with the density data obtained from the samples using image analysis. A thermomechanical finite element simulation model of the SLM process is constructed and validated by comparing the calculated densities retrieved from the model with the experimentally determined densities. With the simulation tool one can explore the effect of different parameters on density before printing any samples. Establishing a parameter window gives the user freedom in parameter selection, such as choosing the parameters that yield the fastest print speed.
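
    The parameter-window idea can be sketched as a quadratic response-surface fit followed by a grid scan, as below. The data are synthetic placeholders, not the paper's measurements, and the 99.5% density threshold and parameter ranges are invented for the demo.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    P = rng.uniform(150, 350, n)          # laser power [W]
    v = rng.uniform(400, 1200, n)         # scanning speed [mm/s]
    h = rng.uniform(0.08, 0.14, n)        # hatch width [mm]

    # Synthetic density response with a single optimum (placeholder data).
    density = (100 - 1e-4 * (P - 250)**2 - 1e-6 * (v - 800)**2
               - 500 * (h - 0.11)**2 + rng.normal(0, 0.05, n))

    def design_matrix(P, v, h):
        """Full quadratic model in the three process parameters."""
        return np.column_stack([np.ones_like(P), P, v, h, P*v, P*h, v*h,
                                P**2, v**2, h**2])

    beta, *_ = np.linalg.lstsq(design_matrix(P, v, h), density, rcond=None)

    # Scan a grid and keep combinations with predicted density >= 99.5%.
    Pg, vg, hg = [a.ravel() for a in np.meshgrid(
        np.linspace(150, 350, 21), np.linspace(400, 1200, 21),
        np.linspace(0.08, 0.14, 7), indexing="ij")]
    pred = design_matrix(Pg, vg, hg) @ beta
    ok = pred >= 99.5
    if ok.any():
        print(f"{ok.sum()} of {ok.size} grid points in the window; "
              f"fastest admissible speed: {vg[ok].max():.0f} mm/s")
    ```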

  6. Automated evaluation of AIMS images: an approach to minimize evaluation variability

    NASA Astrophysics Data System (ADS)

    Dürr, Arndt C.; Arndt, Martin; Fiebig, Jan; Weiss, Samuel

    2006-05-01

    Defect disposition and qualification with stepper-simulating AIMS tools on advanced masks of the 90 nm node and below is key to meeting customers' expectations for "defect-free" masks, i.e., masks containing only non-printing design variations. Recently available AIMS tools allow for a large degree of measurement automation, enhancing mask throughput and hence reducing cycle time: up to 50 images can be recorded per hour. However, this amount of data still has to be evaluated by hand, which is not only time-consuming but also error-prone, and it exhibits person-dependent variability that adds to the tool-intrinsic variability and decreases the reliability of the evaluation. In this paper we present the results of a MATLAB-based algorithm which automatically evaluates AIMS images. We investigate its capabilities regarding throughput, reliability, and agreement with manual evaluation for a large variety of dark and clear defects, and discuss the limitations of an automated AIMS evaluation algorithm.

  7. Simulation supported POD for RT test case-concept and modeling

    NASA Astrophysics Data System (ADS)

    Gollwitzer, C.; Bellon, C.; Deresch, A.; Ewert, U.; Jaenisch, G.-R.; Zscherpel, U.; Mistral, Q.

    2012-05-01

    Within the framework of the European project PICASSO, the radiographic simulator aRTist (analytical Radiographic Testing inspection simulation tool) developed by BAM has been extended for reliability assessment of film and digital radiography. NDT of safety-relevant components in the aerospace industry requires proof of the probability of detection (POD) of the inspection. Modeling tools can reduce the expense of such extended, time-consuming NDT trials, provided the simulation results fit the experiment. Our analytic simulation tool consists of three modules for the description of the radiation source, the interaction of radiation with test pieces and flaws, and the detection process, with special focus on film and digital industrial radiography. It features high processing speed with near-interactive frame rates and a high level of realism. A concept has been developed, as well as a software extension for reliability investigations, completed by a user interface for planning automatic simulations with varying parameters and defects. Furthermore, an automatic image analysis procedure is included to evaluate defect visibility. Radiographic models derived from 3D CAD of aero-engine components and quality test samples are compared as a precondition for real trials. This enables the evaluation and optimization of film replacement by modern digital equipment for economical NDT with a defined POD.
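
    A standard ingredient of such POD studies is fitting a hit/miss POD curve to detect/miss outcomes versus defect size, for example POD(a) = logistic(b0 + b1 ln a), and reporting a90, the size detected with 90% probability. The sketch below uses synthetic outcomes, not PICASSO data, and is not tied to aRTist's implementation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    a = rng.uniform(0.1, 2.0, 400)                        # defect size [mm]
    p_true = 1 / (1 + np.exp(-(3.0 * np.log(a) + 2.0)))   # latent POD curve
    hits = rng.random(400) < p_true                       # detect (1) / miss (0)

    def nll(beta):
        """Bernoulli negative log-likelihood of the logistic POD model."""
        eta = beta[0] + beta[1] * np.log(a)
        p = np.clip(1 / (1 + np.exp(-eta)), 1e-12, 1 - 1e-12)
        return -np.sum(hits * np.log(p) + (~hits) * np.log(1 - p))

    fit = minimize(nll, x0=[0.0, 1.0], method="BFGS")
    b0, b1 = fit.x
    a90 = np.exp((np.log(0.9 / 0.1) - b0) / b1)           # POD(a90) = 0.9
    a90_true = np.exp((np.log(9) - 2.0) / 3.0)
    print(f"a90 = {a90:.2f} mm (true {a90_true:.2f} mm)")
    ```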

  8. Mueller matrix polarimetry for characterizing microstructural variation of nude mouse skin during tissue optical clearing.

    PubMed

    Chen, Dongsheng; Zeng, Nan; Xie, Qiaolin; He, Honghui; Tuchin, Valery V; Ma, Hui

    2017-08-01

    We investigate the polarization features corresponding to changes in the microstructure of nude mouse skin during immersion in a glycerol solution. By comparing Mueller matrix imaging experiments and Monte Carlo simulations, we examine in detail how the Mueller matrix elements vary with immersion time. The results indicate that the polarization features represented by the Mueller matrix elements m22, m33, and m44, and the absolute values of m34 and m43, are sensitive to the immersion time. To gain deeper insight into how the microstructure of the skin varies during tissue optical clearing (TOC), we set up a sphere-cylinder birefringence model (SCBM) of the skin and carry out simulations corresponding to different TOC mechanisms. The good agreement between the experimental and simulated results confirms that Mueller matrix imaging combined with Monte Carlo simulation is potentially a powerful tool for revealing microscopic features of biological tissues.

  9. Three-Dimensional Visualization with Large Data Sets: A Simulation of Spreading Cortical Depression in Human Brain

    PubMed Central

    Ertürk, Korhan Levent; Şengül, Gökhan

    2012-01-01

    We developed 3D simulation software for human organs and tissues, together with a database to store the related data, a data management system, and a metadata system. This approach provides two benefits: first, the system does not need to keep the patient's or subject's medical images on the system, reducing memory usage; second, it provides 3D simulation and modification options that give clinicians the necessary tools for visualization and editing operations. The developed system was tested in a case study in which a 3D human brain model was created and simulated from 2D MRI images of a human brain, and we extended the 3D model to include the spreading cortical depression (SCD) wave front, an electrical phenomenon that is believed to cause migraine. PMID:23258956

  10. Monte Carlo simulation of inverse geometry x-ray fluoroscopy using a modified MC-GPU framework

    PubMed Central

    Dunkerley, David A. P.; Tomkowiak, Michael T.; Slagowski, Jordan M.; McCabe, Bradley P.; Funk, Tobias; Speidel, Michael A.

    2015-01-01

    Scanning-Beam Digital X-ray (SBDX) is a technology for low-dose fluoroscopy that employs inverse geometry x-ray beam scanning. To assist with rapid modeling of inverse geometry x-ray systems, we have developed a Monte Carlo (MC) simulation tool based on the MC-GPU framework. MC-GPU version 1.3 was modified to implement a 2D array of focal spot positions on a plane, with individually adjustable x-ray outputs, each producing a narrow x-ray beam directed toward a stationary photon-counting detector array. Geometric accuracy and blurring behavior in tomosynthesis reconstructions were evaluated from simulated images of a 3D arrangement of spheres. The artifact spread function from simulation agreed with experiment to within 1.6% (rRMSD). Detected x-ray scatter fraction was simulated for two SBDX detector geometries and compared to experiments. For the current SBDX prototype (10.6 cm wide by 5.3 cm tall detector), x-ray scatter fraction measured 2.8–6.4% (18.6–31.5 cm acrylic, 100 kV), versus 2.1–4.5% in MC simulation. Experimental trends in scatter versus detector size and phantom thickness were observed in simulation. For dose evaluation, an anthropomorphic phantom was imaged using regular and regional adaptive exposure (RAE) scanning. The reduction in kerma-area-product resulting from RAE scanning was 45% in radiochromic film measurements, versus 46% in simulation. The integral kerma calculated from TLD measurement points within the phantom was 57% lower when using RAE, versus 61% lower in simulation. This MC tool may be used to estimate tomographic blur, detected scatter, and dose distributions when developing inverse geometry x-ray systems. PMID:26113765

  11. Monte Carlo simulation of inverse geometry x-ray fluoroscopy using a modified MC-GPU framework.

    PubMed

    Dunkerley, David A P; Tomkowiak, Michael T; Slagowski, Jordan M; McCabe, Bradley P; Funk, Tobias; Speidel, Michael A

    2015-02-21

    Scanning-Beam Digital X-ray (SBDX) is a technology for low-dose fluoroscopy that employs inverse geometry x-ray beam scanning. To assist with rapid modeling of inverse geometry x-ray systems, we have developed a Monte Carlo (MC) simulation tool based on the MC-GPU framework. MC-GPU version 1.3 was modified to implement a 2D array of focal spot positions on a plane, with individually adjustable x-ray outputs, each producing a narrow x-ray beam directed toward a stationary photon-counting detector array. Geometric accuracy and blurring behavior in tomosynthesis reconstructions were evaluated from simulated images of a 3D arrangement of spheres. The artifact spread function from simulation agreed with experiment to within 1.6% (rRMSD). Detected x-ray scatter fraction was simulated for two SBDX detector geometries and compared to experiments. For the current SBDX prototype (10.6 cm wide by 5.3 cm tall detector), x-ray scatter fraction measured 2.8-6.4% (18.6-31.5 cm acrylic, 100 kV), versus 2.1-4.5% in MC simulation. Experimental trends in scatter versus detector size and phantom thickness were observed in simulation. For dose evaluation, an anthropomorphic phantom was imaged using regular and regional adaptive exposure (RAE) scanning. The reduction in kerma-area-product resulting from RAE scanning was 45% in radiochromic film measurements, versus 46% in simulation. The integral kerma calculated from TLD measurement points within the phantom was 57% lower when using RAE, versus 61% lower in simulation. This MC tool may be used to estimate tomographic blur, detected scatter, and dose distributions when developing inverse geometry x-ray systems.

  12. Perspectives of mid-infrared optical coherence tomography for inspection and micrometrology of industrial ceramics

    PubMed Central

    Su, Rong; Kirillin, Mikhail; Chang, Ernest W.; Sergeeva, Ekaterina; Yun, Seok H.; Mattsson, Lars

    2014-01-01

    Optical coherence tomography (OCT) is a promising tool for detecting micro-channels, metal prints, defects, and delaminations embedded in alumina and zirconia ceramic layers hundreds of micrometers beneath the surface. The effects of surface roughness and of scattering of the probing radiation within the sample on OCT inspection are analyzed from experimental and simulated OCT images of ceramic samples with varying surface roughness and operating wavelengths. From Monte Carlo simulations of the OCT images in the mid-IR, the optimal operating wavelength for achieving a sufficient probing depth of about 1 mm is found to be 4 µm for the alumina samples and 2 µm for the zirconia samples. The effects of rough surfaces and dispersion on the detection of the embedded boundaries are discussed. Two types of image artefacts are found in the OCT images, caused by multiple reflections between neighboring boundaries and by inhomogeneity of the refractive index. PMID:24977838

  13. SU-D-213-03: Towards An Optimized 3D Scintillation Dosimetry Tool for Quality Assurance of Dynamic Radiotherapy Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rilling, M; Centre de Recherche sur le Cancer, Hôtel-Dieu de Québec, Quebec City, QC; Département de radio-oncologie, CHU de Québec, Quebec City, QC

    2015-06-15

    Purpose: The purpose of this work is to simulate a multi-focus plenoptic camera used as the measuring device in a real-time three-dimensional scintillation dosimeter. Simulating and optimizing this realistic optical system will bridge the technological gap between concept validation and a clinically viable tool that can provide highly efficient, accurate and precise measurements for dynamic radiotherapy techniques. Methods: The experimental prototype, previously developed for proof-of-concept purposes, uses an off-the-shelf multi-focus plenoptic camera. With an array of interleaved microlenses of different focal lengths, this camera records spatial and angular information of light emitted by a plastic scintillator volume. The three distinct microlens focal lengths were determined experimentally, for use as baseline parameters, by measuring image-to-object magnification at different distances in object space. A simulated plenoptic system was implemented using the non-sequential ray tracing software Zemax: this tool allows complete simulation of multiple optical paths by modeling interactions at interfaces such as scatter, diffraction, reflection and refraction. The active sensor was modeled on the camera manufacturer's specifications as a 2048×2048, 5 µm-pixel-pitch sensor. Planar light sources, simulating the plastic scintillator volume, were employed for the ray tracing simulations. Results: The microlens focal lengths were determined to be 384, 327 and 290 µm. A realistic multi-focus plenoptic system, with independently defined and optimizable specifications, was fully simulated. An f/2.9, 54 mm-focal-length double Gauss objective was modeled as the system's main lens. A three-focal-length hexagonal microlens array of 250 µm thickness was designed, acting as an image-relay system between the main lens and the sensor. Conclusion: Simulation of a fully modeled multi-focus plenoptic camera enables the decoupled optimization of the main lens and microlens specifications. This work leads the way to improving the 3D dosimeter's achievable resolution, efficiency and build, providing a quality assurance tool that fully meets clinical needs. M.R. is financially supported by a Master's Canada Graduate Scholarship from the NSERC. This research is also supported by the NSERC Industrial Research Chair in Optical Design.

  14. In-Situ Visualization Experiments with ParaView Cinema in RAGE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kares, Robert John

    2015-10-15

    A previous paper described some numerical experiments performed using the ParaView/Catalyst in-situ visualization infrastructure deployed in the Los Alamos RAGE radiation-hydrodynamics code to produce images from a running large-scale 3D ICF simulation. One challenge of the in-situ approach apparent in these experiments was the difficulty of choosing parameters, like isosurface values, for the visualizations produced from the running simulation without the benefit of prior knowledge of the simulation results, and the resulting cost of recomputing in-situ generated images when parameters are chosen suboptimally. A proposed method of addressing this difficulty is to simply render multiple images at runtime with a range of possible parameter values, producing a large database of images, and to provide the user with a tool for managing the resulting database of imagery. Recently, ParaView/Catalyst has been extended to include such a capability via the so-called Cinema framework. Here I describe some initial experiments with the first delivery of Cinema and make some recommendations for future extensions of Cinema's capabilities.

  15. Simulation tools for scattering corrections in spectrally resolved x-ray computed tomography using McXtrace

    NASA Astrophysics Data System (ADS)

    Busi, Matteo; Olsen, Ulrik L.; Knudsen, Erik B.; Frisvad, Jeppe R.; Kehres, Jan; Dreier, Erik S.; Khalil, Mohamad; Haldrup, Kristoffer

    2018-03-01

    Spectral computed tomography is an emerging imaging method based on recently developed energy-discriminating photon-counting detectors (PCDs). The technique enables measurements in isolated high-energy ranges, in which the dominant interaction between the x-rays and the sample is incoherent scattering. The scattered radiation causes a loss of contrast in the results, and its correction has proven to be a complex problem due to its dependence on energy, material composition, and geometry. Monte Carlo simulations can use a physical model to estimate the scattering contribution to the signal, at the cost of high computational time. We present a fast Monte Carlo simulation tool, based on McXtrace, to predict the energy-resolved radiation scattered and absorbed by objects of complex shape. We validate the tool through measurements using a single CdTe PCD (Multix ME-100) and use it for scattering correction in a simulation of a spectral CT. We found the correction to account for up to 7% relative amplification of the reconstructed linear attenuation. The tool is useful for x-ray CT to obtain more accurate material discrimination, especially in the high-energy range, where incoherent scattering interactions become dominant (>50 keV).
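
    In projection space, the correction the abstract describes amounts to subtracting the simulated scatter estimate from each energy bin of the PCD data before the -log transform; removing the scatter background raises (amplifies) the reconstructed attenuation, consistent with the up-to-7% figure reported. The sketch below uses synthetic arrays as stand-ins for the McXtrace output.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_bins, n_pix = 8, 256

    # Synthetic primary signal per energy bin (Beer-Lambert attenuated counts).
    flat = 1e4                                         # open-beam counts (I0)
    line_int = rng.uniform(0.5, 2.5, (n_bins, n_pix))  # true line integrals
    primary = flat * np.exp(-line_int)
    scatter = 0.08 * primary.mean(axis=1, keepdims=True) * np.ones((1, n_pix))

    measured = rng.poisson(primary + scatter).astype(float)

    raw_atten = -np.log(measured / flat)               # scatter-contaminated
    corrected = -np.log(np.clip(measured - scatter, 1.0, None) / flat)

    rel_amp = ((corrected - raw_atten) / np.maximum(raw_atten, 1e-9)).mean(axis=1)
    print("mean relative amplification per energy bin after correction:")
    print(np.round(100 * rel_amp, 2), "%")
    ```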

  16. Utilizing Three-Dimensional Printing Technology to Assess the Feasibility of High-Fidelity Synthetic Ventricular Septal Defect Models for Simulation in Medical Education.

    PubMed

    Costello, John P; Olivieri, Laura J; Krieger, Axel; Thabit, Omar; Marshall, M Blair; Yoo, Shi-Joon; Kim, Peter C; Jonas, Richard A; Nath, Dilip S

    2014-07-01

    The current educational approach for teaching congenital heart disease (CHD) anatomy to students involves instructional tools and techniques that have significant limitations. This study sought to assess the feasibility of utilizing present-day three-dimensional (3D) printing technology to create high-fidelity synthetic heart models with ventricular septal defect (VSD) lesions and applying these models to a novel, simulation-based educational curriculum for premedical and medical students. Archived, de-identified magnetic resonance images of five common VSD subtypes were obtained. These cardiac images were then segmented and built into 3D computer-aided design models using Mimics Innovation Suite software. An Objet500 Connex 3D printer was subsequently utilized to print a high-fidelity heart model for each VSD subtype. Next, a simulation-based educational curriculum using these heart models was developed and implemented in the instruction of 29 premedical and medical students. Assessment of this curriculum was undertaken with Likert-type questionnaires. High-fidelity VSD models were successfully created utilizing magnetic resonance imaging data and 3D printing. Following instruction with these high-fidelity models, all students reported significant improvement in knowledge acquisition (P < .0001), knowledge reporting (P < .0001), and structural conceptualization (P < .0001) of VSDs. It is feasible to use present-day 3D printing technology to create high-fidelity heart models with complex intracardiac defects. Furthermore, this tool forms the foundation for an innovative, simulation-based educational approach to teach students about CHD and creates a novel opportunity to stimulate their interest in this field. © The Author(s) 2014.

  17. Application for internal dosimetry using biokinetic distribution of photons based on nuclear medicine images.

    PubMed

    Leal Neto, Viriato; Vieira, José Wilson; Lima, Fernando Roberto de Andrade

    2014-01-01

    This article presents a way to estimate dose in patients submitted to radiotherapy, based on the analysis of regions of interest in nuclear medicine images. A software package called DoRadIo (Dosimetria das Radiações Ionizantes [Ionizing Radiation Dosimetry]) was developed to receive information about source organs and target organs and to generate graphical and numerical results. The nuclear medicine images utilized in the present study were obtained from catalogs provided by medical physicists. The simulations were performed with computational exposure models consisting of voxel phantoms coupled with the Monte Carlo EGSnrc code. The software was developed with Microsoft Visual Studio 2010 Service Pack, using the Windows Presentation Foundation project template for the C# programming language. With these tools, the authors produced the file for optimization of the Monte Carlo simulations using EGSnrc; the organization and compaction of dosimetry results for all radioactive sources; the selection of regions of interest; the evaluation of grayscale intensity in the regions of interest; the file of weighted sources; and, finally, all the charts and numerical results. The user interface may be adapted for use in clinical nuclear medicine as a computer-aided tool to estimate the administered activity.

  18. Image processing, geometric modeling and data management for development of a virtual bone surgery system.

    PubMed

    Niu, Qiang; Chi, Xiaoyi; Leu, Ming C; Ochoa, Jorge

    2008-01-01

    This paper describes image processing, geometric modeling and data management techniques for the development of a virtual bone surgery system. Image segmentation is used to divide CT scan data into different segments representing various regions of the bone. A region-growing algorithm is used to extract cortical bone and trabecular bone structures systematically and efficiently. Volume modeling is then used to represent the bone geometry based on the CT scan data. Material removal simulation is achieved by continuously performing Boolean subtraction of the surgical tool model from the bone model. A quadtree-based adaptive subdivision technique is developed to handle the large set of data in order to achieve the real-time simulation and visualization required for virtual bone surgery. A Marching Cubes algorithm is used to generate polygonal faces from the volumetric data. Rendering of the generated polygons is performed with the publicly available VTK (Visualization Toolkit) software. Implementation of the developed techniques consists of a virtual bone-drilling software program, which allows the user to manipulate a virtual drill with a PHANToM haptic device to make holes in a bone model derived from real CT scan data.
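
    The region-growing step can be sketched as a breadth-first flood fill constrained by an intensity window, as below; the thresholds, units, and toy phantom are illustrative, not the authors' values.

    ```python
    import numpy as np
    from collections import deque

    def region_grow(volume, seed, lo, hi):
        """Breadth-first region growing: collect 6-connected voxels whose
        intensity lies in [lo, hi], starting from a seed inside the bone."""
        grown = np.zeros(volume.shape, dtype=bool)
        queue = deque([seed])
        grown[seed] = True
        offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                        and not grown[n] and lo <= volume[n] <= hi:
                    grown[n] = True
                    queue.append(n)
        return grown

    # Toy CT-like volume: a bright "cortical shell" around a dimmer core.
    vol = np.full((40, 40, 40), -1000.0)              # air, in HU-like units
    zz, yy, xx = np.indices(vol.shape)
    r = np.sqrt((zz - 20)**2 + (yy - 20)**2 + (xx - 20)**2)
    vol[r < 12] = 300.0                               # trabecular-like interior
    vol[(r >= 12) & (r < 15)] = 1200.0                # cortical-like shell

    cortical = region_grow(vol, seed=(20, 20, 34), lo=800, hi=2000)
    print(f"cortical voxels: {cortical.sum()}")
    ```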

  19. Evaluation of WRF Model Against Satellite and Field Measurements During ARM March 2000 IOP

    NASA Astrophysics Data System (ADS)

    Wu, J.; Zhang, M.

    2003-12-01

    The mesoscale WRF model is employed to simulate the organization of clouds related to the cyclogenesis that occurred during March 1-4, 2000 over the ARM SGP CART site. Qualitative comparisons of simulated clouds with GOES-8 satellite images show that the WRF model can capture the main features of the clouds associated with the cyclogenesis. The simulated precipitation patterns also match the radar reflectivity images well. Further evaluation of the simulated features on the GCM grid scale is conducted against ARM field measurements. The evaluation shows that the evolution of the simulated state fields, such as temperature and moisture, the simulated wind fields, and the derived large-scale temperature and moisture tendencies closely follow the observed patterns. These results encourage us to use the mesoscale WRF model as a tool to verify the performance of GCMs in simulating cloud feedback processes related to frontal clouds, so that we can test and validate current cloud parameterizations in climate models and make possible improvements to their components.

  20. SESAME: a software tool for the numerical dosimetric reconstruction of radiological accidents involving external sources and its application to the accident in Chile in December 2005.

    PubMed

    Huet, C; Lemosquet, A; Clairand, I; Rioual, J B; Franck, D; de Carlan, L; Aubineau-Lanièce, I; Bottollier-Depois, J F

    2009-01-01

    Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. This dose distribution can be assessed by physical dosimetric reconstruction methods, which can be achieved using experimental or numerical techniques. This article presents the laboratory-developed SESAME (Simulation of External Source Accident with MEdical images) tool, specific to the dosimetric reconstruction of radiological accidents through numerical simulations that combine voxel geometry with the MCNP(X) Monte Carlo radiation-matter interaction code. The experimental validation of the tool using a photon field and its application to a radiological accident in Chile in December 2005 are also described.

  1. Validation of a novel technique for creating simulated radiographs using computed tomography datasets.

    PubMed

    Mendoza, Patricia; d'Anjou, Marc-André; Carmel, Eric N; Fournier, Eric; Mai, Wilfried; Alexander, Kate; Winter, Matthew D; Zwingenberger, Allison L; Thrall, Donald E; Theoret, Christine

    2014-01-01

    Understanding radiographic anatomy and the effects of varying patient and radiographic tube positioning on image quality can be a challenge for students. The purposes of this study were to develop and validate a novel technique for creating simulated radiographs using computed tomography (CT) datasets. A DICOM viewer (ORS Visual) plug-in was developed with the ability to move and deform cuboidal volumetric CT datasets, and to produce images simulating the effects of tube-patient-detector distance and angulation. Computed tomographic datasets were acquired from two dogs, one cat, and one horse. Simulated radiographs of different body parts (n = 9) were produced using different angles to mimic conventional projections, before actual digital radiographs were obtained using the same projections. These studies (n = 18) were then submitted to 10 board-certified radiologists who were asked to score visualization of anatomical landmarks, depiction of patient positioning, realism of distortion/magnification, and image quality. No significant differences between simulated and actual radiographs were found for anatomic structure visualization and patient positioning in the majority of body parts. For the assessment of radiographic realism, no significant differences were found between simulated and digital radiographs for canine pelvis, equine tarsus, and feline abdomen body parts. Overall, image quality and contrast resolution of simulated radiographs were considered satisfactory. Findings from the current study indicated that radiographs simulated using this new technique are comparable to actual digital radiographs. Further studies are needed to apply this technique in developing interactive tools for teaching radiographic anatomy and the effects of varying patient and tube positioning. © 2013 American College of Veterinary Radiology.
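
    Though the study's plug-in works on full DICOM volumes inside ORS Visual, the core of any such simulated radiograph is a ray-summation (digitally reconstructed radiograph) step: rotate the attenuation volume to mimic tube angulation, integrate along rays, and exponentiate. A minimal sketch under those assumptions, with a toy phantom in place of real CT data:

    ```python
    import numpy as np
    from scipy.ndimage import rotate

    def simulated_radiograph(mu_volume, tube_angle_deg=0.0, axis_pair=(0, 1)):
        """Project a CT-derived attenuation volume into a radiograph.

        Rotating the volume before the parallel projection mimics tube
        angulation; exp(-sum of mu along each ray) gives the transmitted
        intensity at each detector pixel.
        """
        rotated = rotate(mu_volume, tube_angle_deg, axes=axis_pair,
                         reshape=False, order=1, mode="constant", cval=0.0)
        line_integrals = rotated.sum(axis=0)          # parallel rays along axis 0
        return np.exp(-line_integrals)

    # Toy "patient": a water-like cylinder with a dense rod inside.
    vol = np.zeros((64, 64, 64))
    zz, yy, xx = np.indices(vol.shape)
    vol[(yy - 32)**2 + (xx - 32)**2 < 24**2] = 0.02   # soft tissue [1/voxel]
    vol[(yy - 32)**2 + (xx - 20)**2 < 4**2] = 0.10    # bone-like rod

    drr0 = simulated_radiograph(vol, 0.0)
    drr15 = simulated_radiograph(vol, 15.0)           # 15-degree tube angulation
    print(drr0.shape, float(drr0.min()), float(drr15.min()))
    ```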

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, H; Manning, M; Sintay, B

    Purpose: Tumor motion in lung SBRT is typically managed by creating an internal target volume (ITV) based on 4D-CT information. Another option, which may reduce lung dose and imaging artifact, is to use a breath hold (BH) during simulation and delivery. Here we evaluate the reproducibility of tumor position at repeated BH using a newly released spirometry system. Methods: Three patients underwent multiple BH CTs at simulation. All patients underwent a BH cone beam CT (CBCT) prior to each treatment. All image sets were registered to each patient's first simulation CT based on local bony anatomy. The gross tumor volume (GTV), and the diaphragm or the apex of the lung, were contoured on the first image set and expanded in 1 mm increments until the GTVs and diaphragms on all image sets were included inside an expanded structure. The GTV and diaphragm margins necessary to encompass the structures were recorded. Results: The first patient underwent two BH CTs and fluoroscopy at simulation; the remaining patients underwent three BH CTs at simulation. In all cases the GTVs remained within 1 mm expansions and the diaphragms remained within 2 mm expansions on repeat scans. Each patient underwent three daily BH CBCTs. In all cases the GTVs remained within 2 mm expansions, and the diaphragms (or lung apex in one case) remained within 2 mm expansions at daily BH imaging. Conclusions: These case studies demonstrate spirometry as an effective tool for limiting tumor motion (and imaging artifact) and facilitating reproducible tumor positioning over multiple set-ups and BHs. This work was partially supported by Qfix.

  3. Go With the Flow

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Under SBIR (Small Business Innovation Research) contracts with Lewis Research Center, Nektonics, Inc., developed coating process simulation tools known as Nekton. This powerful simulation software is used specifically for the modeling and analysis of a wide range of coating flows, including thin-film coating analysis, polymer processing, and glass melt flows. Polaroid, Xerox, 3M, Dow Corning, Mead Paper, BASF, Mitsubishi, Chugai, and DuPont Imaging Systems are only a few of the companies that presently use Nekton.

  4. VERAView

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Ronald W.; Collins, Benjamin S.; Godfrey, Andrew T.

    2016-12-09

    In order to support engineering analysis of Virtual Environment for Reactor Analysis (VERA) model results, the Consortium for Advanced Simulation of Light Water Reactors (CASL) needs a tool that provides visualizations of HDF5 files that adhere to the VERAOUT specification. VERAView provides an interactive graphical interface for the visualization and engineering analyses of output data from VERA. The Python-based software provides instantaneous 2D and 3D images, 1D plots, and alphanumeric data from VERA multi-physics simulations.
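
    A minimal sketch of the kind of access VERAView provides, reading a VERAOUT-style HDF5 file with h5py and imaging one assembly's pin powers, is shown below. The file name, group/dataset names, and axis ordering are assumptions about the VERAOUT layout and should be checked against the specification.

    ```python
    import h5py
    import matplotlib.pyplot as plt

    # Assumed layout: a state group holding a 4-D pin-power dataset.
    with h5py.File("vera_output.h5", "r") as f:
        pin_powers = f["STATE_0001/pin_powers"][...]  # e.g. (npin, npin, nax, nass)
        print("dataset shape:", pin_powers.shape)
        # Axially integrate assembly 0 to get the radial pin-power map.
        radial = pin_powers[:, :, :, 0].sum(axis=2)

    plt.imshow(radial, origin="lower", cmap="viridis")
    plt.colorbar(label="relative pin power (axially integrated)")
    plt.title("Assembly 1 pin powers")
    plt.savefig("pin_powers.png", dpi=150)
    ```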

  5. Three-dimensional ultrasound and image-directed surgery: implications for operating room personnel.

    PubMed

    Macedonia, C

    1997-04-01

    The proliferation of new imaging technologies is having a profound impact on all surgical specialties. New means of surgical visualization are allowing more surgeries to be performed less invasively. Three-dimensional ultrasound is a technology that has potential as a diagnostic tool, as a presurgical planning simulator, and as an adjunct to image-directed surgery. This article describes how three-dimensional ultrasound is being used by the United States Department of Defense and how it may change the role of the perioperative nurse in the near future.

  6. Countermeasure effectiveness against an intelligent imaging infrared anti-ship missile

    NASA Astrophysics Data System (ADS)

    Gray, Greer J.; Aouf, Nabil; Richardson, Mark; Butters, Brian; Walmsley, Roy

    2013-02-01

    Ship self-defense against heat-seeking anti-ship missiles is of great concern to modern naval forces. One way of protecting ships against these threats is to use infrared (IR) offboard countermeasures. These decoys need precise placement to maximize their effectiveness, and simulation is an invaluable tool in determining optimum deployment strategies. To perform useful simulations, high-fidelity models of missiles are required. We describe the development of an imaging IR anti-ship missile model for use in countermeasure effectiveness simulations. The missile model's tracking algorithm is based on a target recognition system that uses a neural network to discriminate between ships and decoys. The neural network is trained on shape- and intensity-based features extracted from simulated imagery. The missile model is then used within ship-decoy-missile engagement simulations to determine how susceptible it is to the well-known walk-off seduction countermeasure technique. Finally, ship survivability is improved by adjusting the decoy model to increase its effectiveness against the tracker.

  7. What can we learn from in-soil imaging of a live plant: X-ray Computed Tomography and 3D numerical simulation of root-soil system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaofan; Varga, Tamas; Liu, Chongxuan

    Plant roots play a critical role in plant-soil-microbe interactions that occur in the rhizosphere. X-ray Computed Tomography (XCT) has been proven to be an effective tool for non-invasive root imaging and analysis. A combination of XCT, open-source software, and in-house developed code was used to non-invasively image a prairie dropseed (Sporobolus heterolepis) specimen, segment the root data to obtain a 3D image of the root structure, and extract quantitative information from the 3D data, respectively. Based on the explicitly resolved root structure, pore-scale computational fluid dynamics (CFD) simulations were applied to numerically investigate the root-soil-groundwater system. The plant root conductivity, soil hydraulic conductivity and transpiration rate were shown to control the groundwater distribution. Furthermore, the coupled imaging-modeling approach provides a realistic platform for investigating rhizosphere flow processes and could supply useful information to upscaled models.

  8. What can we learn from in-soil imaging of a live plant: X-ray Computed Tomography and 3D numerical simulation of root-soil system

    DOE PAGES

    Yang, Xiaofan; Varga, Tamas; Liu, Chongxuan; ...

    2017-05-04

    Plant roots play a critical role in plant-soil-microbe interactions that occur in the rhizosphere. X-ray Computed Tomography (XCT) has been proven to be an effective tool for non-invasive root imaging and analysis. A combination of XCT, open-source software, and in-house developed code was used to non-invasively image a prairie dropseed (Sporobolus heterolepis) specimen, segment the root data to obtain a 3D image of the root structure, and extract quantitative information from the 3D data, respectively. Based on the explicitly resolved root structure, pore-scale computational fluid dynamics (CFD) simulations were applied to numerically investigate the root-soil-groundwater system. The plant root conductivity, soil hydraulic conductivity and transpiration rate were shown to control the groundwater distribution. Furthermore, the coupled imaging-modeling approach provides a realistic platform for investigating rhizosphere flow processes and could supply useful information to upscaled models.

  9. Web-based system for surgical planning and simulation

    NASA Astrophysics Data System (ADS)

    Eldeib, Ayman M.; Ahmed, Mohamed N.; Farag, Aly A.; Sites, C. B.

    1998-10-01

    The growing body of scientific knowledge and rapid progress in medical imaging techniques have led to an increasing demand for better and more efficient methods of remote access to high-performance computing facilities. This paper introduces a web-based telemedicine project that provides interactive tools for surgical simulation and planning. The presented approach makes use of a client-server architecture based on new internet technology, where clients use an ordinary web browser to view, send, receive and manipulate patients' medical records, while the server uses the supercomputer facility to provide online semi-automatic segmentation, 3D visualization, surgical simulation/planning, and neuroendoscopic procedure navigation. The supercomputer (SGI ONYX 1000) is located at the Computer Vision and Image Processing Lab, University of Louisville, Kentucky. The system is under development in cooperation with the Department of Neurological Surgery, Alliant Health Systems, Louisville, Kentucky. The server is connected via a network to the Picture Archiving and Communication System at Alliant Health Systems through a DICOM-standard interface that enables authorized clients to access patients' images from different medical modalities.

  10. Inter-algorithm lesion volumetry comparison of real and 3D simulated lung lesions in CT

    NASA Astrophysics Data System (ADS)

    Robins, Marthony; Solomon, Justin; Hoye, Jocelyn; Smith, Taylor; Ebner, Lukas; Samei, Ehsan

    2017-03-01

    The purpose of this study was to establish volumetric exchangeability between real and computational lung lesions in CT. We compared the overall relative volume estimation performance of segmentation tools when used to measure real lesions in actual patient CT images and computational lesions virtually inserted into the same patient images (i.e., hybrid datasets). Pathologically confirmed malignancies from 30 thoracic patient cases from the Reference Image Database to Evaluate Therapy Response (RIDER) were modeled and used as the basis for the comparison. Lesions included isolated nodules as well as those attached to the pleura or other lung structures. Patient images were acquired using a 16-detector-row or 64-detector-row CT scanner (LightSpeed 16 or VCT; GE Healthcare). Scans were acquired using standard chest protocols during a single breath-hold. Virtual 3D lesion models based on real lesions were developed in the Duke Lesion Tool (Duke University) and inserted using a validated image-domain insertion program. Nodule volumes were estimated using multiple commercial segmentation tools (iNtuition, TeraRecon, Inc.; Syngo.via, Siemens Healthcare; and IntelliSpace, Philips Healthcare). Consensus-based volume comparison showed consistent trends in volume measurement between real and virtual lesions across all software. The average percent bias (± standard error) was -9.2 ± 3.2% for real lesions versus -6.7 ± 1.2% for virtual lesions with tool A, 3.9 ± 2.5% and 5.0 ± 0.9% with tool B, and 5.3 ± 2.3% and 1.8 ± 0.8% with tool C, respectively. Virtual lesion volumes were statistically similar to those of real lesions (<4% difference), with p > .05 in most cases. Results suggest that hybrid datasets had inter-algorithm variability similar to that of real datasets.

  11. A Practical Cone-beam CT Scatter Correction Method with Optimized Monte Carlo Simulations for Image-Guided Radiation Therapy

    PubMed Central

    Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun

    2015-01-01

    Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 HU to 3 HU and from 78 HU to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved computation time of less than 30 sec including the time for both the scatter estimation and CBCT reconstruction steps. The efficacy of our method and its high computational efficiency make our method attractive for clinical use. PMID:25860299
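
    One of the acceleration strategies named here, interpolation along the angular direction, can be sketched as follows: run the MC scatter estimate only at a sparse set of gantry angles, interpolate to every projection angle, then subtract before the -log transform. The arrays below are synthetic stand-ins for the MC and projection data, not the authors' GPU pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_angles, n_pix = 360, 512
    angles = np.arange(n_angles, dtype=float)

    # "MC" scatter computed at every 10th gantry angle only (scatter varies
    # smoothly with angle, which is what makes the interpolation viable).
    sparse_angles = angles[::10]
    sparse_scatter = (50 + 10 * np.sin(np.radians(sparse_angles)))[:, None] \
                     * np.ones((1, n_pix))

    # Interpolate the scatter estimate to every projection angle.
    full_scatter = np.empty((n_angles, n_pix))
    for j in range(n_pix):
        full_scatter[:, j] = np.interp(angles, sparse_angles, sparse_scatter[:, j])

    i0 = 10000.0
    raw = rng.poisson(1000 + full_scatter).astype(float)  # contaminated data
    corrected = -np.log(np.clip(raw - full_scatter, 1.0, None) / i0)
    uncorrected = -np.log(raw / i0)
    print(f"mean attenuation raised by {np.mean(corrected - uncorrected):.3f} "
          f"after scatter removal")
    ```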

  12. Noise removal using factor analysis of dynamic structures: application to cardiac gated studies.

    PubMed

    Bruyant, P P; Sau, J; Mallet, J J

    1999-10-01

    Factor analysis of dynamic structures (FADS) facilitates the extraction of relevant data, usually with physiologic meaning, from a dynamic set of images. The result of this process is a set of factor images and curves plus some residual activity. The set of factor images and curves can be used to retrieve the original data with reduced noise using an inverse factor analysis process (iFADS). This improvement in image quality is expected because the inverse process does not use the residual activity, which is assumed to consist of noise. The goal of this work is to quantitate and assess the efficiency of this method on gated cardiac images. A computer simulation of a planar cardiac gated study was performed. Noise was added to the simulated images, which were then processed by the FADS-iFADS program. The signal-to-noise ratios (SNRs) were compared between original and processed data. Planar gated cardiac studies from 10 patients were tested. The data processed by FADS-iFADS were subtracted from the original data, and the result of the subtraction was examined to evaluate whether it consisted only of noise. The SNR is about five times greater after the FADS-iFADS process. The difference between original and processed data is noise only, i.e., processed data equals original data minus some white noise. The FADS-iFADS process succeeds in removing an important part of the noise and is therefore a tool to improve the image quality of cardiac images. This tool does not decrease the spatial resolution (compared with smoothing filters) and does not lose details (compared with frequency-domain filters). Once the number of factors is chosen, the method is not operator-dependent.
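
    The FADS-iFADS round trip can be caricatured with a truncated matrix factorization: treat the gated study as a (pixels × frames) matrix, keep a small number of factors, and rebuild the sequence without the residual. The sketch below substitutes a rank-k SVD for the actual factor-analysis step and uses synthetic curves, so it illustrates the denoising principle rather than the published method.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_pix, n_frames, k = 4096, 16, 3

    # Three "physiologic" time-activity curves mixed by per-pixel weights.
    t = np.linspace(0, 2 * np.pi, n_frames)
    curves = np.stack([1 + 0.5 * np.sin(t),          # "ventricle"-like curve
                       1 + 0.3 * np.cos(t),          # "atrium"-like curve
                       np.ones_like(t)])             # static background
    weights = rng.uniform(0, 1, (n_pix, 3))          # factor images
    clean = weights @ curves                         # noiseless dynamic study
    noisy = rng.poisson(100 * clean) / 100.0         # Poisson-noised frames

    U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
    denoised = (U[:, :k] * s[:k]) @ Vt[:k]           # keep k factors only

    def snr(ref, est):
        return 10 * np.log10(np.sum(ref**2) / np.sum((ref - est)**2))

    print(f"SNR noisy:    {snr(clean, noisy):.1f} dB")
    print(f"SNR denoised: {snr(clean, denoised):.1f} dB")
    ```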

  13. Left ventricular fluid mechanics: the long way from theoretical models to clinical applications.

    PubMed

    Pedrizzetti, Gianni; Domenichini, Federico

    2015-01-01

    The flow inside the left ventricle is characterized by the formation of vortices that smoothly accompany blood from the mitral inlet to the aortic outlet. Computational fluid dynamics has shed some light on the fundamental processes involved in vortex motion. More recently, patient-specific numerical simulations have become an increasingly feasible tool that can be integrated with developing imaging technologies. The existing computational methods are reviewed from the perspective of their potential role as a novel aid for advanced clinical analysis. The current results obtained by simulation methods, either alone or in combination with medical imaging, are summarized. Open problems are highlighted and prospective clinical applications are discussed.

  14. Coded-aperture Compton camera for gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Farber, Aaron M.

    This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness, and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed, and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh are presented. A discussion of the limitations and potential areas of further study is also presented.

  15. Augmented reality intravenous injection simulator based on 3D medical imaging for veterinary medicine.

    PubMed

    Lee, S; Lee, J; Lee, A; Park, N; Lee, S; Song, S; Seo, A; Lee, H; Kim, J-I; Eom, K

    2013-05-01

    Augmented reality (AR) is a technology which enables users to see the real world with virtual objects superimposed upon or composited with it. AR simulators have been developed and used in human medicine, but not in veterinary medicine. The aim of this study was to develop an AR intravenous (IV) injection simulator to train veterinary and pre-veterinary students to perform canine venipuncture. Computed tomographic (CT) images of a beagle dog were scanned using a 64-channel multidetector. The CT images were transformed into volumetric data sets using an image segmentation method and were converted into a stereolithography format for creating 3D models. An AR-based interface was developed for an AR simulator for IV injection. Veterinary and pre-veterinary student volunteers were randomly assigned to an AR-trained group or a control group trained using more traditional methods (n = 20/group; n = 8 pre-veterinary students and n = 12 veterinary students in each group), and their proficiency at IV injection technique in live dogs was assessed after training was completed. Students were also asked to complete a questionnaire administered after using the simulator. The group trained using the AR simulator was more proficient at the IV injection technique in real dogs than the control group (P ≤ 0.01). The students agreed that they had learned the IV injection technique through the AR simulator. Although the system used in this study needs to be modified before it can be adopted for veterinary educational use, AR simulation has been shown to be a very effective tool for training medical personnel. Using the technology reported here, veterinary AR simulators could be developed for future use in veterinary education. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  16. X-ray Micro-Tomography of Ablative Heat Shield Materials

    NASA Technical Reports Server (NTRS)

    Panerai, Francesco; Ferguson, Joseph; Borner, Arnaud; Mansour, Nagi N.; Barnard, Harold S.; MacDowell, Alastair A.; Parkinson, Dilworth Y.

    2016-01-01

    X-ray micro-tomography is a non-destructive characterization technique that allows imaging of materials structures with voxel sizes in the micrometer range. This level of resolution makes the technique very attractive for imaging porous ablators used in hypersonic entry systems. Besides providing a high fidelity description of the material architecture, micro-tomography enables computations of bulk material properties and simulations of micro-scale phenomena. This presentation provides an overview of a collaborative effort between NASA Ames Research Center and Lawrence Berkeley National Laboratory, aimed at developing micro-tomography experiments and simulations for porous ablative materials. Measurements are carried using x-rays from the Advanced Light Source at Berkeley Lab on different classes of ablative materials used in NASA entry systems. Challenges, strengths and limitations of the technique for imaging materials such as lightweight carbon-phenolic systems and woven textiles are discussed. Computational tools developed to perform numerical simulations based on micro-tomography are described. These enable computations of material properties such as permeability, thermal and radiative conductivity, tortuosity and other parameters that are used in ablator response models. Finally, we present the design of environmental cells that enable imaging materials under simulated operational conditions, such as high temperature, mechanical loads and oxidizing atmospheres. Keywords: Micro-tomography, Porous media, Ablation

  17. Simulating Optical Correlation on a Digital Image Processing Board

    NASA Astrophysics Data System (ADS)

    Denning, Bryan

    1998-04-01

    Optical correlation is a useful tool for recognizing objects in video scenes. In this paper, we explore the characteristics of a composite filter known as the equal correlation peak synthetic discriminant function (ECP SDF). Although the ECP SDF is commonly used in coherent optical correlation systems, we simulated the operation of a correlator using an EPIX frame grabber/image processor board to complete this work. Issues pertaining to simulating correlation using an EPIX board will be discussed. Additionally, the ability of the ECP SDF to detect objects that have been subjected to in-plane rotation and small scale changes will be addressed by correlating filters against true-class objects placed randomly within a scene. To test the robustness of the filters, the results of correlating the filter against false-class objects that closely resemble the true class will also be presented.
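    The ECP SDF itself has a compact closed form: a linear combination of the training views chosen so that every view produces the same central correlation value. A sketch under that standard formulation (array shapes and the FFT-based correlation are assumptions of this example, not the EPIX-board implementation):

```python
import numpy as np

def ecp_sdf_filter(train_imgs, peak=1.0):
    """Build an equal-correlation-peak SDF composite filter.
    train_imgs: (n, ny, nx) true-class training views (e.g. rotated/scaled).
    Solves h = X (X^T X)^{-1} c so that every training image yields the
    same central correlation value `peak`."""
    n = train_imgs.shape[0]
    X = train_imgs.reshape(n, -1).T            # columns are training images
    c = np.full(n, peak)                       # equal desired peaks
    h = X @ np.linalg.solve(X.T @ X, c)
    return h.reshape(train_imgs.shape[1:])

def correlate(scene, h):
    """FFT-based cross-correlation of a scene with the SDF filter;
    a bright peak flags a true-class object."""
    S = np.fft.fft2(scene)
    H = np.fft.fft2(h, s=scene.shape)
    return np.real(np.fft.ifft2(S * np.conj(H)))
```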

  18. ThunderSTORM: a comprehensive ImageJ plug-in for PALM and STORM data analysis and super-resolution imaging

    PubMed Central

    Ovesný, Martin; Křížek, Pavel; Borkovec, Josef; Švindrych, Zdeněk; Hagen, Guy M.

    2014-01-01

    Summary: ThunderSTORM is an open-source, interactive and modular plug-in for ImageJ designed for automated processing, analysis and visualization of data acquired by single-molecule localization microscopy methods such as photo-activated localization microscopy and stochastic optical reconstruction microscopy. ThunderSTORM offers an extensive collection of processing and post-processing methods so that users can easily adapt the process of analysis to their data. ThunderSTORM also offers a set of tools for creation of simulated data and quantitative performance evaluation of localization algorithms using Monte Carlo simulations. Availability and implementation: ThunderSTORM and the online documentation are both freely accessible at https://code.google.com/p/thunder-storm/ Contact: guy.hagen@lf1.cuni.cz Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24771516

  19. Multi-scale image segmentation and numerical modeling in carbonate rocks

    NASA Astrophysics Data System (ADS)

    Alves, G. C.; Vanorio, T.

    2016-12-01

    Numerical methods based on computational simulations can be an important tool for estimating the physical properties of rocks. They can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield conflicting results with respect to the physical laboratory. This problem is exacerbated in carbonate rocks due to their heterogeneity at all scales. We developed a multi-scale approach that performs segmentation of the rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. The samples were then imaged with a scanning electron microscope (SEM) as well as an optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave equation. Our results show that a multi-scale approach can efficiently account for micro-porosities in tight micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by a larger grain/micrite ratio, results show that SEM-scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intragranular porosity. This suggests that, for high-porosity carbonate samples, optical microscope images would be better suited for numerical simulations.
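    As an illustration of the multi-scale bookkeeping involved, a threshold-based segmentation gives a pore fraction at each imaging scale, and the micro-porosity hosted in the micrite can be folded into the optical-scale estimate. The thresholding rule and combination formula below are schematic assumptions, not the three segmentation techniques used in the paper:

```python
import numpy as np

def segment_porosity(image, pore_threshold):
    """Threshold-segment an SEM/optical image into pore vs. solid and
    return the 2-D porosity estimate (pore pixel fraction). Dark pixels
    are taken as pores, an illustrative convention."""
    return (image < pore_threshold).mean()

def total_porosity(macro_phi, micrite_fraction, micro_phi):
    """Combine scales: macro-porosity resolved at the optical scale plus
    micro-porosity hosted in the micrite fraction (resolved only by SEM)."""
    return macro_phi + (1.0 - macro_phi) * micrite_fraction * micro_phi
```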

  20. A simple prescription for simulating and characterizing gravitational arcs

    NASA Astrophysics Data System (ADS)

    Furlanetto, C.; Santiago, B. X.; Makler, M.; de Bom, C.; Brandt, C. H.; Neto, A. F.; Ferreira, P. C.; da Costa, L. N.; Maia, M. A. G.

    2013-01-01

    Simple models of gravitational arcs are crucial for simulating large samples of these objects with full control of the input parameters. These models also provide approximate and automated estimates of the shape and structure of the arcs, which are necessary for detecting and characterizing these objects in massive wide-area imaging surveys. Here we present and explore the ArcEllipse, a simple prescription for creating objects with a shape similar to gravitational arcs. We also present PaintArcs, a code that couples this geometrical form with a brightness distribution and adds the resulting object to images. Finally, we introduce ArcFitting, a tool that fits ArcEllipses to images of real gravitational arcs. We validate this fitting technique using simulated arcs and apply it to CFHTLS and HST images of tangential arcs around clusters of galaxies. Our simple ArcEllipse model for the arc, associated with a Sérsic profile for the source, recovers the total signal in real images typically to within 10%-30%. The ArcEllipse+Sérsic models also automatically recover visual estimates of the length-to-width ratios of real arcs. Residual maps between data and model images reveal the incidence of arc substructure. They may thus be used as a diagnostic for arcs formed by the merging of multiple images. The incidence of these substructures is the main factor that prevents ArcEllipse models from accurately describing real lensed systems.
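    One plausible way to realize an ArcEllipse-like shape is to bend an ellipse's major axis along a circle of fixed curvature radius; the exact prescription is defined in the paper, so the parameterization below is only an illustrative sketch:

```python
import numpy as np

def arc_ellipse(r_c, a, b, n=200):
    """Sample points of an ArcEllipse-like curve: an ellipse with
    semi-axes (a, b) whose major axis is bent along a circle of
    curvature radius r_c. One plausible parameterization, not
    necessarily the paper's exact definition."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    along = a * np.cos(t)                 # arc length along the circle
    radial = b * np.sin(t)                # offset across the circle
    phi = along / r_c                     # angle subtended on the circle
    x = (r_c + radial) * np.cos(phi) - r_c
    y = (r_c + radial) * np.sin(phi)
    return x, y
```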

  1. Through-focus scanning optical microscopy (TSOM) with adaptive optics

    NASA Astrophysics Data System (ADS)

    Lee, Jun Ho; Park, Gyunam; Jeong, Junhee; Park, Chris

    2018-03-01

    Through-focus scanning optical microscopy (TSOM) with nanometer-scale lateral and vertical sensitivity levels matching those of scanning electron microscopy has been demonstrated to be useful both for 3D inspections and metrology assessments. In 2014, funded by two private companies (Nextin/Samsung Electronics) and the Korea Evaluation Institute of Industrial Technology (KEIT), a research team from four universities in South Korea set out to investigate core technologies for developing in-line TSOM inspection and metrology tools, with the respective teams focusing on optics implementation, defect inspection, computer simulation and high-speed metrology matching. We initially confirmed the reported validity of the TSOM operation through a computer simulation, after which we implemented the TSOM operation by through-focus scanning of existing UV (355 nm) and IR (800 nm) inspection tools. These tools have an identical sampling distance of 150 nm but different resolving distances (310 and 810 nm, respectively). We initially observed some improvement in the defect inspection sensitivity level on TSV (through-silicon via) samples with 6.6 μm diameters. However, during the experiment, we noted sensitivity and instability issues when attempting to acquire TSOM images. As TSOM 3D information is indirectly extracted by differentiating a target TSOM image from reference TSOM images, any instability or mismatch in imaging conditions can result in measurement errors. As a remedy, we proposed the application of adaptive optics to the TSOM operation and developed a closed-loop system with a tip/tilt mirror and a Shack-Hartmann sensor on an optical bench. We were able to keep the plane position within an RMS of 0.4 pixel by actively compensating for any position instability that arose during the TSOM scanning process along the optical axis. Currently, we are also developing another TSOM tool with a deformable mirror instead of a tip/tilt mirror, in which case no mechanical scanning will be required.
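    The tip/tilt compensation described above is, at its core, a closed-loop integrator fed by the Shack-Hartmann measurements. A toy sketch of one loop step (the loop gain and the slope-averaging estimator are illustrative assumptions, not the system's actual controller):

```python
import numpy as np

def tip_tilt_loop_step(slopes, mirror_cmd, gain=0.3):
    """One step of a simple integrator controller for a tip/tilt loop:
    average the Shack-Hartmann spot displacements to estimate the global
    tip/tilt error and integrate it onto the mirror command.
    slopes: (n_subapertures, 2) spot displacements; gain is illustrative."""
    tip_err, tilt_err = slopes.mean(axis=0)          # global wavefront tilt
    return mirror_cmd - gain * np.array([tip_err, tilt_err])
```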

  2. Modelling the transport of optical photons in scintillation detectors for diagnostic and radiotherapy imaging

    NASA Astrophysics Data System (ADS)

    Roncali, Emilie; Mosleh-Shirazi, Mohammad Amin; Badano, Aldo

    2017-10-01

    Computational modelling of radiation transport can enhance the understanding of the relative importance of individual processes involved in imaging systems. Modelling is a powerful tool for improving detector designs in ways that are impractical or impossible to achieve through experimental measurements. Modelling of light transport in scintillation detectors used in radiology and radiotherapy imaging that rely on the detection of visible light plays an increasingly important role in detector design. Historically, researchers have invested heavily in modelling the transport of ionizing radiation while light transport is often ignored or coarsely modelled. Due to the complexity of existing light transport simulation tools and the breadth of custom codes developed by users, light transport studies are seldom fully exploited and have not reached their full potential. This topical review aims at providing an overview of the methods employed in freely available and other described optical Monte Carlo packages and analytical models and discussing their respective advantages and limitations. In particular, applications of optical transport modelling in nuclear medicine, diagnostic and radiotherapy imaging are described. A discussion on the evolution of these modelling tools into future developments and applications is presented. The authors declare equal leadership and contribution regarding this review.

  3. VirSSPA- a virtual reality tool for surgical planning workflow.

    PubMed

    Suárez, C; Acha, B; Serrano, C; Parra, C; Gómez, T

    2009-03-01

    A virtual reality tool, called VirSSPA, was developed to optimize the planning of surgical processes. Segmentation algorithms were implemented for computed tomography (CT) images: a region growing procedure was used for soft tissues and a thresholding algorithm was implemented to segment bones. The algorithms operate semiautomatically, since they only need seed selection with the mouse on each tissue to be segmented by the user. The novelty of the paper is the adaptation of an enhancement method based on histogram thresholding applied to CT images for surgical planning, which simplifies subsequent segmentation. A substantial improvement of the virtual reality tool VirSSPA was obtained with these algorithms. VirSSPA was used to optimize surgical planning, to decrease the time spent on surgical planning and to improve operative results. The success rate increases because surgeons are able to see the exact extent of the patient's ailment. This tool can decrease operating room time, thus resulting in reduced costs. Virtual simulation was effective for optimizing surgical planning, which could, consequently, result in improved outcomes with reduced costs.
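    A seeded region-growing step of the kind described can be sketched in a few lines: grow from the user's mouse-selected seed and absorb neighbours whose intensity stays within a tolerance. The 4-connectivity and fixed tolerance are assumptions of this sketch, not VirSSPA's exact criteria:

```python
import numpy as np
from collections import deque

def region_grow(ct_slice, seed, tol):
    """Semi-automatic region growing for soft-tissue segmentation:
    starting from a user-picked seed pixel, absorb 4-connected
    neighbours whose HU value lies within `tol` of the seed value."""
    mask = np.zeros(ct_slice.shape, dtype=bool)
    seed_val = ct_slice[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < ct_slice.shape[0] and 0 <= nx < ct_slice.shape[1]
                    and not mask[ny, nx]
                    and abs(ct_slice[ny, nx] - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```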

  4. New insights into galaxy structure from GALPHAT- I. Motivation, methodology and benchmarks for Sérsic models

    NASA Astrophysics Data System (ADS)

    Yoon, Ilsang; Weinberg, Martin D.; Katz, Neal

    2011-06-01

    We introduce a new galaxy image decomposition tool, GALPHAT (GALaxy PHotometric ATtributes), which is a front-end application of the Bayesian Inference Engine (BIE), a parallel Markov chain Monte Carlo package, to provide full posterior probability distributions and reliable confidence intervals for all model parameters. The BIE relies on GALPHAT to compute the likelihood function. GALPHAT generates scale-free cumulative image tables for the desired model family with precise error control. Interpolation of this table yields accurate pixellated images with any centre, scale and inclination angle. GALPHAT then rotates the image by position angle using a Fourier shift theorem, yielding high-speed, accurate likelihood computation. We benchmark this approach using an ensemble of simulated Sérsic model galaxies over a wide range of observational conditions: the signal-to-noise ratio S/N, the ratio of galaxy size to the point spread function (PSF) and the image size, and errors in the assumed PSF; and a range of structural parameters: the half-light radius re and the Sérsic index n. We characterize the strength of parameter covariance in the Sérsic model, which increases with S/N and n, and the results strongly motivate the need for the full posterior probability distribution in galaxy morphology analyses and later inferences. The test results for simulated galaxies successfully demonstrate that, with a careful choice of Markov chain Monte Carlo algorithms and fast model image generation, GALPHAT is a powerful analysis tool for reliably inferring morphological parameters from a large ensemble of galaxies over a wide range of different observational conditions.
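    A building block of this kind of Fourier-domain image manipulation is the sub-pixel shift via the Fourier shift theorem (rotations can be assembled from such shifts applied as shears). The sketch below shows only that generic building block, not GALPHAT's actual implementation:

```python
import numpy as np

def fourier_shift(img, dy, dx):
    """Sub-pixel image shift via the Fourier shift theorem: a real-space
    shift is a linear phase ramp in frequency space, so no interpolation
    kernel is needed and flux is preserved."""
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    phase = np.exp(-2j * np.pi * (fy * dy + fx * dx))   # shift by (dy, dx) pixels
    return np.real(np.fft.ifft2(np.fft.fft2(img) * phase))
```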

  5. An intersubject variable regional anesthesia simulator with a virtual patient architecture.

    PubMed

    Ullrich, Sebastian; Grottke, Oliver; Fried, Eduard; Frommen, Thorsten; Liao, Wei; Rossaint, Rolf; Kuhlen, Torsten; Deserno, Thomas M

    2009-11-01

    The main purpose is to provide an intuitive VR-based training environment for regional anesthesia (RA). The research question is how to process subject-specific datasets, organize them in a meaningful way, and perform the simulation for peripheral regions. We propose a flexible virtual patient architecture and methods to process datasets. Image acquisition, image processing (especially segmentation), interactive nerve modeling and permutations (nerve instantiation) are described in detail. The simulation of electric impulse stimulation and the corresponding responses is essential for the training of peripheral RA and is solved by an approach based on the electric distance. We have created an XML-based virtual patient database with several subjects. Prototypes of the simulation are implemented and run on multimodal VR hardware (e.g., a stereoscopic display and a haptic device). A first user pilot study has confirmed our approach. The virtual patient architecture enables support for arbitrary scenarios on different subjects. This concept can also be used for other simulators. In future work, we plan to extend the simulation and conduct further evaluations in order to provide a tool for routine RA training.

  6. Three-Dimensional Geometric Modeling of Membrane-bound Organelles in Ventricular Myocytes: Bridging the Gap between Microscopic Imaging and Mathematical Simulation

    PubMed Central

    Yu, Zeyun; Holst, Michael J.; Hayashi, Takeharu; Bajaj, Chandrajit L.; Ellisman, Mark H.; McCammon, J. Andrew; Hoshijima, Masahiko

    2009-01-01

    A general framework of image-based geometric processing is presented to bridge the gap between three-dimensional (3D) imaging that provides structural details of a biological system and mathematical simulation where high-quality surface or volumetric meshes are required. A 3D density map is processed in the order of image pre-processing (contrast enhancement and anisotropic filtering), feature extraction (boundary segmentation and skeletonization), and high-quality and realistic surface (triangular) and volumetric (tetrahedral) mesh generation. While the tool-chain described is applicable to general types of 3D imaging data, the performance is demonstrated specifically on membrane-bound organelles in ventricular myocytes that are imaged and reconstructed with electron microscopic (EM) tomography and two-photon microscopy (T-PM). Of particular interest in this study are two types of membrane-bound Ca2+-handling organelles, namely, transverse tubules (T-tubules) and junctional sarcoplasmic reticulum (jSR), both of which play an important role in regulating the excitation-contraction (E-C) coupling through dynamic Ca2+ mobilization in cardiomyocytes. PMID:18835449

  7. Three-dimensional geometric modeling of membrane-bound organelles in ventricular myocytes: bridging the gap between microscopic imaging and mathematical simulation.

    PubMed

    Yu, Zeyun; Holst, Michael J; Hayashi, Takeharu; Bajaj, Chandrajit L; Ellisman, Mark H; McCammon, J Andrew; Hoshijima, Masahiko

    2008-12-01

    A general framework of image-based geometric processing is presented to bridge the gap between three-dimensional (3D) imaging that provides structural details of a biological system and mathematical simulation where high-quality surface or volumetric meshes are required. A 3D density map is processed in the order of image pre-processing (contrast enhancement and anisotropic filtering), feature extraction (boundary segmentation and skeletonization), and high-quality and realistic surface (triangular) and volumetric (tetrahedral) mesh generation. While the tool-chain described is applicable to general types of 3D imaging data, the performance is demonstrated specifically on membrane-bound organelles in ventricular myocytes that are imaged and reconstructed with electron microscopic (EM) tomography and two-photon microscopy (T-PM). Of particular interest in this study are two types of membrane-bound Ca(2+)-handling organelles, namely, transverse tubules (T-tubules) and junctional sarcoplasmic reticulum (jSR), both of which play an important role in regulating the excitation-contraction (E-C) coupling through dynamic Ca(2+) mobilization in cardiomyocytes.
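    For the anisotropic-filtering step named in this pipeline (records 6 and 7), a common choice is Perona-Malik diffusion, which smooths noise while preserving membrane boundaries. A minimal sketch (the Perona-Malik scheme and its parameters are assumptions of this example; the paper does not specify this particular filter):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, lam=0.2):
    """Perona-Malik anisotropic diffusion: diffuse strongly in flat
    regions but weakly across strong gradients, so organelle boundaries
    survive the smoothing. kappa sets the edge scale (relative to the
    image's intensity range); lam <= 0.25 keeps the scheme stable."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conductance
    for _ in range(n_iter):
        # finite-difference gradients to the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```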

  8. Coma measurement by transmission image sensor with a PSM

    NASA Astrophysics Data System (ADS)

    Wang, Fan; Wang, Xiangzhao; Ma, Mingying; Zhang, Dongqing; Shi, Weijie; Hu, Jianming

    2005-01-01

    As feature sizes decrease, especially with the use of resolution enhancement techniques such as off-axis illumination and phase shifting masks, fast and accurate in-situ measurement of coma has become very important in improving the performance of modern lithographic tools. The measurement of coma can be achieved by the transmission image sensor, which is an aerial image measurement device. The coma can be determined by measuring the positions of the aerial image at multiple illumination settings. In the present paper, we improve the measurement accuracy of the above technique with an alternating phase shifting mask. Using scalar diffraction theory, we analyze the effect of coma on the aerial image. To analyze the effect of the alternating phase shifting mask, we compare the pupil filling of the mark used in the above technique with that of the phase-shifted mark used in the new technique. We calculate the coma-induced image displacements of the marks at multiple partial coherence and NA settings using the PROLITH simulation program. The simulation results show that the accuracy of coma measurement can be increased by approximately 20 percent using the alternating phase shifting mask.

  9. SU-E-J-100: Reconstruction of Prompt Gamma Ray Three Dimensional SPECT Image From Boron Neutron Capture Therapy (BNCT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, D; Jung, J; Suh, T

    2014-06-01

    Purpose: The purpose of this paper is to confirm the feasibility of acquiring a three-dimensional single photon emission computed tomography (SPECT) image from boron neutron capture therapy (BNCT) using Monte Carlo simulation. Methods: The pixelated SPECT detector, collimator and phantom were simulated using the Monte Carlo N-Particle eXtended (MCNPX) simulation tool. A thermal neutron source (<1 eV) was used to react with the boron uptake regions (BUR) in the phantom. Each geometry had a spherical pattern, and three different BURs (A, B and C regions, density: 2.08 g/cm3) were located in the middle of the brain phantom. The data from 128 projections for each sorting process were used to achieve image reconstruction. The ordered subset expectation maximization (OSEM) reconstruction algorithm was used to obtain a tomographic image with eight subsets and five iterations. Receiver operating characteristic (ROC) curve analysis was used to evaluate the geometric accuracy of the reconstructed image. Results: The OSEM image was compared with the original phantom pattern image. The area under the curve (AUC) was calculated as the gross area under each ROC curve. The three calculated AUC values were 0.738 (A region), 0.623 (B region), and 0.817 (C region). The differences between the center-to-center distances of the boron regions and the distances between the maximum-count points were 0.3 cm, 1.6 cm and 1.4 cm. Conclusion: The possibility of extracting a 3D BNCT SPECT image was confirmed using the Monte Carlo simulation and the OSEM algorithm. The prospects for obtaining an actual BNCT SPECT image were estimated from the quality of the simulated image and the simulation conditions. When multiple tumor regions are to be treated using BNCT, this work provides BNCT facilities with a reasonable model for determining how many useful images can be obtained from the SPECT. This research was supported by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, Information and Communication Technologies (ICT) and Future Planning (MSIP) (Grant No. 200900420) and the Radiation Technology Research and Development program (Grant No. 2013043498), Republic of Korea.
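    The OSEM update used here (eight subsets, five iterations) multiplies the current estimate by back-projected ratios of measured to expected counts, one subset at a time. A dense-matrix sketch of the generic algorithm (the system matrix A and the contiguous subset split are simplifying assumptions of this example; real implementations interleave projection angles and use sparse projectors):

```python
import numpy as np

def osem(A, projections, n_subsets=8, n_iters=5):
    """Ordered-subset expectation maximization, matching the abstract's
    settings (8 subsets, 5 iterations).
    A           -- (n_bins, n_voxels) system matrix
    projections -- measured counts, length n_bins."""
    n_bins, n_vox = A.shape
    f = np.ones(n_vox)                                   # flat initial image
    subsets = np.array_split(np.arange(n_bins), n_subsets)
    for _ in range(n_iters):
        for idx in subsets:
            As = A[idx]
            expected = As @ f                            # forward projection
            ratio = projections[idx] / np.maximum(expected, 1e-12)
            # multiplicative update with subset sensitivity normalization
            f *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return f
```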

  10. Simulation study of reticle enhancement technology applications for 157-nm lithography

    NASA Astrophysics Data System (ADS)

    Schurz, Dan L.; Flack, Warren W.; Karklin, Linard

    2002-03-01

    The acceleration of the International Technology Roadmap for Semiconductors (ITRS) is placing significant pressure on the industry's infrastructure, particularly the lithography equipment. As recently as 1997, there was no optical solution offered past the 130 nm design node. The current roadmap has the 65 nm node (reduced from 70 nm) pulled in one year to 2007. Both 248 nm and 193 nm wavelength lithography tools will be pushed to their practical resolution limits in the near term. Very high numerical aperture (NA) 193 nm exposure tools in conjunction with resolution enhancement techniques (RET) will postpone the requirement for 157 nm lithography in manufacturing. However, ICs produced at 70 nm design rules with manufacturable k1 values will require that 157 nm wavelength lithography tools incorporate the same RETs utilized in 248 nm and 193 nm tools. These enhancements will include Alternating Phase Shifting Masks (AltPSM) and Optical Proximity Correction (OPC) on F2-doped quartz reticle substrates. This study investigates simulation results when AltPSM is applied to sub-100 nm test patterns in 157 nm lithography in order to maintain Critical Dimension (CD) control for both nested and isolated geometries. Aerial image simulations are performed for a range of numerical apertures, chrome regulators, gate pitches and gate widths. The relative performance of phase-shifted versus binary structures is also compared. Results are demonstrated in terms of aerial image contrast and process window changes. The results clearly show that a combination of high NA and RET is necessary to achieve usable process windows for 70 nm line/space structures. In addition, it is important to consider two-dimensional proximity effects for sub-100 nm gate structures.

  11. Aortic dissection simulation models for clinical support: fluid-structure interaction vs. rigid wall models.

    PubMed

    Alimohammadi, Mona; Sherwood, Joseph M; Karimpour, Morad; Agu, Obiekezie; Balabani, Stavroula; Díaz-Zuccarini, Vanessa

    2015-04-15

    The management and prognosis of aortic dissection (AD) are often challenging, and the use of personalised computational models is being explored as a tool to improve clinical outcome. Including vessel wall motion in such simulations can provide more realistic and potentially more accurate results, but requires significant additional computational resources, as well as expertise. With clinical translation as the final aim, trade-offs between complexity, speed and accuracy are inevitable. The present study explores whether modelling wall motion is worth the additional expense in the case of AD, by carrying out fluid-structure interaction (FSI) simulations based on a sample patient case. Patient-specific anatomical details were extracted from computed tomography images to provide the fluid domain, from which the vessel wall was extrapolated. Two-way fluid-structure interaction simulations were performed, with coupled Windkessel boundary conditions and hyperelastic wall properties. The blood was modelled using the Carreau-Yasuda viscosity model and turbulence was accounted for via a shear stress transport model. A simulation without wall motion (rigid wall) was carried out for comparison purposes. The displacement of the vessel wall was comparable to reports from imaging studies in terms of intimal flap motion and contraction of the true lumen. Analysis of the haemodynamics around the proximal and distal false lumen in the FSI model showed complex flow structures caused by the expansion and contraction of the vessel wall. These flow patterns led to significantly different predictions of wall shear stress, particularly its oscillatory component, which were not captured by the rigid wall model. Through comparison with imaging data, the results of the present study indicate that the fluid-structure interaction methodology employed herein is appropriate for simulations of aortic dissection. Regions of high wall shear stress were not significantly altered by the wall motion; however, certain collocated regions of low and oscillatory wall shear stress, which may be critical for disease progression, were only identified in the FSI simulation. We conclude that, if patient-tailored simulations of aortic dissection are to be used as an interventional planning tool, then the additional complexity, expertise and computational expense required to model wall motion is indeed justified.
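    The Carreau-Yasuda model named above gives blood a shear-thinning viscosity between two Newtonian plateaus. A sketch with commonly quoted literature parameters for blood, which are not necessarily the values used in this study:

```python
import numpy as np

def carreau_yasuda(shear_rate, mu0=0.056, mu_inf=0.00345,
                   lam=3.313, n=0.3568, a=2.0):
    """Carreau-Yasuda shear-thinning viscosity (Pa.s) as a function of
    shear rate (1/s): mu = mu_inf + (mu0 - mu_inf)[1 + (lam*g)^a]^((n-1)/a).
    Defaults are common literature values for blood, used here only as
    illustrative assumptions."""
    return mu_inf + (mu0 - mu_inf) * (
        1.0 + (lam * np.asarray(shear_rate)) ** a) ** ((n - 1.0) / a)
```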

  12. Opportunity on 'Cabo Frio' (Simulated)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This image superimposes an artist's concept of the Mars Exploration Rover Opportunity atop the 'Cabo Frio' promontory on the rim of 'Victoria Crater' in the Meridiani Planum region of Mars, to give a sense of scale. The underlying image was taken by Opportunity's panoramic camera during the rover's 952nd Martian day, or sol (Sept. 28, 2006).

    This synthetic image of NASA's Opportunity Mars Exploration Rover at Victoria Crater was produced using 'Virtual Presence in Space' technology. Developed at NASA's Jet Propulsion Laboratory, Pasadena, Calif., this technology combines visualization and image processing tools with Hollywood-style special effects. The image was created using a photorealistic model of the rover and an approximately full-color mosaic.

  13. Dataflow Integration and Simulation Techniques for DSP System Design Tools

    DTIC Science & Technology

    2007-01-01


  14. Web-Based Learning and Instruction Support System for Pneumatics

    ERIC Educational Resources Information Center

    Yen, Chiaming; Li, Wu-Jeng

    2003-01-01

    This research presents a Web-based learning and instructional system for Pneumatics. The system includes course material, remote data acquisition modules, and a pneumatic laboratory set. The course material is in the HTML format accompanied with text, still and animated images, simulation programs, and computer aided design tools. The data…

  15. Homogeneous Canine Chest Phantom Construction: A Tool for Image Quality Optimization.

    PubMed

    Pavan, Ana Luiza Menegatti; Rosa, Maria Eugênia Dela; Giacomini, Guilherme; Bacchim Neto, Fernando Antonio; Yamashita, Seizo; Vulcano, Luiz Carlos; Duarte, Sergio Barbosa; Miranda, José Ricardo de Arruda; de Pina, Diana Rodrigues

    2016-01-01

    Digital radiographic imaging is increasingly used in veterinary practice. The use of radiation demands responsibility to maintain high image quality, and low doses are necessary because workers are often required to restrain the animal. Optimizing digital systems is necessary to avoid the unnecessary exposure arising from the phenomenon known as dose creep. Homogeneous phantoms are widely used to optimize image quality and dose. We developed an automatic computational methodology to classify and quantify tissues (i.e., lung tissue, adipose tissue, muscle tissue, and bone) in canine chest computed tomography exams. The thickness of each tissue was converted into simulator materials (i.e., Lucite, aluminum, and air). Dogs were separated into groups of 20 animals each according to weight. Mean weights were 6.5 ± 2.0 kg, 15.0 ± 5.0 kg, 32.0 ± 5.5 kg, and 50.0 ± 12.0 kg for the small, medium, large, and giant groups, respectively. A one-way analysis of variance revealed significant differences (p < 0.05) in all simulator material thicknesses quantified between groups. As a result, four phantoms were constructed for dorsoventral and lateral views. In conclusion, the present methodology allows the development of phantoms of the canine chest and possibly other body regions and/or animals. The proposed phantom is a practical tool that may be employed in future work to optimize veterinary X-ray procedures.
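    The tissue classification step can be pictured as windowing the CT voxels into HU ranges and accumulating per-tissue thickness along each ray, which is then converted to equivalent simulator-material thicknesses. The HU windows below are illustrative assumptions; the abstract does not list the actual thresholds:

```python
import numpy as np

# Illustrative HU classification windows (assumed, not from the paper).
HU_WINDOWS = {"lung": (-1000, -500), "fat": (-150, -50),
              "muscle": (10, 80), "bone": (200, 3000)}

def tissue_thickness_cm(hu_line, voxel_cm):
    """Accumulate per-tissue thickness along one ray of a chest CT by
    counting voxels falling in each HU window. The per-tissue thicknesses
    would then be converted to attenuation-equivalent Lucite, aluminum,
    and air thicknesses for the phantom."""
    return {tissue: np.sum((hu_line >= lo) & (hu_line <= hi)) * voxel_cm
            for tissue, (lo, hi) in HU_WINDOWS.items()}
```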

  16. Forward scattering effects on muon imaging

    NASA Astrophysics Data System (ADS)

    Gómez, H.; Gibert, D.; Goy, C.; Jourde, K.; Karyotakis, Y.; Katsanevas, S.; Marteau, J.; Rosas-Carbajal, M.; Tonazzo, A.

    2017-12-01

    Muon imaging is one of the most promising non-invasive techniques for density structure scanning, especially for large objects up to the kilometre scale. It already has interesting applications in different fields such as geophysics and nuclear safety, and has been proposed for others such as engineering and archaeology. One approach to this technique is based on the well-known radiography principle: reconstructing the incident direction of the detected muons after they cross the studied object. In this case, muons detected after first forward-scattering on the object surface represent an irreducible background noise, leading to a bias in the measurement and consequently in the reconstruction of the object's mean density. A prior characterization of this effect therefore provides valuable information for correcting the obtained results. Although the muon scattering process has already been described theoretically, a general study of this process has been carried out based on Monte Carlo simulations, resulting in a versatile tool to evaluate this effect for different object geometries and compositions. As an example, these simulations have been used to evaluate the impact of forward-scattered muons on two different applications of muon imaging, archaeology and volcanology, revealing a significant impact in the latter case. The general way in which these tools were developed allows equivalent studies to be made in the future for other muon imaging applications following the same procedure.
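    A toy version of the forward-scattering bias can be mocked up by sampling scattering angles and counting how many scattered muons still fall within the telescope's angular acceptance and are thus mistaken for through-going muons. The Gaussian angle model and the numbers below are illustrative assumptions, far simpler than the full Monte Carlo described:

```python
import numpy as np

def scattered_fraction(n=100_000, sigma_scatter_deg=3.0,
                       accept_deg=1.0, rng=None):
    """Toy Monte Carlo of the background discussed above: muons that
    forward-scatter on the object's surface by a small angle may remain
    inside the angular acceptance of the reconstruction and bias the
    mean-density estimate. Returns the fraction so misidentified."""
    rng = rng or np.random.default_rng(0)
    theta = np.abs(rng.normal(0.0, np.radians(sigma_scatter_deg), n))
    return np.mean(theta < np.radians(accept_deg))
```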

  17. Homogeneous Canine Chest Phantom Construction: A Tool for Image Quality Optimization

    PubMed Central

    2016-01-01

    Digital radiographic imaging is increasing in veterinary practice. The use of radiation demands responsibility to maintain high image quality. Low doses are necessary because workers are requested to restrain the animal. Optimizing digital systems is necessary to avoid unnecessary exposure, causing the phenomenon known as dose creep. Homogeneous phantoms are widely used to optimize image quality and dose. We developed an automatic computational methodology to classify and quantify tissues (i.e., lung tissue, adipose tissue, muscle tissue, and bone) in canine chest computed tomography exams. The thickness of each tissue was converted to simulator materials (i.e., Lucite, aluminum, and air). Dogs were separated into groups of 20 animals each according to weight. Mean weights were 6.5 ± 2.0 kg, 15.0 ± 5.0 kg, 32.0 ± 5.5 kg, and 50.0 ± 12.0 kg, for the small, medium, large, and giant groups, respectively. The one-way analysis of variance revealed significant differences in all simulator material thicknesses (p < 0.05) quantified between groups. As a result, four phantoms were constructed for dorsoventral and lateral views. In conclusion, the present methodology allows the development of phantoms of the canine chest and possibly other body regions and/or animals. The proposed phantom is a practical tool that may be employed in future work to optimize veterinary X-ray procedures. PMID:27101001

  18. Simulation of seagrass bed mapping by satellite images based on the radiative transfer model

    NASA Astrophysics Data System (ADS)

    Sagawa, Tatsuyuki; Komatsu, Teruhisa

    2015-06-01

    Seagrass and seaweed beds play important roles in coastal marine ecosystems. They are food sources and habitats for many marine organisms, and they influence the physical, chemical, and biological environment. They are sensitive to human impacts such as reclamation and pollution. Therefore, their management and preservation are necessary for a healthy coastal environment. Satellite remote sensing is a useful tool for mapping and monitoring seagrass beds. The efficiency of seagrass mapping, seagrass bed classification in particular, has been evaluated by mapping accuracy using an error matrix. However, mapping accuracies are influenced by coastal environmental factors such as seawater transparency, bathymetry, and substrate type. Coastal management requires sufficient accuracy and an understanding of mapping limitations for monitoring coastal habitats, including seagrass beds. Previous studies are mainly based on case studies in specific regions and seasons, and the extensive data required to generalise assessments of classification accuracy from case studies have proven difficult to obtain. This study aims to build a simulator based on a radiative transfer model to produce modelled satellite images and assess the visual detectability of seagrass beds under different transparencies and seagrass coverages, as well as to examine mapping limitations and classification accuracy. Our simulations yielded a model relating water transparency to the depth limit of mapping and indicated the possibility of seagrass density mapping under certain ideal conditions. The results show that modelled satellite images are useful in evaluating classification accuracy and that remote sensing can be established as a reliable tool for seagrass bed monitoring.
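    The kind of radiative-transfer relation such a simulator builds on can be illustrated with the classic shallow-water reflectance model, in which the bottom signal decays with the two-way diffuse attenuation of the water column. A sketch (the detectability threshold is an illustrative assumption):

```python
import numpy as np

def shallow_water_reflectance(R_bottom, R_deep, K, depth_m):
    """Classic shallow-water reflectance relation: the observed signal is
    the deep-water reflectance plus the bottom contrast attenuated by the
    two-way diffuse attenuation K over the water depth."""
    return R_deep + (R_bottom - R_deep) * np.exp(-2.0 * K * depth_m)

def mapping_depth_limit(R_bottom, R_deep, K, min_contrast=0.005):
    """Depth at which the bottom contrast drops below a detectability
    threshold (threshold value is illustrative), i.e. the depth limit of
    seagrass bed mapping for a given water transparency."""
    return np.log(np.abs(R_bottom - R_deep) / min_contrast) / (2.0 * K)
```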

  19. ILT based defect simulation of inspection images accurately predicts mask defect printability on wafer

    NASA Astrophysics Data System (ADS)

    Deep, Prakash; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2016-05-01

    At advanced technology nodes, mask complexity has increased because of the large-scale use of resolution enhancement technologies (RET), which include Optical Proximity Correction (OPC), Inverse Lithography Technology (ILT) and Source Mask Optimization (SMO). The number of defects detected during inspection of such masks has increased drastically, and differentiating critical from non-critical defects is more challenging, complex and time consuming. Because of the significant defectivity of EUVL masks and the non-availability of actinic inspection, it is important, and also challenging, to predict the criticality of defects for printability on the wafer. This is one of the significant barriers to the adoption of EUVL for semiconductor manufacturing. Until actinic inspection is available, techniques are desired that can determine defect criticality from images captured by non-actinic inspection. High-resolution inspection of photomasks detects many defects, which are used for process and mask qualification. Repairing all defects is not practical and probably not required; however, it is imperative to know which defects are severe enough to impact the wafer before repair. Additionally, a wafer printability check is always desired after repairing a defect. AIMS™ review is the industry standard for this; however, performing AIMS™ review for all defects is expensive and very time consuming. A fast, accurate and economical mechanism is desired that can predict defect printability on the wafer accurately and quickly from images captured by a high-resolution inspection machine. Predicting defect printability from such images is challenging because the high-resolution images do not correlate with actual mask contours. The challenge is compounded by the use of optical conditions during inspection that differ from the actual scanner conditions, so defects found in such images do not correlate directly with their actual impact on the wafer. Our automated defect simulation tool predicts the printability of defects at the wafer level and automates the process of defect dispositioning from images captured by a high-resolution inspection machine. It first eliminates false defects due to registration, focus errors, image capture errors and random noise caused during inspection. For the remaining real defects, actual mask-like contours are generated using the Calibre® ILT solution [1][2], which has been enhanced to predict the actual mask contours from high-resolution defect images. This enables accurate prediction of defect contours, which is not possible directly from inspection images because some information is already lost through optical effects. Calibre's simulation engine is used to generate images at the wafer level using scanner optical conditions and the mask-like contours as input. The tool then analyses the simulated images and predicts defect printability. It automatically calculates the maximum CD variation and decides which defects are severe enough to affect patterns on the wafer. In this paper, we assess the printability of defects for masks of advanced technology nodes. In particular, we compare the recovered mask contours with contours extracted from a SEM image of the mask, and compare simulation results with AIMS™ for a variety of defects and patterns. The results of the printability assessment and the accuracy of the comparison are presented in this paper. We also suggest how this method can be extended to predict the printability of defects identified on EUV photomasks.

  20. Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths.

    PubMed

    Ingaramo, Maria; York, Andrew G; Hoogendoorn, Eelco; Postma, Marten; Shroff, Hari; Patterson, George H

    2014-03-17

    We use Richardson-Lucy (RL) deconvolution to combine multiple images of a simulated object into a single image in the context of modern fluorescence microscopy techniques. RL deconvolution can merge images with very different point-spread functions, such as in multiview light-sheet microscopes [1, 2], while preserving the best resolution information present in each image. We show that RL deconvolution is also easily applied to merge high-resolution, high-noise images with low-resolution, low-noise images, relevant when complementing conventional microscopy with localization microscopy. We also use RL deconvolution to merge images produced by different simulated illumination patterns, relevant to structured illumination microscopy (SIM) [3, 4] and image scanning microscopy (ISM). The quality of our ISM reconstructions is at least as good as reconstructions using standard inversion algorithms for ISM data, but our method follows a simpler recipe that requires no mathematical insight. Finally, we apply RL deconvolution to merge a series of ten images with varying signal and resolution levels. This combination is relevant to gated stimulated-emission depletion (STED) microscopy, and shows that merges of high-quality images are possible even in cases for which a non-iterative inversion algorithm is unknown. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
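    A common way to realize the multi-image merging described above is to cycle the standard RL update over the views, each with its own PSF. A minimal sketch under that scheme (the sequential update order and the convolution model are assumptions of this example, not necessarily the authors' exact recipe):

```python
import numpy as np
from scipy.signal import fftconvolve

def multiview_rl(images, psfs, n_iter=50):
    """Joint Richardson-Lucy deconvolution of several images of the same
    object, each blurred by its own PSF. One RL multiplicative update is
    applied per view per iteration, so each view pulls the shared
    estimate toward the resolution information it carries."""
    est = np.full(images[0].shape, float(images[0].mean()))
    for _ in range(n_iter):
        for img, psf in zip(images, psfs):
            blur = fftconvolve(est, psf, mode="same")        # forward model
            ratio = img / np.maximum(blur, 1e-12)            # data agreement
            est *= fftconvolve(ratio, psf[::-1, ::-1],       # back-project with
                               mode="same")                  # mirrored PSF
    return est
```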

  1. Design and simulation of EVA tools for first servicing mission of HST

    NASA Technical Reports Server (NTRS)

    Naik, Dipak; Dehoff, P. H.

    1993-01-01

    The Hubble Space Telescope (HST) was launched into near-earth orbit by the space shuttle Discovery on April 24, 1990. The payload of two cameras, two spectrographs, and a high-speed photometer is supplemented by three fine-guidance sensors that can be used for astronomy as well as for star tracking. A widely reported spherical aberration in the primary mirror causes HST to produce images of much lower quality than intended. A space shuttle repair mission in late 1993 will install small corrective mirrors that will restore the full intended optical capability of the HST. The first servicing mission (FSM) will involve considerable extravehicular activity (EVA). It is proposed to design special EVA tools for the FSM. This report includes details of the data acquisition system being developed to test the performance of the various EVA tools in ambient as well as simulated space environment.

  2. ISSARS Aerosol Database : an Incorporation of Atmospheric Particles into a Universal Tool to Simulate Remote Sensing Instruments

    NASA Technical Reports Server (NTRS)

    Goetz, Michael B.

    2011-01-01

    The Instrument Simulator Suite for Atmospheric Remote Sensing (ISSARS) entered its third and final year of development with the overall goal of providing a unified tool to simulate active and passive spaceborne atmospheric remote sensing instruments. These simulations focus on the atmosphere at wavelengths ranging from the UV to microwaves. ISSARS handles all assumptions and uses various scattering and microphysics models to fill the gaps left unspecified by the atmospheric models when creating each instrument's measurements. This will benefit mission design and reduce mission cost, enable efficient implementation of multi-instrument/platform Observing System Simulation Experiments (OSSE), and improve existing models as well as new advanced models in development. In this effort, various aerosol particles are incorporated into the system, and a simulation based on the input wavelength and the spectral refractive indices of each spherical test particle generates its scattering properties and phase functions. The atmospheric particles being integrated into the system comprise those observed by the Multi-angle Imaging SpectroRadiometer (MISR) and by the Multiangle SpectroPolarimetric Imager (MSPI). In addition, a complex scattering database generated by Prof. Ping Yang (Texas A&M) is also incorporated into this aerosol database. Future development with a radiative transfer code will generate a series of results that can be validated against results obtained by the MISR and MSPI instruments; in the meantime, test cases are simulated to determine the validity of the various plugin libraries used to determine or gather the scattering properties of particles studied by MISR and MSPI, or contained in the database of single-scattering properties of tri-axial ellipsoidal mineral dust particles created by Prof. Ping Yang.

  3. Next-generation Event Horizon Telescope developments: new stations for enhanced imaging

    NASA Astrophysics Data System (ADS)

    Palumbo, Daniel; Johnson, Michael; Doeleman, Sheperd; Chael, Andrew; Bouman, Katherine

    2018-01-01

    The Event Horizon Telescope (EHT) is a multinational Very Long Baseline Interferometry (VLBI) network of dishes joined to resolve general relativistic behavior near a supermassive black hole. The imaging quality of the EHT is largely dependent upon the sensitivity and spatial frequency coverage of the many baselines between its constituent telescopes. The EHT already contains many highly sensitive dishes, including the crucial Atacama Large Millimeter/submillimeter Array (ALMA), making it viable to add smaller, cheaper telescopes to the array and greatly improve the future capabilities of the EHT. We develop tools for optimizing the positions of new dishes in planned arrays. We also explore the feasibility of adding small orbiting dishes to the EHT, and develop orbital optimization tools for space-based VLBI imaging. Unlike the Millimetron mission, planned to be at L2, we specifically treat near-Earth orbiters, and find rapid filling of spatial frequency coverage across a large range of baseline lengths. Finally, we demonstrate significant improvement in image quality when adding small dishes to planned arrays in simulated observations.
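    The spatial-frequency coverage being optimized follows from the standard VLBI relation between a baseline vector and the (u, v) track it sweeps as the Earth rotates. A sketch of that relation (the 1.3 mm default wavelength reflects typical EHT observing; station-placement optimization would score many such tracks together):

```python
import numpy as np

def uv_track(baseline_m, dec_rad, hour_angles, wavelength_m=1.3e-3):
    """(u, v) track swept by one baseline as the Earth rotates, using the
    standard VLBI projection for a source at declination dec_rad.
    baseline_m: (Bx, By, Bz) equatorial baseline components in metres;
    hour_angles: source hour angles in radians over the observation."""
    Bx, By, Bz = baseline_m
    H = np.asarray(hour_angles)
    u = (Bx * np.sin(H) + By * np.cos(H)) / wavelength_m
    v = (-Bx * np.cos(H) * np.sin(dec_rad)
         + By * np.sin(H) * np.sin(dec_rad)
         + Bz * np.cos(dec_rad)) / wavelength_m
    return u, v
```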

  4. CONRAD—A software framework for cone-beam imaging in radiology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maier, Andreas; Choi, Jang-Hwan; Riess, Christian

    2013-11-15

    Purpose: In the community of x-ray imaging, there is a multitude of tools and applications that are used in scientific practice. Many of these tools are proprietary and can only be used within a certain lab. Often the same algorithm is implemented multiple times by different groups in order to enable comparison. In an effort to tackle this problem, the authors created CONRAD, a software framework that provides many of the tools that are required to simulate basic processes in x-ray imaging and perform image reconstruction with consideration of nonlinear physical effects. Methods: CONRAD is a Java-based state-of-the-art software platform with extensive documentation. It is based on platform-independent technologies. Special libraries offer access to hardware acceleration such as OpenCL. There is an easy-to-use interface for parallel processing. The software package includes different simulation tools that are able to generate up to 4D projection and volume data and respective vector motion fields. Well-known reconstruction algorithms such as FBP, DBP, and ART are included. All algorithms in the package are referenced to a scientific source. Results: A total of 13 different phantoms and 30 processing steps have already been integrated into the platform at the time of writing. The platform comprises 74,000 nonblank lines of code, of which 19% are used for documentation. The software package is available for download at http://conrad.stanford.edu. To demonstrate the use of the package, the authors reconstructed images from two different scanners, a table-top system and a clinical C-arm system. Runtimes were evaluated using the RabbitCT platform and demonstrate state-of-the-art runtimes with 2.5 s for the 256 problem size and 12.4 s for the 512 problem size. Conclusions: As a common software framework, CONRAD enables the medical physics community to share algorithms and develop new ideas. In particular, this offers new opportunities for scientific collaboration and quantitative performance comparison between the methods of different groups.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    St James, S; Argento, D; DeWitt, D

    Purpose: Fast neutron therapy is offered at the University of Washington Medical Center for the treatment of selected cancers. The hardware and control systems of the UW Clinical Neutron Therapy System are undergoing upgrades to enable delivery of intensity-modulated neutron therapy (IMNT). To clinically implement IMNT, dose verification tools need to be developed. We propose a portal imaging system that relies on the creation of positron-emitting isotopes (¹¹C and ¹⁵O) through (n, 2n) reactions with a PMMA plate placed below the patient. After field delivery, the plate is retrieved from the vault and imaged using a reader that detects the annihilation photons. The pattern of activity produced in the plate provides information to reconstruct the neutron fluence map, which can be compared to fluence maps from Monte Carlo (MCNP) simulations to verify treatment delivery. We have previously performed Monte Carlo simulations of the portal imaging system (GATE simulations) and the beam line (MCNP simulations). In this work, initial measurements using a prototype system are presented. Methods: Custom electronics were developed for BGO detectors read out with photomultiplier tubes (previous-generation PET detectors from a CTI ECAT 953 scanner). Two detectors were placed in coincidence, with a detector separation of 2 cm. Custom software was developed to create the crystal look-up tables and perform a limited-angle planar reconstruction with a stochastic normalization. To test the initial capabilities of the system, PMMA squares were irradiated with neutrons at a depth of 1.5 cm and read out using the prototype system. Doses ranging from 10-200 cGy were delivered. Results: Using the prototype system, dose differences in the therapeutic range could be determined. Conclusion: The prototype portal imaging system is capable of detecting neutron doses as low as 10-50 cGy and shows great promise as a patient QA tool for IMNT.

  6. Proximity matching for ArF and KrF scanners

    NASA Astrophysics Data System (ADS)

    Kim, Young Ki; Pohling, Lua; Hwee, Ng Teng; Kim, Jeong Soo; Benyon, Peter; Depre, Jerome; Hong, Jongkyun; Serebriakov, Alexander

    2009-03-01

    There are many IC manufacturers around the world that use various exposure systems and work to very tight requirements in order to establish and maintain stable lithographic processes at 65 nm, 45 nm and below. Once a process is established, the manufacturer wants to be able to run it on whichever tools are available. This is why proximity matching plays a key role in maximizing tool utilization, in terms of productivity, across different types of exposure tools. In this paper, we investigate the sources of error that cause optical proximity mismatch and evaluate several approaches for proximity matching of different types of 193 nm and 248 nm scanner systems, such as set-get sigma calibration, contrast adjustment and, finally, tuning of imaging parameters by optimization with Manual Scanner Matcher. First, to monitor the proximity mismatch, we collect CD measurement data for the reference tool and for the tool to be matched. Normally, the measurement is performed for a set of line or space through-pitch structures. Second, by simulation or experiment, we determine the sensitivity of the critical structures with respect to small adjustments of exposure settings such as NA, sigma inner, sigma outer, dose, focus scan range, etc., which are called 'proximity tuning knobs'. Then, with the help of special optimization software, we compute the proximity knob adjustments that have to be applied to the tool to be matched so that it matches the reference tool. Finally, we verify successful matching by exposing on the tool to be matched with the tuned exposure settings. This procedure is applicable to inter- and intra-scanner-type matching, and possibly also to process transfers to the design targets. To illustrate the approach, we show experimental data as well as results of imaging simulations. The results demonstrate successful matching of critical structures for ArF scanners of different tool generations.
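    The knob-tuning step can be cast as a small constrained least-squares problem: given simulated CD sensitivities to each knob, find the knob offsets that best cancel the measured through-pitch CD mismatch. A sketch of that formulation (the bound handling and function names are assumptions of this example, not the Manual Scanner Matcher internals):

```python
import numpy as np
from scipy.optimize import lsq_linear

def match_knobs(sensitivity, cd_mismatch, knob_limits):
    """Least-squares proximity matching.
    sensitivity  -- (n_structures, n_knobs) CD sensitivities from
                    simulation, e.g. columns for dNA, dsigma_in,
                    dsigma_out, ddose
    cd_mismatch  -- CD(tool to be matched) - CD(reference) per structure
    knob_limits  -- (n_knobs, 2) allowed [min, max] knob offsets
    Returns the knob adjustments to apply on the tool to be matched."""
    res = lsq_linear(sensitivity, -cd_mismatch,
                     bounds=(knob_limits[:, 0], knob_limits[:, 1]))
    return res.x
```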

  7. Poster — Thur Eve — 11: Validation of the orthopedic metallic artifact reduction tool for CT simulations at the Ottawa Hospital Cancer Centre

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sutherland, J; Foottit, C

    Metallic implants in patients can produce artifacts in kilovoltage CT simulation images, introducing noise and inaccuracies in CT number that affect anatomical segmentation and dose distributions. The commercial orthopedic metal artifact reduction algorithm (O-MAR) (Philips Healthcare) was recently made available on the CT simulation scanners at our institution. This study validated the clinical use of O-MAR by investigating its effects on CT number and dose distributions. O-MAR-corrected and uncorrected images were acquired with a Philips Brilliance Big Bore CT simulator of a cylindrical solid water phantom that contained various plugs (including metal) of known density. CT number accuracy was investigated by determining the mean and standard deviation in regions of interest (ROI) within each plug for uncorrected and O-MAR-corrected images and comparing with no-metal image values. Dose distributions were calculated using the Monaco treatment planning system. Seven open fields were equally spaced about the phantom around a ROI near its center. These were compared to a “correct” dose distribution calculated by overriding electron densities on a no-metal phantom image to produce an image containing metal but no artifacts. An overall improvement in CT number and dose distribution accuracy was achieved by applying the O-MAR correction. Mean CT numbers and standard deviations were generally improved. Exceptions included lung-equivalent media, which is consistent with vendor-specified contraindications. Dose profiles were found to vary by ±4% between uncorrected and O-MAR-corrected images, with O-MAR producing doses closer to ground truth.

  8. Comparison of binary mask defect printability analysis using virtual stepper system and aerial image microscope system

    NASA Astrophysics Data System (ADS)

    Phan, Khoi A.; Spence, Chris A.; Dakshina-Murthy, S.; Bala, Vidya; Williams, Alvina M.; Strener, Steve; Eandi, Richard D.; Li, Junling; Karklin, Linard

    1999-12-01

    As advanced process technologies in the wafer fabs push patterning processes toward lower k1 factors for sub-wavelength resolution printing, reticles are required to use optical proximity correction (OPC) and phase-shifted masks (PSM) for resolution enhancement. For OPC/PSM mask technology, defect printability is one of the major concerns. Current reticle inspection tools available on the market are sometimes not capable of consistently differentiating between an OPC feature and a true random defect. Due to the process complexity and high cost associated with making OPC/PSM reticles, it is important for both mask shops and lithography engineers to understand the impact of different defect types and sizes on printability. The Aerial Image Measurement System (AIMS) has been used in mask shops for a number of years for reticle applications such as aerial image simulation and transmission measurement of repaired defects. The Virtual Stepper System (VSS) provides an alternative method for defect printability simulation and analysis using reticle images captured by an optical inspection or review system. In this paper, pre-programmed defects and repairs from a Defect Sensitivity Monitor (DSM) reticle with 200 nm minimum features (at 1x) will be studied for printability. The resist lines simulated by AIMS and VSS are both compared to SEM images of resist wafers, qualitatively and quantitatively, using CD verification. Process window comparisons between unrepaired and repaired defects for both good and bad repair cases will be shown. The effect of mask repairs on resist pattern images for the binary mask case will be discussed. AIMS simulation was done at International Sematech, Virtual Stepper simulation at Zygo, and resist wafers were processed at the AMD Submicron Development Center using a DUV lithographic process for 0.18 micrometer logic process technology.

  9. Software systems for modeling articulated figures

    NASA Technical Reports Server (NTRS)

    Phillips, Cary B.

    1989-01-01

    Research in computer animation and simulation of human task performance requires sophisticated geometric modeling and user interface tools. The software for a research environment should present the programmer with a powerful but flexible substrate of facilities for displaying and manipulating geometric objects, yet ensure that future tools have a consistent and friendly user interface. Jack is a system which provides a flexible and extensible programmer and user interface for displaying and manipulating complex geometric figures, particularly human figures in a 3D working environment. It is a basic software framework for high-performance Silicon Graphics IRIS workstations for modeling and manipulating geometric objects in a general but powerful way. It provides a consistent and user-friendly interface across various applications in computer animation and simulation of human task performance. Currently, Jack provides input and control for applications including lighting specification and image rendering, anthropometric modeling, figure positioning, inverse kinematics, dynamic simulation, and keyframe animation.

  10. Validation of the SimSET simulation package for modeling the Siemens Biograph mCT PET scanner

    NASA Astrophysics Data System (ADS)

    Poon, Jonathan K.; Dahlbom, Magnus L.; Casey, Michael E.; Qi, Jinyi; Cherry, Simon R.; Badawi, Ramsey D.

    2015-02-01

    Monte Carlo simulation provides a valuable tool in performance assessment and optimization of system design parameters for PET scanners. SimSET is a popular Monte Carlo simulation toolkit that features fast simulation time, as well as variance reduction tools to further enhance computational efficiency. However, SimSET has lacked the ability to simulate block detectors until its most recent release. Our goal is to validate new features of SimSET by developing a simulation model of the Siemens Biograph mCT PET scanner and comparing the results to a simulation model developed in the GATE simulation suite and to experimental results. We used the NEMA NU-2 2007 scatter fraction, count rates, and spatial resolution protocols to validate the SimSET simulation model and its new features. The SimSET model overestimated the experimental results of the count rate tests by 11-23% and the spatial resolution test by 13-28%, which is comparable to previous validation studies of other PET scanners in the literature. The difference between the SimSET and GATE simulation was approximately 4-8% for the count rate test and approximately 3-11% for the spatial resolution test. In terms of computational time, SimSET performed simulations approximately 11 times faster than GATE simulations. The new block detector model in SimSET offers a fast and reasonably accurate simulation toolkit for PET imaging applications.

  11. A System Trade Study of Remote Infrared Imaging for Space Shuttle Reentry

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Ross, Martin N.; Baize, Rosemary; Horvath, Thomas J.; Berry, Scott A.; Krasa, Paul W.

    2008-01-01

    A trade study reviewing the primary operational parameters concerning the deployment of imaging assets in support of the Hypersonic Thermodynamic Infrared Measurements (HYTHIRM) project was undertaken. The objective was to determine key variables and constraints for obtaining thermal images of the Space Shuttle orbiter during reentry. The trade study investigated the performance characteristics and operating environment of optical instrumentation that may be deployed during a HYTHIRM data collection mission, and specified contributions to the Point Spread Function. It also investigated the constraints that have to be considered in order to optimize deployment through the use of mission planning tools. These tools simulate the radiance modeling of the vehicle as well as the expected spatial resolution based on the Orbiter trajectory and placement of land based or airborne optical sensors for given Mach numbers. Lastly, this report focused on the tools and methodology that have to be in place for real-time mission planning in order to handle the myriad of variables such as trajectory ground track, weather, and instrumentation availability that may only be known in the hours prior to landing.

  12. Development of a personalized training system using the Lung Image Database Consortium and Image Database Resource Initiative Database.

    PubMed

    Lin, Hongli; Wang, Weisheng; Luo, Jiawei; Yang, Xuedong

    2014-12-01

    The aim of this study was to develop a personalized training system using the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database. Collecting, annotating, and marking a large number of appropriate computed tomography (CT) scans, and providing the capability of dynamically selecting suitable training cases based on the performance levels of trainees and the characteristics of cases, are critical for developing an efficient training system. A novel approach is proposed to develop a personalized radiology training system for the interpretation of lung nodules in CT scans using the LIDC/IDRI database. It provides a Content-Boosted Collaborative Filtering (CBCF) algorithm for predicting the difficulty level of each case for each trainee when selecting suitable cases to meet individual needs, and a diagnostic simulation tool to enable trainees to analyze and diagnose lung nodules with the help of an image processing tool and a nodule retrieval tool. Preliminary evaluation of the system shows that a personalized training system for the interpretation of lung nodules is needed and useful for enhancing the professional skills of trainees. The approach of developing personalized training systems using the LIDC/IDRI database is a feasible solution to the challenges of constructing specific training programs in terms of cost and training efficiency.

  13. A Head and Neck Simulator for Radiology and Radiotherapy

    NASA Astrophysics Data System (ADS)

    Thompson, Larissa; Campos, Tarcísio P. R.

    2013-06-01

    Phantoms are suitable tools to simulate body tissues and organs in radiology and radiation therapy. This study presents the development of a physical head and neck phantom and its radiological response for simulating brain pathology. The following features on the phantom are addressed and compared to human data: mass density, chemical composition, anatomical shape, computerized tomography images and Hounsfield Units. Mass attenuation and kerma coefficients of the synthetic phantom and normal tissues, as well as their deviations, were also investigated. Radiological experiments were performed, including brain tumors and subarachnoid hemorrhage simulations. Computerized tomography images of such pathologies in phantom and human were obtained. The anthropometric dimensions of the phantom present anatomical conformation similar to a human head and neck. Elemental weight percentages of the equivalent tissues match the human ones. Hounsfield Unit values of the main developed structures are presented, approaching human data. Kerma and mass attenuation coefficients spectra from human and phantom are presented, demonstrating smaller deviations in the radiological X-ray spectral domain. In conclusion, the phantom presented suitable normal and pathological radiological responses relative to those observed in humans. It may improve radiological protocols and education in medical imaging.

  14. WUVS simulator: detectability of spectral lines with the WSO-UV spectrographs

    NASA Astrophysics Data System (ADS)

    Marcos-Arenal, Pablo; de Castro, Ana I. Gómez; Abarca, Belén Perea; Sachkov, Mikhail

    2017-04-01

    The World Space Observatory Ultraviolet telescope is equipped with high-dispersion (resolving power 55,000) spectrographs working in the 1150 to 3100 Å spectral range. To evaluate the impact of the design on the scientific objectives of the mission, a simulation software tool has been developed. This simulator builds on the development made for the PLATO space mission and is designed to generate synthetic time series of images, including models of all important noise sources. We describe its design and performance. Moreover, its application to the detectability of important spectral features for star formation and exoplanetary research is addressed.

  15. 3-D Brownian motion simulator for high-sensitivity nanobiotechnological applications.

    PubMed

    Toth, Arpád; Banky, Dániel; Grolmusz, Vince

    2011-12-01

    A wide variety of nanobiotechnological applications are being developed for nanoparticle-based in vitro diagnostic and imaging systems. Some of these systems make possible highly sensitive detection of molecular biomarkers. Frequently, the very low concentration of the biomarkers makes the classical, partial differential equation-based mathematical simulation of the motion of the nanoparticles involved impossible. We present a three-dimensional Brownian motion simulation tool for predicting the movement of nanoparticles under various thermal, viscosity, and geometric settings in a rectangular cuvette. For nonprofit users the server is freely available at http://brownian.pitgroup.org.
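
    As a minimal sketch of what such a simulator computes, the Python code below generates one 3-D Brownian trajectory using the Stokes-Einstein diffusion coefficient and reflective cuvette walls. All parameter values are illustrative assumptions, not settings from the paper:

        import numpy as np

        kB = 1.380649e-23            # Boltzmann constant (J/K)
        T = 298.0                    # temperature (K)
        eta = 1.0e-3                 # water viscosity (Pa*s)
        r = 50e-9                    # particle radius (m)
        D = kB * T / (6 * np.pi * eta * r)   # Stokes-Einstein diffusion coefficient

        dt, n_steps = 1e-4, 100_000  # time step (s), number of steps
        L = 1e-3                     # cuvette edge length (m)
        rng = np.random.default_rng(0)

        # Each displacement component is Gaussian with variance 2*D*dt.
        steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_steps, 3))
        path = np.cumsum(steps, axis=0)

        # Fold the free path into [0, L] to model reflective cuvette walls.
        path = np.abs((path + L) % (2 * L) - L)

    Before folding, the mean squared displacement grows as 6*D*t, which is a convenient sanity check on the step-size choice.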

  16. CAMEO-SIM: a physics-based broadband scene simulation tool for assessment of camouflage, concealment, and deception methodologies

    NASA Astrophysics Data System (ADS)

    Moorhead, Ian R.; Gilmore, Marilyn A.; Houlbrook, Alexander W.; Oxford, David E.; Filbee, David R.; Stroud, Colin A.; Hutchings, G.; Kirk, Albert

    2001-09-01

    Assessment of camouflage, concealment, and deception (CCD) methodologies is not a trivial problem; conventionally the only method has been to carry out field trials, which are both expensive and subject to the vagaries of the weather. In recent years computing power has increased such that there are now many research programs using synthetic environments for CCD assessment. Such an approach is attractive; the user has complete control over the environmental parameters, and many more scenarios can be investigated. The UK Ministry of Defence is currently developing a synthetic scene generation tool for assessing the effectiveness of air vehicle camouflage schemes. The software is sufficiently flexible to allow it to be used in a broader range of applications, including full CCD assessment. The synthetic scene simulation system (CAMEO-SIM) has been developed, as an extensible system, to provide imagery within the 0.4 to 14 micrometer spectral band with as high a physical fidelity as possible. It consists of a scene design tool, an image generator that incorporates both radiosity and ray-tracing processes, and an experimental trials tool. The scene design tool allows the user to develop a 3D representation of the scenario of interest from a fixed viewpoint. Targets of interest can be placed anywhere within this 3D representation and may be either static or moving. Different illumination conditions and atmospheric effects can be modeled, together with directional reflectance effects. The user has complete control over the level of fidelity of the final image. The output from the rendering tool is a sequence of radiance maps, which may be used by sensor models or for experimental trials in which observers carry out target acquisition tasks. The software also maintains an audit trail of all data selected to generate a particular image, both in terms of the material properties used and the rendering options chosen. A range of verification tests has shown that the software computes the correct values for analytically tractable scenarios. Validation tests using simple scenes have also been undertaken, and more complex validation tests using observer trials are planned. The current version of CAMEO-SIM and how its images are used for camouflage assessment are described, and the verification and validation tests undertaken are discussed. In addition, example images are used to demonstrate the significance of different effects, such as spectral rendering and shadows. Planned developments of CAMEO-SIM are also outlined.

  17. Estimation of left ventricular blood flow parameters: clinical application of patient-specific CFD simulations from 4D echocardiography

    NASA Astrophysics Data System (ADS)

    Larsson, David; Spühler, Jeannette H.; Günyeli, Elif; Weinkauf, Tino; Hoffman, Johan; Colarieti-Tosti, Massimiliano; Winter, Reidar; Larsson, Matilda

    2017-03-01

    Echocardiography is the most commonly used imaging modality in cardiology, assessing several aspects of cardiac viability. The importance of cardiac hemodynamics and 4D blood flow motion has recently been highlighted; however, such assessment is still difficult using routine echo-imaging. Combining imaging with computational fluid dynamics (CFD) simulations has instead proven valuable, but only a few models have been applied clinically. In the following, patient-specific CFD simulations from transthoracic dobutamine stress echocardiography were used to analyze the left ventricular 4D blood flow in three subjects: two with normal and one with reduced left ventricular function. At each stress level, 4D images were acquired using a GE Vivid E9 (4VD, 1.7 MHz/3.3 MHz) and velocity fields were simulated using a pipeline involving endocardial segmentation, valve position identification, and solution of the incompressible Navier-Stokes equations. Flow components defined as direct flow, delayed ejection flow, retained inflow, and residual volume were calculated by particle tracing using 4th-order Runge-Kutta integration. Additionally, systolic and diastolic average velocity fields were generated. Results indicated no major changes in the average velocity fields for any of the subjects. For the two subjects with normal left ventricular function, increased direct flow, decreased delayed ejection flow, constant retained inflow, and a considerable drop in residual volume were seen at increasing stress. In contrast, for the subject with reduced left ventricular function, the delayed ejection flow increased whilst the retained inflow decreased at increasing stress levels. This feasibility study represents one of the first clinical applications of an echo-based patient-specific CFD model at elevated stress levels, and highlights the potential of using echo-based models to capture highly transient flow events, as well as the ability of using simulation tools to study clinically complex phenomena. With larger patient studies planned for the future, and with the possibility of adding more anatomical features into the model framework, the current work demonstrates the potential of patient-specific CFD models as a tool for quantifying 4D blood flow in the heart.
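
    The flow-component analysis described above rests on advecting virtual particles through the simulated velocity field with classical 4th-order Runge-Kutta integration. Here is a minimal sketch with an illustrative analytic field standing in for the interpolated CFD velocities (the field and step sizes are assumptions, not values from the study):

        import numpy as np

        def velocity(x, t):
            # Illustrative steady swirling field; a real tracer would sample
            # the time-resolved CFD solution interpolated at (x, t).
            return np.array([-x[1], x[0], 0.1])

        def rk4_step(x, t, dt, v):
            k1 = v(x, t)
            k2 = v(x + 0.5 * dt * k1, t + 0.5 * dt)
            k3 = v(x + 0.5 * dt * k2, t + 0.5 * dt)
            k4 = v(x + dt * k3, t + dt)
            return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

        x, t, dt = np.array([1.0, 0.0, 0.0]), 0.0, 0.01   # seed position, time, step
        path = [x]
        for _ in range(1000):
            x = rk4_step(x, t, dt, velocity)
            t += dt
            path.append(x)
        path = np.asarray(path)

    Classifying each traced particle by where it enters and leaves the ventricle during the cardiac cycle then yields the direct flow, delayed ejection flow, retained inflow, and residual volume fractions.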

  18. Quantitative 3-D Imaging, Segmentation and Feature Extraction of the Respiratory System in Small Mammals for Computational Biophysics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trease, Lynn L.; Trease, Harold E.; Fowler, John

    2007-03-15

    One of the critical steps toward performing computational biology simulations, using mesh based integration methods, is in using topologically faithful geometry derived from experimental digital image data as the basis for generating the computational meshes. Digital image data representations contain both the topology of the geometric features and experimental field data distributions. The geometric features that need to be captured from the digital image data are three-dimensional, therefore the process and tools we have developed work with volumetric image data represented as data-cubes. This allows us to take advantage of 2D curvature information during the segmentation and feature extraction process. The process is basically: 1) segmenting to isolate and enhance the contrast of the features that we wish to extract and reconstruct, 2) extracting the geometry of the features in an isosurfacing technique, and 3) building the computational mesh using the extracted feature geometry. “Quantitative” image reconstruction and feature extraction is done for the purpose of generating computational meshes, not just for producing graphics "screen" quality images. For example, the surface geometry that we extract must represent a closed water-tight surface.
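
    Step 2 above is commonly implemented with a marching-cubes style isosurface extraction; the record does not name a specific algorithm, so the following Python sketch using scikit-image is an assumption about one standard way to do it:

        import numpy as np
        from skimage import measure

        # Synthetic segmented data-cube: a solid sphere stands in for a
        # contrast-enhanced anatomical feature isolated in step 1.
        z, y, x = np.mgrid[-32:32, -32:32, -32:32]
        volume = (np.sqrt(x**2 + y**2 + z**2) < 20.0).astype(np.float32)

        # Marching cubes returns vertices and triangular faces of a closed
        # surface, the kind of watertight geometry needed for meshing (step 3).
        verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
        print(verts.shape, faces.shape)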

  19. Pseudo CT estimation from MRI using patch-based random forest

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Lei, Yang; Shu, Hui-Kuo; Rossi, Peter; Mao, Hui; Shim, Hyunsuk; Curran, Walter J.; Liu, Tian

    2017-02-01

    MR simulators have recently gained popularity in radiation therapy planning because they avoid the radiation exposure associated with CT simulators. We propose a method for pseudo CT estimation from MR images based on a patch-based random forest. Patient-specific anatomical features are extracted from the aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified by feature selection to train the random forest. The well-trained random forest is then used to predict the pseudo CT of a new patient. This prediction technique was tested with human brain images, and the prediction accuracy was assessed using the original CT images. Peak signal-to-noise ratio (PSNR) and feature similarity (FSIM) indexes were used to quantify the differences between the pseudo and original CT images. The experimental results showed that the proposed method could accurately generate pseudo CT images from MR images. In summary, we have developed a new pseudo CT prediction method based on a patch-based random forest, demonstrated its clinical feasibility, and validated its prediction accuracy. This pseudo CT prediction technique could be a useful tool for MRI-based radiation treatment planning and attenuation correction in a PET/MRI scanner.
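
    A minimal sketch of the patch-based regression idea, assuming co-registered MR/CT training volumes and using scikit-learn's random forest; the patch size, forest settings, and synthetic data are illustrative, and the feature-selection step is omitted:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        mr = rng.random((24, 24, 24)).astype(np.float32)                     # stand-in MR volume
        ct = (1000.0 * mr + rng.normal(0, 10, mr.shape)).astype(np.float32)  # aligned CT

        def extract_patches(vol, r=1):
            """Flatten the (2r+1)^3 neighbourhood of every interior voxel."""
            out = []
            for i in range(r, vol.shape[0] - r):
                for j in range(r, vol.shape[1] - r):
                    for k in range(r, vol.shape[2] - r):
                        out.append(vol[i-r:i+r+1, j-r:j+r+1, k-r:k+r+1].ravel())
            return np.asarray(out)

        X = extract_patches(mr)                 # per-voxel signatures
        y = ct[1:-1, 1:-1, 1:-1].ravel()        # target CT numbers

        forest = RandomForestRegressor(n_estimators=20, n_jobs=-1, random_state=0)
        forest.fit(X, y)
        pseudo_ct = forest.predict(X)           # in practice: patches from a new patient's MR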

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, John Russell

    This grant funded the development and dissemination of the Photon Simulator (PhoSim) for the purpose of studying dark energy at high precision with the upcoming Large Synoptic Survey Telescope (LSST) astronomical survey. The work was in collaboration with the LSST Dark Energy Science Collaboration (DESC). Several detailed physics improvements were made in the optics, atmosphere, and sensor, a number of validation studies were performed, and a significant number of usability features were implemented. Future work in DESC will use PhoSim as the image simulation tool for data challenges used by the analysis groups.

  1. Sequential and simultaneous SLAR block adjustment. [Spline function analysis for mapping]

    NASA Technical Reports Server (NTRS)

    Leberl, F.

    1975-01-01

    Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.

  2. Pulse-wave propagation in straight-geometry vessels for stiffness estimation: theory, simulations, phantoms and in vitro findings.

    PubMed

    Shahmirzadi, Danial; Li, Ronny X; Konofagou, Elisa E

    2012-11-01

    Pulse wave imaging (PWI) is an ultrasound-based method for noninvasive characterization of arterial stiffness based on pulse wave propagation. Reliable numerical models of pulse wave propagation in normal and pathological aortas could serve as powerful tools for local pulse wave analysis and as a guideline for PWI measurements in vivo. The objectives of this paper are to (1) apply a fluid-structure interaction (FSI) simulation of a straight-geometry aorta to confirm the Moens-Korteweg relationship between the pulse wave velocity (PWV) and the wall modulus, and (2) validate the simulation findings against phantom and in vitro results. PWI depicted and tracked the pulse wave propagation along the abdominal wall of a canine aorta in vitro in sequential radio-frequency (RF) ultrasound frames and estimated the PWV in the imaged wall. The same system was also used to image multiple polyacrylamide phantoms, mimicking the canine measurements as well as modeling softer and stiffer walls. Finally, the model parameters from the canine and phantom studies were used to perform 3D two-way coupled FSI simulations of pulse wave propagation and estimate the PWV. The simulation results were found to correlate well with the corresponding Moens-Korteweg equation. A high linear correlation was also established between PWV² and E measurements using the combined simulation and experimental findings (R² = 0.98), confirming the relationship established by the aforementioned equation.
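
    For reference, the Moens-Korteweg relationship the simulations were checked against links the pulse wave velocity to the wall elastic modulus E, wall thickness h, fluid density \rho, and vessel radius R:

        PWV = \sqrt{ \frac{E h}{2 \rho R} }

    so PWV² is proportional to E, which is exactly the linear relation between PWV² and E reported above (R² = 0.98).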

  3. Application for internal dosimetry using biokinetic distribution of photons based on nuclear medicine images

    PubMed Central

    Leal Neto, Viriato; Vieira, José Wilson; Lima, Fernando Roberto de Andrade

    2014-01-01

    Objective This article presents a way to obtain dose estimates for patients undergoing radiotherapy, based on the analysis of regions of interest in nuclear medicine images. Materials and Methods A software package called DoRadIo (Dosimetria das Radiações Ionizantes [Ionizing Radiation Dosimetry]) was developed to receive information about source organs and target organs and to generate graphical and numerical results. The nuclear medicine images used in the present study were obtained from catalogs provided by medical physicists. The simulations were performed with computational exposure models consisting of voxel phantoms coupled with the EGSnrc Monte Carlo code. The software was developed with Microsoft Visual Studio 2010, using the Windows Presentation Foundation project template for the C# programming language. Results With these tools, the authors obtained the file for optimization of Monte Carlo simulations using EGSnrc; organization and compaction of dosimetry results for all radioactive sources; selection of regions of interest; evaluation of grayscale intensity in regions of interest; the weighted-sources file; and, finally, all the charts and numerical results. Conclusion The user interface may be adapted for use in clinical nuclear medicine as a computer-aided tool to estimate the administered activity.

  4. DynamiX, numerical tool for design of next-generation x-ray telescopes.

    PubMed

    Chauvin, Maxime; Roques, Jean-Pierre

    2010-07-20

    We present a new code aimed at the simulation of grazing-incidence x-ray telescopes subject to deformations and demonstrate its ability with two test cases: the Simbol-X and the International X-ray Observatory (IXO) missions. The code, based on Monte Carlo ray tracing, computes the full photon trajectories up to the detector plane, accounting for the x-ray interactions and for the telescope motion and deformation. The simulation produces images and spectra for any telescope configuration using Wolter I mirrors and semiconductor detectors. This numerical tool allows us to study the telescope performance in terms of angular resolution, effective area, and detector efficiency, accounting for the telescope behavior. We have implemented an image reconstruction method based on the measurement of the detector drifts by an optical sensor metrology. Using an accurate metrology, this method allows us to recover the loss of angular resolution induced by the telescope instability. In the framework of the Simbol-X mission, this code was used to study the impacts of the parameters on the telescope performance. In this paper we present detailed performance analysis of Simbol-X, taking into account the satellite motions and the image reconstruction. To illustrate the versatility of the code, we present an additional performance analysis with a particular configuration of IXO.

  5. Simulation-based training in echocardiography.

    PubMed

    Biswas, Monodeep; Patel, Rajendrakumar; German, Charles; Kharod, Anant; Mohamed, Ahmed; Dod, Harvinder S; Kapoor, Poonam Malhotra; Nanda, Navin C

    2016-10-01

    The knowledge gained from echocardiography is paramount for the clinician in diagnosing, interpreting, and treating various forms of disease. While cardiologists traditionally have undergone training in this imaging modality during their fellowship, many other specialties are beginning to show interest as well, including intensive care, anesthesia, and primary care, in both transesophageal and transthoracic echocardiography. Advances in technology have led to the development of simulation programs accessible to trainees to help them gain proficiency in the nuances of obtaining quality images, in a low-stress, pressure-free environment, often with a functioning ultrasound probe and a mannequin that can mimic many of the pathologies seen in living patients. Although the various training simulation programs each have their own benefits and drawbacks, it is clear that these programs are a powerful tool in educating trainees and will likely lead to improved patient outcomes.

  6. Dosimetry in MARS spectral CT: TOPAS Monte Carlo simulations and ion chamber measurements.

    PubMed

    Lu, Gray; Marsh, Steven; Damet, Jerome; Carbonez, Pierre; Laban, John; Bateman, Christopher; Butler, Anthony; Butler, Phil

    2017-06-01

    Spectral computed tomography (CT) is an emerging imaging modality that shows great promise in revealing unique diagnostic information. Because this modality is based on X-ray CT, it is of utmost importance to study the radiation dose aspects of its use. This study reports on the implementation and evaluation of a Monte Carlo simulation tool, using TOPAS, for estimating dose in a pre-clinical spectral CT scanner known as the MARS scanner. Simulated estimates were compared with measurements from an ionization chamber. For a typical MARS scan of a 30 mm diameter cylindrical phantom, TOPAS estimated a CT dose index (CTDI) of 29.7 mGy; ion chamber measurements of CTDI agreed with the TOPAS estimates to within 3%. Although further development is required, our investigation of TOPAS for estimating MARS scan dosimetry has shown its potential for further study of spectral scanning protocols and dose to scanned objects.
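
    For context, the CT dose index quoted above is conventionally defined from the dose profile D(z) along the scan axis; a standard form (the record does not state which variant was used for the MARS measurements) is

        CTDI_{100} = \frac{1}{n T} \int_{-50\,\mathrm{mm}}^{+50\,\mathrm{mm}} D(z)\, dz

    where n is the number of simultaneously acquired slices and T the nominal slice thickness, with the 100 mm integration limits matching the standard pencil ionization chamber length.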

  7. Optical design and simulation of a new coherence beamline at NSLS-II

    NASA Astrophysics Data System (ADS)

    Williams, Garth J.; Chubar, Oleg; Berman, Lonny; Chu, Yong S.; Robinson, Ian K.

    2017-08-01

    We discuss the optical design for a proposed beamline at NSLS-II, a late-third-generation storage ring source, designed to exploit the spatial coherence of the X-rays to extract high-resolution spatial information from ordered and disordered materials through Coherent Diffractive Imaging (CDI), executed in the Bragg- and forward-scattering geometries. This technique offers a powerful tool to image sub-10 nm spatial features and, within ordered materials, sub-Angstrom mapping of deformation fields. Driven by the opportunity to apply CDI to a wide range of samples, with sizes ranging from sub-micron to tens of microns, two optical designs have been proposed and simulated under a wide variety of optical configurations using the software package Synchrotron Radiation Workshop. The designs, their goals, and the simulated beamline performance, including NSLS-II ring and undulator source parameters, as a function of the variable optical components are described.

  8. 3-D FDTD simulation of shear waves for evaluation of complex modulus imaging.

    PubMed

    Orescanin, Marko; Wang, Yue; Insana, Michael

    2011-02-01

    The Navier equation describing shear wave propagation in 3-D viscoelastic media is solved numerically with a finite-difference time-domain (FDTD) method. Solutions are formed in terms of transverse scatterer velocity waves and then verified via comparison to measured wave fields in heterogeneous hydrogel phantoms. The numerical algorithm is used as a tool to study how wave propagation in heterogeneous viscoelastic media affects complex shear modulus estimation. We used an algebraic Helmholtz inversion (AHI) technique to solve for the complex shear modulus from simulated and experimental velocity data acquired in 2-D and 3-D. Although 3-D velocity estimates are required in general, there are object geometries for which 2-D inversions provide accurate estimations of the material properties. Through simulations and experiments, we explored artifacts generated in elastic and dynamic-viscous shear modulus images related to the shear wavelength and average viscosity.
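
    The algebraic Helmholtz inversion mentioned above has a compact closed form: for a time-harmonic shear field \hat{v}(\mathbf{x}, \omega) in locally homogeneous media of density \rho, the Navier equation reduces to a Helmholtz equation and the complex shear modulus follows pointwise as

        G(\omega) = -\frac{\rho \omega^2 \hat{v}(\mathbf{x}, \omega)}{\nabla^2 \hat{v}(\mathbf{x}, \omega)}

    Because the Laplacian is computed from measured data, the inversion is sensitive to noise and to whether all three spatial derivative directions are captured, which is what drives the 2-D versus 3-D acquisition artifacts studied in the paper.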

  9. Design and simulation of a sensor for heliostat field closed loop control

    NASA Astrophysics Data System (ADS)

    Collins, Mike; Potter, Daniel; Burton, Alex

    2017-06-01

    Significant research has been completed in pursuit of capital cost reductions for heliostats [1],[2]. The camera array closed loop control concept has the potential to radically alter the way heliostats are controlled and installed by replacing high quality open loop targeting systems with low quality targeting devices that rely on measurement of image position to remove tracking errors during operation. Although the system could be used for any heliostat size, it significantly benefits small heliostats by reducing actuation costs, enabling large numbers of heliostats to be calibrated simultaneously, and enabling calibration of heliostats that produce low irradiance on Lambertian calibration targets (images similar to or dimmer than ambient light), such as small heliostats that are far from the tower. A simulation method for the camera array has been designed and verified experimentally. The simulation tool demonstrates that closed loop calibration and control are possible using this device.

  10. Determination of female breast tumor and its parameter estimation by thermal simulation

    NASA Astrophysics Data System (ADS)

    Chen, Xin-guang; Xu, A.-qing; Yang, Hong-qin; Wang, Yu-hua; Xie, Shu-sen

    2010-02-01

    Thermal imaging is an emerging method for early detection of female breast tumors. The main challenge for thermal imaging in breast clinics lies in how to detect and locate the tumor and obtain its related parameters. The purpose of this study is to apply an improved method, which combines a genetic algorithm with finite element thermal analysis, to determine the breast tumor and its parameters, such as size, location, metabolic heat generation, and blood perfusion rate. A finite element model of a breast with an embedded tumor was used to investigate the temperature distribution, and the influences of tumor metabolic heat generation, tumor location, and tumor size on the temperature were then studied using the improved genetic algorithm. The results show that thermal imaging is a potentially effective detection tool for early breast tumors, and thermal simulation may be helpful for the interpretation of breast thermograms.
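
    The forward problem in such finite element thermal analyses is commonly the Pennes bioheat equation; the record does not name the governing equation, so take this as an assumption about the standard formulation:

        \rho c \frac{\partial T}{\partial t} = \nabla \cdot (k \nabla T) + \rho_b c_b \omega_b (T_a - T) + Q_m

    where Q_m is the metabolic heat generation and \omega_b the blood perfusion rate (the two tumor parameters the genetic algorithm estimates), T_a the arterial blood temperature, and k the tissue thermal conductivity.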

  11. High speed stereovision setup for position and motion estimation of fertilizer particles leaving a centrifugal spreader.

    PubMed

    Hijazi, Bilal; Cool, Simon; Vangeyte, Jürgen; Mertens, Koen C; Cointault, Frédéric; Paindavoine, Michel; Pieters, Jan G

    2014-11-13

    A 3D imaging technique using a high-speed binocular stereovision system was developed, in combination with corresponding image processing algorithms, for accurate determination of the parameters of particles leaving the spinning disks of centrifugal fertilizer spreaders. Validation of the stereo-matching algorithm using a virtual 3D stereovision simulator indicated an error of less than 2 pixels for 90% of the particles. The setup was validated using the cylindrical spread pattern of an experimental spreader. A 2D correlation coefficient of 90% and a relative error of 27% were found between the experimental results and the (simulated) spread pattern obtained with the developed setup. In combination with a ballistic flight model, the developed image acquisition and processing algorithms can enable fast determination and evaluation of the spread pattern, which can be used as a tool for spreader design and precise machine calibration.

  12. Validation results of satellite mock-up capturing experiment using nets

    NASA Astrophysics Data System (ADS)

    Medina, Alberto; Cercós, Lorenzo; Stefanescu, Raluca M.; Benvenuto, Riccardo; Pesce, Vincenzo; Marcon, Marco; Lavagna, Michèle; González, Iván; Rodríguez López, Nuria; Wormnes, Kjetil

    2017-05-01

    The PATENDER activity (Net parametric characterization and parabolic flight), funded by the European Space Agency (ESA) via its Clean Space initiative, aimed to validate a simulation tool for designing nets for capturing space debris. This validation was performed through a set of experiments under microgravity conditions in which a net was launched, capturing and wrapping a satellite mock-up. This paper presents the architecture of the thrown-net dynamics simulator together with the set-up of the deployment experiment and its trajectory reconstruction results on a parabolic flight (Novespace A-310, June 2015). The simulator has been implemented within the Blender framework in order to provide a highly configurable tool, able to reproduce different scenarios for Active Debris Removal missions. The experiment was performed over thirty parabolas, each offering around 22 s of zero-g conditions. A flexible meshed fabric structure (the net), ejected from a container and propelled by corner masses (the bullets) arranged around its circumference, was launched at different initial velocities and launch angles using a dedicated pneumatic mechanism (representing the chaser satellite) against a target mock-up (the target satellite). High-speed motion cameras recorded the experiment, allowing 3D reconstruction of the net motion. The net knots were coloured to allow post-processing of the images using colour segmentation, stereo matching, and iterative closest point (ICP) for knot tracking. The final objective of the activity was the validation of the net deployment and wrapping simulator using the images recorded during the parabolic flight. The high-resolution images acquired were post-processed to accurately determine the initial conditions and to generate the reference data (position and velocity of all knots of the net along its deployment and wrapping of the target mock-up) for the simulator validation. The simulator was configured according to the parabolic flight scenario and executed to generate the validation data. Both datasets were then compared according to different metrics in order to validate the PATENDER simulator.

  13. Image simulation for HardWare In the Loop simulation in EO domain

    NASA Astrophysics Data System (ADS)

    Cathala, Thierry; Latger, Jean

    2015-10-01

    Infrared cameras used as weapon subsystems for automatic guidance are key components of military carriers such as missiles. The associated image processing, which controls the navigation, needs to be assessed intensively. Experimentation in the real world is very expensive. This is the main reason why hybrid simulation, also called HardWare In the Loop (HWIL), is more and more required nowadays. In that field, IR projectors are able to cast IR photon fluxes directly onto the IR camera of a given weapon system, typically a missile seeker head. In the laboratory, the missile is thus stimulated exactly as in the real world, provided a realistic simulation tool can generate the synthetic images displayed by the IR projectors. The key technical challenge is to render the synthetic images at the required frequency. This paper focuses on OKTAL-SE's experience in this domain through its product SE-FAST-HWIL. It presents the methodology and return of experience from OKTAL-SE, with examples given in the frame of the SE-Workbench. The presentation focuses on trials on real operational complex 3D cases. In particular, three topics that are very sensitive with regard to image-generation performance are detailed: first, 3D sea-surface representation; then particle-system rendering, especially for simulating flares; and finally sensor-effects modelling. Beyond "projection mode", some information is also given on the new SE-FAST-HWIL capabilities dedicated to "injection mode".

  14. Buried Object Classification using a Sediment Volume Imaging SAS and Electromagnetic Gradiometer

    DTIC Science & Technology

    2006-09-01

    field data with simulated RTG data using AST’s in-house magnetic modeling tool EMAGINE. Given a set of input dipole moments, or parameters to...approximate a moment by assuming the object is a prolate ellipsoid shell, EMAGINE uses Green’s function formulations to generate three-component

  15. Operative simulation of anterior clinoidectomy using a rapid prototyping model molded by a three-dimensional printer.

    PubMed

    Okonogi, Shinichi; Kondo, Kosuke; Harada, Naoyuki; Masuda, Hiroyuki; Nemoto, Masaaki; Sugo, Nobuo

    2017-09-01

    As the anatomical three-dimensional (3D) positional relationships around the anterior clinoid process (ACP) are complex, experience of many surgeries is necessary to understand anterior clinoidectomy (AC). We prepared a 3D synthetic image from computed tomographic angiography (CTA) and magnetic resonance imaging (MRI) data, and a rapid prototyping (RP) model from the imaging data using a 3D printer. The objective of this study was to evaluate the anatomical reproduction of the 3D synthetic image and of the intraosseous region after AC in the RP model. In addition, the usefulness of the RP model for operative simulation was investigated. The subjects were 51 patients examined by CTA and MRI before surgery. The size of the ACP, the thickness and length of the optic nerve and artery, and the intraosseous length after AC were measured in the 3D synthetic image and the RP model, and reproducibility in the RP model was evaluated. In addition, 10 neurosurgeons performed AC on the completed RP models to investigate their usefulness for operative simulation. The RP model reproduced the region in the vicinity of the ACP in the 3D synthetic image, including the intraosseous region, with high accuracy. Drilling of the RP model was also a useful method for operative simulation of AC. The RP model of the vicinity of the ACP, prepared using a 3D printer, showed favorable anatomical reproducibility, including reproduction of the intraosseous region, and is useful as a surgical education tool for drilling.

  16. Faint source detection in ISOCAM images

    NASA Astrophysics Data System (ADS)

    Starck, J. L.; Aussel, H.; Elbaz, D.; Fadda, D.; Cesarsky, C.

    1999-08-01

    We present a tool adapted to the detection of faint mid-infrared sources within ISOCAM mosaics. This tool is based on a wavelet analysis which allows us to discriminate sources from cosmic ray impacts at the very limit of the instrument, four orders of magnitude below IRAS. It is called PRETI, for Pattern REcognition Technique for ISOCAM data, because glitches with transient behaviors are isolated in wavelet (i.e., frequency) space, where they present peculiar signatures in the form of patterns that are automatically identified and then reconstructed. We have tested PRETI with Monte-Carlo simulations of synthetic ISOCAM data. These simulations allowed us to define the fraction of remaining false sources due to cosmic rays, the sensitivity and completeness limits, as well as the photometric accuracy as a function of the observation parameters. Although the main scientific applications of this technique have appeared or will appear in separate papers, we present here an application to the ISOCAM Hubble Deep Field image. This work completes and confirms the results already published (Aussel et al. 1999).

  17. Magnetic resonance imaging of rodent spinal cord with an improved performance coil at 7 Tesla

    NASA Astrophysics Data System (ADS)

    Solis-Najera, S. E.; Rodriguez, A. O.

    2014-11-01

    Magnetic resonance imaging of animal models provides a reliable means to study human diseases. Image quality is largely determined by the radio frequency coil used to detect the signal emanating from the region of interest. A scaled-down version of the slotted surface coil was built, based on previous results with a magnetron-type surface coil for human applications. Our coil prototype had a 2 cm total diameter and six circular slots and was developed for murine spinal cord imaging at 7 T. Electromagnetic simulations of the slotted and circular coils were performed to compute the spatially dependent magnetic and electric fields using a simulated saline-solution sphere. The quality factor of both coils was measured experimentally; the slotted coil showed a lower noise figure and a higher quality factor, outperforming the circular coil. Images of the spinal cord of a rat were acquired using standard pulse sequences. The slotted surface coil can thus be a good tool for rat spinal cord imaging with conventional pulse sequences at 7 T.
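
    For reference, the quality factor compared above is typically measured from the coil's resonance frequency f_0 and the -3 dB bandwidth \Delta f of its frequency response:

        Q = \frac{f_0}{\Delta f}

    A narrower resonance (the proton frequency at 7 T is roughly 300 MHz) indicates lower coil losses and, correspondingly, the lower noise figure reported for the slotted design.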

  18. Point-cloud-to-point-cloud technique on tool calibration for dental implant surgical path tracking

    NASA Astrophysics Data System (ADS)

    Lorsakul, Auranuch; Suthakorn, Jackrit; Sinthanayothin, Chanjira

    2008-03-01

    Dental implantation is one of the most popular methods of tooth root replacement used in prosthetic dentistry. Computerized navigation based on a pre-surgical plan is offered to minimize the potential risk of damage to critical anatomic structures of patients. Calibration of the dental tool tip is an essential step of intraoperative surgery: it determines the relation between the hand-piece tool tip and the hand-piece markers. When transferring coordinates from preoperative CT data to the physical patient, this relation is one component of the typical registration problem, and it is part of a navigation system to be developed for further integration. High accuracy is required, and the relation is obtained by point-cloud-to-point-cloud rigid transformations, using singular value decomposition (SVD) to minimize rigid registration errors. In earlier studies, commercial surgical navigation systems, such as those from BrainLAB and Materialize, have shown limited flexibility in tool tip calibration: they either require a special tool tip calibration device or cannot accommodate a change of tool. The proposed procedure uses the pointing device or hand-piece to touch a pivot point; the transformation matrix is recalculated each time the hand-piece moves to a new orientation while the tool tip stays at the same point. The experiment relied on tracking-device information, image acquisition, and image processing algorithms. The key result is that the point-cloud-to-point-cloud method requires only three pose images of the tool to converge to the minimum error of 0.77%, and the obtained result is accurate enough for using the tool holder to track the simulated path line displayed in the graphic animation.
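
    A minimal sketch of the SVD-based point-cloud-to-point-cloud rigid registration underlying this procedure (the classical Arun/Kabsch solution; the random test data are illustrative, not from the paper):

        import numpy as np

        def rigid_register(P, Q):
            """Least-squares R, t such that Q ≈ P @ R.T + t, via SVD."""
            p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
            H = (P - p_mean).T @ (Q - q_mean)                    # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
            R = Vt.T @ D @ U.T
            t = q_mean - R @ p_mean
            return R, t

        rng = np.random.default_rng(1)
        P = rng.random((10, 3))                    # marker positions, pose A
        a = 0.3                                    # known test rotation angle
        R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                           [np.sin(a),  np.cos(a), 0.0],
                           [0.0,        0.0,       1.0]])
        Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
        R_est, t_est = rigid_register(P, Q)
        print(np.allclose(R_est, R_true))          # True

    In the pivot-calibration setting, the same machinery relates the marker frames tracked at each hand-piece orientation while the tip stays fixed, from which the fixed tip offset can be solved in a least-squares sense.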

  19. SOFT: a synthetic synchrotron diagnostic for runaway electrons

    NASA Astrophysics Data System (ADS)

    Hoppe, M.; Embréus, O.; Tinguely, R. A.; Granetz, R. S.; Stahl, A.; Fülöp, T.

    2018-02-01

    Improved understanding of the dynamics of runaway electrons can be obtained by measurement and interpretation of their synchrotron radiation emission. Models for synchrotron radiation emitted by relativistic electrons are well established, but the question of how various geometric effects—such as magnetic field inhomogeneity and camera placement—influence the synchrotron measurements and their interpretation remains open. In this paper we address this issue by simulating synchrotron images and spectra using the new synthetic synchrotron diagnostic tool SOFT (Synchrotron-detecting Orbit Following Toolkit). We identify the key parameters influencing the synchrotron radiation spot and present scans in those parameters. Using a runaway electron distribution function obtained by Fokker-Planck simulations for parameters from an Alcator C-Mod discharge, we demonstrate that the corresponding synchrotron image is well-reproduced by SOFT simulations, and we explain how it can be understood in terms of the parameter scans. Geometric effects are shown to significantly influence the synchrotron spectrum, and we show that inherent inconsistencies in a simple emission model (i.e. not modeling detection) can lead to incorrect interpretation of the images.

  20. Visual acuity estimation from simulated images

    NASA Astrophysics Data System (ADS)

    Duncan, William J.

    Simulated images can provide insight into the performance of optical systems, especially those with complicated features. Many modern solutions for presbyopia and cataracts feature sophisticated power geometries or diffractive elements. Some intraocular lenses (IOLs) achieve multifocality through the use of a diffractive surface, and multifocal contact lenses have a radially varying power profile. These types of elements induce simultaneous vision and affect vision quite differently from a monofocal ophthalmic appliance. With myriad multifocal ophthalmics available on the market, it is difficult to compare or assess performance in ways that affect wearers of such appliances. Here we present software and algorithmic metrics that can be used to qualitatively and quantitatively compare ophthalmic element performance, with specific examples of bifocal IOLs and multifocal contact lenses. We anticipate that this study, its methods, and its results will serve as a starting point for more complex models of vision and visual acuity in settings where modeling is advantageous. Generating simulated images of real-scene scenarios is useful for patients in assessing vision quality with a certain appliance, and visual acuity estimation can serve as an important tool for the manufacturing and design of ophthalmic appliances.

  1. Validation of columnar CsI x-ray detector responses obtained with hybridMANTIS, a CPU-GPU Monte Carlo code for coupled x-ray, electron, and optical transport.

    PubMed

    Sharma, Diksha; Badano, Aldo

    2013-03-01

    hybridMANTIS is a Monte Carlo package for modeling indirect x-ray imagers using columnar geometry, based on a hybrid concept that maximizes the utilization of the available CPU and graphics processing unit processors in a workstation. The authors compare hybridMANTIS x-ray response simulations to previously published MANTIS and experimental data for four cesium iodide scintillator screens. These screens have a variety of reflective and absorptive surfaces and different thicknesses. The authors analyze hybridMANTIS results in terms of the modulation transfer function and calculate root mean square differences and Swank factors from the simulated and experimental results. The comparison suggests that hybridMANTIS matches the experimental data better than MANTIS, especially at high spatial frequencies and for the thicker screens. hybridMANTIS simulations are much faster than MANTIS, with speed-ups up to 5260. hybridMANTIS is a useful tool for improved description and optimization of image acquisition stages in medical imaging systems and for modeling the forward problem in iterative reconstruction algorithms.
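
    For reference, the Swank factor reported above is computed from the moments of the pulse-height distribution p(k), the probability that an absorbed x ray yields k detected optical quanta:

        A_S = \frac{M_1^2}{M_0 M_2}, \qquad M_n = \sum_k k^n \, p(k)

    Values closer to 1 indicate a narrower gain distribution and hence less Swank noise in the imager.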

  2. Simulation of the fading of photographic three-color materials: a new tool for the preservation field

    NASA Astrophysics Data System (ADS)

    Frey, Franziska S.; Gschwind, Rudolf; Reilly, James M.

    1995-04-01

    Photography and motion pictures play an important role in our society as information carriers, artistic media, and historical documents, representing cultural values that have to be preserved. Emerging electronic imaging techniques help in developing new methods to accomplish this goal. The dyes of common photographic three-color materials are chemically rather unstable: both their thermodynamic and their photochemical stability are low. As a result, millions of photographs and thousands of films deteriorate if not preserved and stored under optimal conditions. It is of great interest to curators of museums that house photographic or cinematographic collections to simulate and visualize the fading process. A multimedia production including images and further information offers a direct and convincing way to demonstrate the different effects of various storage alternatives on dye loss. This project is an example of an interdisciplinary approach that includes photography, conservation, and computer science. The simulation program used for the creation of the faded images is based on algorithms developed for the reconstruction of faded color photographic materials.

  3. GERLUMPH Data Release 2: 2.5 Billion Simulated Microlensing Light Curves

    NASA Astrophysics Data System (ADS)

    Vernardos, G.; Fluke, C. J.; Bate, N. F.; Croton, D.; Vohl, D.

    2015-04-01

    In the upcoming synoptic all-sky survey era of astronomy, thousands of new multiply imaged quasars are expected to be discovered and monitored regularly. Light curves from the images of gravitationally lensed quasars are further affected by superimposed variability due to microlensing. In order to disentangle the microlensing from the intrinsic variability of the light curves, the time delays between the multiple images have to be accurately measured. The resulting microlensing light curves can then be analyzed to reveal information about the background source, such as the size of the quasar accretion disk. In this paper we present the most extensive and coherent collection of simulated microlensing light curves; we have generated > 2.5 billion light curves using the GERLUMPH high resolution microlensing magnification maps. Our simulations can be used to train algorithms to measure lensed quasar time delays, plan future monitoring campaigns, and study light curve properties throughout parameter space. Our data are openly available to the community and are complemented by online eResearch tools, located at http://gerlumph.swin.edu.au.

  4. Analysis and numerical simulation of an aircraft icing episode near Adolfo Suárez Madrid-Barajas International Airport

    NASA Astrophysics Data System (ADS)

    Bolgiani, Pedro; Fernández-González, Sergio; Martin, María Luisa; Valero, Francisco; Merino, Andrés; García-Ortega, Eduardo; Sánchez, José Luis

    2018-02-01

    Aircraft icing is one of the most dangerous weather phenomena in aviation security. Therefore, avoiding areas with high probability of icing episodes along arrival and departure routes to airports is strongly recommended. Although such icing is common, forecasting and observation are far from perfect. This paper presents an analysis of an aircraft icing and turbulence event involving a commercial flight near the Guadarrama Mountains, during the aircraft approach to the airport. No reference to icing or turbulence was made in the pre-flight meteorological information provided to the pilot, highlighting the need for additional tools to predict such risks. For this reason, the icing episode is simulated by means of the Weather Research and Forecasting (WRF) model and analyzed using images from the Meteosat Second Generation (MSG) satellite, with the aim of providing tools for the detection of icing and turbulence in the airport vicinity. The WRF simulation shows alternating updrafts and downdrafts (> 2 m s⁻¹) on the lee side of the mountain barrier. This is consonant with moderate to strong turbulence experienced by the aircraft on its approach path to the airport and suggests clear air turbulence above the mountain wave cloud top. At the aircraft icing altitude, supercooled liquid water associated with orographic clouds and mountain waves is simulated. Daytime and nighttime MSG images corroborated the simulated mountain waves and associated supercooled liquid water. The results encourage the use of mesoscale models and MSG nowcasting information to minimize aviation risks associated with such meteorological phenomena.

  5. Colour for Behavioural Success.

    PubMed

    Dresp-Langley, Birgitta; Reeves, Adam

    2018-01-01

    Colour information not only helps sustain the survival of animal species by guiding sexual selection and foraging behaviour but also is an important factor in the cultural and technological development of our own species. This is illustrated by examples from the visual arts and from state-of-the-art imaging technology, where the strategic use of colour has become a powerful tool for guiding the planning and execution of interventional procedures. The functional role of colour information in terms of its potential benefits to behavioural success across the species is addressed in the introduction here to clarify why colour perception may have evolved to generate behavioural success. It is argued that evolutionary and environmental pressures influence not only colour trait production in the different species but also their ability to process and exploit colour information for goal-specific purposes. We then leap straight to the human primate with insight from current research on the facilitating role of colour cues on performance training with precision technology for image-guided surgical planning and intervention. It is shown that local colour cues in two-dimensional images generated by a surgical fisheye camera help individuals become more precise rapidly across a limited number of trial sets in simulator training for specific manual gestures with a tool. This facilitating effect of a local colour cue on performance evolution in a video-controlled simulator (pick-and-place) task can be explained in terms of colour-based figure-ground segregation facilitating attention to local image parts when more than two layers of subjective surface depth are present, as in all natural and surgical images.

  7. Physics-based interactive volume manipulation for sharing surgical process.

    PubMed

    Nakao, Megumi; Minato, Kotaro

    2010-05-01

    This paper presents a new set of techniques by which surgeons can interactively manipulate patient-specific volumetric models in order to share the surgical process. To handle physical interaction between surgical tools and organs, we propose a simple surface-constraint-based manipulation algorithm that consistently simulates common surgical manipulations such as grasping, holding and retraction. Our computational model is capable of simulating soft-tissue deformation and incision in real time. We also present visualization techniques to rapidly display time-varying volumetric information on the deformed image. This paper demonstrates the success of the proposed methods in enabling the simulation of surgical processes, and the ways in which this simulation facilitates preoperative planning and rehearsal.

  8. Mining MaNGA for Merging Galaxies: A New Imaging and Kinematic Technique from Hydrodynamical Simulations

    NASA Astrophysics Data System (ADS)

    Nevin, Becky; Comerford, Julia M.; Blecha, Laura

    2018-06-01

    Merging galaxies play a key role in galaxy evolution, and progress in our understanding of galaxy evolution is slowed by the difficulty of making accurate galaxy merger identifications. Mergers are typically identified using imaging alone, which has its limitations and biases. With the growing popularity of integral field spectroscopy (IFS), it is now possible to use kinematic signatures to improve galaxy merger identifications. I use GADGET-3 hydrodynamical simulations of merging galaxies with the radiative transfer code SUNRISE, the latter of which enables me to apply the same analysis to simulations and observations. From the simulated galaxies, I have developed the first merging galaxy classification scheme that is based on kinematics and imaging. Utilizing a Linear Discriminant Analysis tool, I have determined which kinematic and imaging predictors are most useful for identifying mergers across various merger parameters (such as orientation, mass ratio, gas fraction, and merger stage). I will discuss the strengths and limitations of the classification technique and then my initial results for applying the classification to the >10,000 observed galaxies in the MaNGA (Mapping Nearby Galaxies at Apache Point) IFS survey. Through accurate identification of merging galaxies in the MaNGA survey, I will advance our understanding of supermassive black hole growth in galaxy mergers and other open questions related to galaxy evolution.
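
    The classification step named above, Linear Discriminant Analysis over imaging and kinematic predictors, can be sketched with scikit-learn; the predictors and values below are synthetic placeholders, not the MaNGA feature set.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      # rows = galaxy snapshots, columns = imaging/kinematic predictors
      # (e.g. asymmetry, Gini coefficient, velocity-map residuals)
      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0.0, 1.0, (200, 4)),    # non-mergers
                     rng.normal(0.8, 1.0, (200, 4))])   # mergers
      y = np.repeat([0, 1], 200)

      lda = LinearDiscriminantAnalysis().fit(X, y)
      print("predictor weights:", lda.coef_)            # which predictors matter
      print("merger probability:", lda.predict_proba(X[:3])[:, 1])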

  9. Monte Carlo simulation of secondary electron images for gold nanorods on the silicon substrate

    NASA Astrophysics Data System (ADS)

    Zhang, P.

    2018-06-01

    Recently, gold nanorods (Au NRs) have attracted much attention because their photoelectric properties differ from those of other Au nanomaterials with various shapes. Accurate measurement of the aspect ratio is important because it largely determines the optical properties of Au NRs. Monte Carlo (MC) simulation is regarded as the most accurate tool for performing size measurement, by extracting structure parameters from the simulated scanning electron microscopy (SEM) image that best matches the experimental one. In this article, a series of MC-simulated secondary electron (SE) images were generated for Au NRs on a silicon substrate. It has been observed that the two ends of Au NRs in experimental SEM images are brighter than the middle part, which seriously affects the accuracy of size measurement for Au NRs. The purpose of this work is to understand the mechanism underlying this phenomenon through systematic analysis. It was found that the cetyltrimethylammonium bromide (CTAB) covering the Au NRs can indeed alter their contrast compared to NRs without a CTAB coating; however, SEs emitted from the CTAB are not the reason for the abnormal brightness at the two ends of the NRs. This work reveals that the charging effect is most likely the leading cause of this phenomenon.

  10. Docker Container Manager: A Simple Toolkit for Isolated Work with Shared Computational, Storage, and Network Resources

    NASA Astrophysics Data System (ADS)

    Polyakov, S. P.; Kryukov, A. P.; Demichev, A. P.

    2018-01-01

    We present a simple set of command line interface tools called Docker Container Manager (DCM) that allow users to create and manage Docker containers with preconfigured SSH access, while keeping the users isolated from each other and restricting their access to Docker features that could potentially disrupt the work of the server. Users access the DCM server via SSH and are automatically redirected to the DCM interface tool. From there, they can create new containers; stop, restart, pause, unpause, and remove containers; and view the status of existing containers. By default, the containers are also accessible via SSH using the same private key(s) but through different server ports. Additional publicly available ports can be mapped to the respective ports of a container, allowing some network services to be run within it. The containers are started from read-only filesystem images. Some initial images must be provided by the DCM server administrators, and after containers are configured to meet one's needs, the changes can be saved as new images. Users can see the available images and remove their own images. DCM server administrators are provided with commands to create and delete users. All commands were implemented as Python scripts. The tools make it possible to deploy and debug medium-sized distributed simulation systems for different fields on one or several local computers.
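
    DCM's source is not reproduced in the record, but the isolation pattern it describes (an SSH forced command that maps user requests onto a whitelist of docker operations scoped to the user's own containers) can be sketched as follows; the command set, name-prefix scheme, and function names are assumptions for illustration, not the actual DCM code.

      import shlex
      import subprocess
      import sys

      ALLOWED = {"create", "stop", "restart", "pause", "unpause", "rm", "ps"}

      def dcm_shell(user, request):
          """Forced-command entry point: whitelist the operation and scope
          it to the user's own containers via a name prefix."""
          args = shlex.split(request)
          if not args or args[0] not in ALLOWED:
              sys.exit("allowed commands: %s" % sorted(ALLOWED))
          if args[0] == "create":
              name, image = args[1], args[2]   # image must be admin-provided
              cmd = ["docker", "run", "-d", "--name", f"{user}-{name}", image]
          elif args[0] == "ps":
              cmd = ["docker", "ps", "--filter", f"name={user}-"]
          else:
              cmd = ["docker", args[0]] + [f"{user}-{n}" for n in args[1:]]
          subprocess.run(cmd, check=True)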

  11. A novel teaching tool using dynamic cues improves visualisation of chest lesions by naive observers

    NASA Astrophysics Data System (ADS)

    Mohamed Ali, M. A.; Toomey, R. J.; Ryan, J. T.; Cuffe, F. C.; Brennan, P. C.

    2009-02-01

    Introduction: Dynamic cueing is an effective way of stimulating perception of regions of interest within radiological images. This study explores the impact of a novel teaching tool using dynamic cueing for lesion detection on plain chest radiographs. Materials and methods: Observer performance studies were carried out in which 36 novices examined 30 chest images in random order; half of the images contained between one and three simulated pulmonary nodules. Three groups were investigated: A (control: no teaching tool), B (retested immediately after undergoing the teaching tool) and C (retested a week after undergoing the teaching tool). The teaching tool involved dynamically displaying the same images with and without lesions. Results were compared using Receiver Operating Characteristic (ROC), sensitivity and specificity analyses. Results: The second reading showed a significantly greater area under the ROC curve (Az value) (p < 0.0001) and a higher sensitivity (p = 0.004) than the first reading for Group B. No differences between readings were demonstrated for groups A or C. When the magnitudes of these changes were compared between Group B and the other two groups, greater changes in Az value for Group B were noted (B vs. A: p = 0.0003; B vs. C: p = 0.0005). For sensitivity, the magnitude of the change for Group B was significantly greater than for Group A (p = 0.0029), whereas compared to Group C it approached significance (p = 0.0768). Conclusions: The novel teaching tool improves identification of pulmonary nodular lesions on chest radiographs in the short term.

  12. PACS-based interface for 3D anatomical structure visualization and surgical planning

    NASA Astrophysics Data System (ADS)

    Koehl, Christophe; Soler, Luc; Marescaux, Jacques

    2002-05-01

    The interpretation of radiological images is routine but remains a rather difficult task for physicians. It requires complex mental processes to translate 2D slices into the 3D localization and volume determination of visible diseases. Easier and more extensive visualization and exploitation of medical images can be achieved through computer-based systems that provide real help from patient admission to post-operative follow-up. To this end, we have developed a 3D visualization interface linked to a PACS database that allows manipulation of, and interaction with, virtual organs delineated from CT or MRI scans. This software provides real-time 3D surface rendering of anatomical structures, accurate evaluation of volumes and distances, and improved radiological image analysis and exam annotation through a negatoscope tool. It also provides a surgical planning tool allowing the positioning of an interactive laparoscopic instrument and organ resection. The software system could revolutionize the field of computerized imaging technology: it provides a handy and portable tool for pre-operative and intra-operative analysis of anatomy and pathology in various medical fields, and constitutes the first step in the future development of augmented reality and surgical simulation systems.

  13. Simulation of computed tomography dose based on voxel phantom

    NASA Astrophysics Data System (ADS)

    Liu, Chunyu; Lv, Xiangbo; Li, Zhaojun

    2017-01-01

    Computed Tomography (CT) is one of the preferred and most valuable imaging tools used in diagnostic radiology, providing high-quality cross-sectional images of the body. It nevertheless delivers higher radiation doses to patients compared to other radiological procedures. The Monte Carlo method is appropriate for estimating the radiation dose during CT examinations. A simulation of the Computed Tomography Dose Index (CTDI) phantom was developed in this paper. Under conditions similar to those used in physical measurements, dose profiles were calculated and compared against reported measured values. The results demonstrate good agreement between the calculated and measured doses. In simulations of different CT exams using the voxel phantom, the highest absorbed doses were recorded for the lung, the brain, and the bone surface. A comparison between the different scan types shows that the effective dose for a chest scan is the highest, the effective doses for abdomen and pelvis scans are very close, and the lowest effective dose results from the head scan. Although the dose in CT depends on various parameters, such as tube current, exposure time, beam energy, slice thickness and patient size, this study demonstrates that MC simulation is a useful tool to accurately estimate the dose delivered to specific organs of patients undergoing CT exams, and can also be a valuable technique for the design and optimization of the CT x-ray source.
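
    As a toy illustration of the Monte Carlo dose estimation the abstract relies on, the sketch below transports photons through a homogeneous 1D water-equivalent voxel column, sampling free paths from the exponential attenuation law and depositing all energy locally (no scatter, no energy spectrum); every number is illustrative, not a clinical value.

      import numpy as np

      rng = np.random.default_rng(42)
      mu = 0.2                       # 1/cm, roughly water at CT energies
      voxel_cm = 0.1
      dose = np.zeros(300)           # energy deposited per voxel (a.u.)

      for _ in range(100_000):       # photon histories
          depth_cm = rng.exponential(1.0 / mu)     # sampled free path
          i = int(depth_cm / voxel_cm)
          if i < dose.size:
              dose[i] += 1.0         # local deposition, scatter ignored

      dose /= dose.max()             # relative depth-dose profile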

  14. Low-power, high-speed 1-bit inexact Full Adder cell designs applicable to low-energy image processing

    NASA Astrophysics Data System (ADS)

    Zareei, Zahra; Navi, Keivan; Keshavarziyan, Peiman

    2018-03-01

    In this paper, three novel low-power and high-speed 1-bit inexact Full Adder cell designs are presented, based on current mode logic in 32 nm carbon nanotube field effect transistor technology for the first time. The circuit-level figures of merit, i.e. power, delay and power-delay product, as well as an application-level metric, error distance, are considered to assess the efficiency of the proposed cells over their counterparts. The effect of voltage scaling and temperature variation on the proposed cells is studied using the HSPICE tool. Moreover, using MATLAB, the peak signal-to-noise ratio of the proposed cells is evaluated in an image-processing application, a motion detector. Simulation results confirm the efficiency of the proposed cells.
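
    The application-level metrics named above are easy to make concrete. The short sketch below pairs a toy truncation-based inexact adder with error distance and PSNR; the adder is a generic stand-in, not the paper's CNTFET design.

      import numpy as np

      def inexact_add(a, b, k=2):
          """Toy inexact adder: exact sum with the k least-significant
          bits truncated, a common approximation strategy."""
          return ((a + b) >> k) << k

      def psnr(ref, test):
          mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
          return 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")

      rng = np.random.default_rng(0)
      f0 = rng.integers(0, 256, (64, 64))
      f1 = rng.integers(0, 256, (64, 64))
      error_distance = np.abs((f0 + f1) - inexact_add(f0, f1))
      print("max error distance:", error_distance.max())   # 2**k - 1
      print("PSNR:", psnr((f0 + f1) // 2, inexact_add(f0, f1) // 2))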

  15. Dynamic heart phantom with functional mitral and aortic valves

    NASA Astrophysics Data System (ADS)

    Vannelli, Claire; Moore, John; McLeod, Jonathan; Ceh, Dennis; Peters, Terry

    2015-03-01

    Cardiac valvular stenosis, prolapse and regurgitation are increasingly common conditions, particularly in an elderly population with limited potential for on-pump cardiac surgery. NeoChord©, MitraClip, and numerous stent-based transcatheter aortic valve implantation (TAVI) devices provide an alternative to intrusive cardiac operations; performed while the heart is beating, these procedures require surgeons and cardiologists to learn new image-guidance-based techniques. Developing these visual aids and protocols is a challenging task that benefits from sophisticated simulators. Existing models lack features needed to simulate off-pump valvular procedures: functional, dynamic valves; apical and vascular access; and user flexibility for different activation patterns such as variable heart rates and rapid pacing. We present a left ventricle phantom with these characteristics. The phantom can be used to simulate valvular repair and replacement procedures with magnetic tracking, augmented reality, fluoroscopy and ultrasound guidance. This tool serves as a platform to develop the image-guidance and image-processing techniques required for a range of minimally invasive cardiac interventions. The phantom mimics in vivo mitral and aortic valve motion, permitting realistic ultrasound images of these components to be acquired. It also has a physiologically realistic left ventricular ejection fraction of 50%. Given its realistic imaging properties and non-biodegradable composition (silicone for tissue, water for blood), the system promises to reduce the number of animal trials required to develop image guidance applications for valvular repair and replacement. The phantom has been used in validation studies for both TAVI image-guidance techniques [1] and image-based mitral valve tracking algorithms [2].

  16. Evaluating the Usefulness of High-Temporal Resolution Vegetation Indices to Identify Crop Types

    NASA Astrophysics Data System (ADS)

    Hilbert, K.; Lewis, D.; O'Hara, C. G.

    2006-12-01

    The National Aeronautics and Space Administration (NASA) and the United States Department of Agriculture (USDA) jointly sponsored research covering the 2004 to 2006 South American crop seasons that focused on developing methods for the USDA Foreign Agricultural Service's (FAS) Production Estimates and Crop Assessment Division (PECAD) to identify crop types using MODIS-derived, hyper-temporal Normalized Difference Vegetation Index (NDVI) images. NDVI images were composited at 8-day intervals from daily NDVI images and aggregated to create a hyper-temporal NDVI layerstack, which was used as input to image classification algorithms. Research results indicated that creating high-temporal-resolution NDVI composites from NASA's MODerate Resolution Imaging Spectroradiometer (MODIS) data products provides useful input to crop type classifications, as well as potentially useful input for regional crop productivity modeling efforts. A current NASA-sponsored Rapid Prototyping Capability (RPC) experiment will assess the utility of simulated future Visible Infrared Imager/Radiometer Suite (VIIRS) imagery for conducting NDVI-derived land cover and specific crop type classifications. In the experiment, methods will be considered to refine current MODIS data streams, reduce the noise content of the MODIS data, and utilize the MODIS data as an input to the VIIRS simulation process. The effort is also being conducted in concert with an ISS project that will further evaluate, verify and validate the usefulness of specific data products to provide remote-sensing-derived input for the Sinclair Model, a semi-mechanistic model for estimating crop yield. The study area encompasses a large portion of the Pampas region of Argentina, a major world producer of crops such as corn, soybeans, and wheat, which makes it a competitor to the US. ITD partnered with researchers at the Center for Surveying Agricultural and Natural Resources (CREAN) of the National University of Cordoba, Argentina, and CREAN personnel collected and continue to collect field-level, GIS-based in situ information. Current efforts involve both developing and optimizing software tools for the necessary data processing. The software includes the Time Series Product Tool (TSPT), Leica's ERDAS Imagine, and Mississippi State University's Temporal Map Algebra computational tools.
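
    The compositing-and-stacking step is simple to prototype: compute NDVI per 8-day composite and stack the composites so each pixel becomes a seasonal trajectory vector for the classifier. A short numpy sketch with random stand-ins for the MODIS bands:

      import numpy as np

      def ndvi(nir, red):
          """Normalized Difference Vegetation Index for one composite."""
          return (nir - red) / (nir + red + 1e-9)

      nir = np.random.rand(46, 128, 128)   # 46 eight-day composites/year
      red = np.random.rand(46, 128, 128)
      stack = ndvi(nir, red)               # hyper-temporal NDVI layerstack
      features = stack.reshape(46, -1).T   # (n_pixels, 46) classifier input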

  17. Regression Models for Identifying Noise Sources in Magnetic Resonance Images

    PubMed Central

    Zhu, Hongtu; Li, Yimei; Ibrahim, Joseph G.; Shi, Xiaoyan; An, Hongyu; Chen, Yashen; Gao, Wei; Lin, Weili; Rowe, Daniel B.; Peterson, Bradley S.

    2009-01-01

    Stochastic noise, susceptibility artifacts, magnetic field and radiofrequency inhomogeneities, and other noise components in magnetic resonance images (MRIs) can introduce serious bias into any measurements made with those images. We formally introduce three regression models including a Rician regression model and two associated normal models to characterize stochastic noise in various magnetic resonance imaging modalities, including diffusion-weighted imaging (DWI) and functional MRI (fMRI). Estimation algorithms are introduced to maximize the likelihood function of the three regression models. We also develop a diagnostic procedure for systematically exploring MR images to identify noise components other than simple stochastic noise, and to detect discrepancies between the fitted regression models and MRI data. The diagnostic procedure includes goodness-of-fit statistics, measures of influence, and tools for graphical display. The goodness-of-fit statistics can assess the key assumptions of the three regression models, whereas measures of influence can isolate outliers caused by certain noise components, including motion artifacts. The tools for graphical display permit graphical visualization of the values for the goodness-of-fit statistic and influence measures. Finally, we conduct simulation studies to evaluate performance of these methods, and we analyze a real dataset to illustrate how our diagnostic procedure localizes subtle image artifacts by detecting intravoxel variability that is not captured by the regression models. PMID:19890478
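
    The Rician density at the heart of the first model is p(m) = (m/σ²) exp(-(m² + A²)/(2σ²)) I₀(mA/σ²). A single-voxel maximum-likelihood fit under that density can be sketched as below (a toy, not the paper's full regression machinery; i0e is used for numerical stability since log I₀(x) = x + log i0e(x) for x ≥ 0).

      import numpy as np
      from scipy.special import i0e
      from scipy.optimize import minimize

      def rician_nll(params, m):
          """Negative log-likelihood of magnitude data m for signal A,
          noise sigma under the Rician model."""
          A, sigma = params
          if A < 0 or sigma <= 0:
              return np.inf
          s2 = sigma ** 2
          x = m * A / s2
          ll = np.log(m / s2) - (m**2 + A**2) / (2 * s2) + x + np.log(i0e(x))
          return -ll.sum()

      rng = np.random.default_rng(0)
      m = np.abs(5.0 + rng.normal(0, 1.0, 5000)          # A_true = 5
                 + 1j * rng.normal(0, 1.0, 5000))        # sigma_true = 1
      fit = minimize(rician_nll, [m.mean(), m.std()], args=(m,),
                     method="Nelder-Mead")
      print(fit.x)    # close to (5.0, 1.0)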

  18. The Sky is for Everyone — Outreach and Education with the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Freistetter, F.; Iafrate, G.; Ramella, M.; Aida-Wp5 Team

    2010-12-01

    The Virtual Observatory (VO) is an international project to collect astronomical data (images, spectra, simulations, mission-logs, etc.), organise them and develop tools that let astronomers access this huge amount of information. The VO not only simplifies the work of professional astronomers, it is also a valuable tool for education and public outreach. For teachers and astronomers who actively promote astronomy to the public, the VO is a great opportunity to access and use real astronomical data, and have a taste of the daily life of astronomers.

  19. Towards an acoustic model-based poroelastic imaging method: I. Theoretical foundation.

    PubMed

    Berry, Gearóid P; Bamber, Jeffrey C; Armstrong, Cecil G; Miller, Naomi R; Barbone, Paul E

    2006-04-01

    The ultrasonic measurement and imaging of tissue elasticity is currently under wide investigation and development as a clinical tool for the assessment of a broad range of diseases, but little account in this field has yet been taken of the fact that soft tissue is porous and contains mobile fluid. The ability to squeeze fluid out of tissue may have implications for conventional elasticity imaging, and may present opportunities for new investigative tools. When a homogeneous, isotropic, fluid-saturated poroelastic material with a linearly elastic solid phase and incompressible solid and fluid constituents is subjected to stress, the behaviour of the induced internal strain field is influenced by three material constants: the Young's modulus E_s and Poisson's ratio ν_s of the solid matrix and the permeability k of the solid matrix to the pore fluid. New analytical expressions were derived and used to model the time-dependent behaviour of the strain field inside simulated homogeneous cylindrical samples of such a poroelastic material undergoing sustained unconfined compression. A model-based reconstruction technique was developed to produce images of parameters related to the poroelastic material constants (E_s, ν_s, k) from a comparison of the measured and predicted time-dependent, spatially varying radial strain. Tests of the method using simulated noisy strain data showed that it is capable of producing three unique parametric images: an image of the Poisson's ratio of the solid matrix, an image of the axial strain (which was not time-dependent subsequent to the application of the compression) and an image representing the product of the aggregate modulus E_s(1 - ν_s)/[(1 + ν_s)(1 - 2ν_s)] of the solid matrix and the permeability of the solid matrix to the pore fluid. The analytical expressions were further used to numerically validate a finite element model and to clarify previous work on poroelastography.
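
    For reference, the aggregate (confined) modulus that appears in the third parametric image is a one-line computation; together with the permeability k it sets the consolidation time scale of the sample.

      def aggregate_modulus(E_s, nu_s):
          """H_A = E_s (1 - nu_s) / [(1 + nu_s)(1 - 2 nu_s)]."""
          return E_s * (1 - nu_s) / ((1 + nu_s) * (1 - 2 * nu_s))

      print(aggregate_modulus(10e3, 0.3))   # ~13.46 kPa for E_s = 10 kPa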

  20. Building cell models and simulations from microscope images.

    PubMed

    Murphy, Robert F

    2016-03-01

    The use of fluorescence microscopy has undergone a major revolution over the past twenty years, both with the development of dramatic new technologies and with the widespread adoption of image analysis and machine learning methods. Many open source software tools provide the ability to use these methods in a wide range of studies, and many molecular and cellular phenotypes can now be automatically distinguished. This article presents the next major challenge in microscopy automation, the creation of accurate models of cell organization directly from images, and reviews the progress that has been made towards this challenge.

  1. High-NA metrology and sensing on Berkeley MET5

    NASA Astrophysics Data System (ADS)

    Miyakawa, Ryan; Anderson, Chris; Naulleau, Patrick

    2017-03-01

    In this paper we compare two non-interferometric wavefront sensors suitable for in-situ high-NA EUV optical testing. The first is the AIS sensor, which has been deployed in both inspection and exposure tools. AIS is a compact, optical test that directly measures a wavefront by probing various parts of the imaging optic pupil and measuring localized wavefront curvature. The second is an image-based technique that uses an iterative algorithm based on simulated annealing to reconstruct a wavefront based on matching aerial images through focus. In this technique, customized illumination is used to probe the pupil at specific points to optimize differences in aberration signatures.
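
    The second sensor's reconstruction step, simulated annealing over aberration coefficients until simulated and measured through-focus images match, follows the generic annealing pattern below. The quadratic surrogate cost stands in for the real aerial-image simulation, and all parameters are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      target = rng.normal(0, 0.05, 9)       # "true" aberration coefficients

      def mismatch(c):
          """Surrogate for the image-difference metric: in a real sensor
          this would compare simulated and measured aerial images."""
          return np.sum((c - target) ** 2)

      coeffs, cost, T = np.zeros(9), mismatch(np.zeros(9)), 1.0
      for _ in range(20000):
          trial = coeffs + rng.normal(0, 0.01, 9)
          c = mismatch(trial)
          if c < cost or rng.random() < np.exp(-(c - cost) / T):
              coeffs, cost = trial, c       # accept downhill, rarely uphill
          T *= 0.9995                       # cooling schedule
      print(np.abs(coeffs - target).max())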

  2. 50 Years of Army Computing From ENIAC to MSRC

    DTIC Science & Technology

    2000-09-01

    processing capability. The scientific visualization program was started in 1984 to provide tools and expertise to help researchers graphically... and materials, forces modeling, nanoelectronics, electromagnetics and acoustics, signal and image processing, and simulation and modeling. The ARL... mechanical and electrical calculating equipment, punch card data processing equipment, analog computers, and early digital machines. Before beginning, we

  3. TU-EF-204-07: Add Tube Current Modulation to a Low Dose Simulation Tool for CT Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Y.; Department of Physics, University of Arizona, Tucson, AZ; Wen, G.

    2015-06-15

    Purpose: We extended the capabilities of a low dose simulation tool to model Tube-Current Modulation (TCM). TCM is widely used in clinical practice to reduce radiation dose in CT scans. We expect the tool to be valuable for various clinical applications (e.g., optimizing protocols, comparing reconstruction techniques and evaluating TCM methods). Methods: The tube current is input as a function of z location, instead of a fixed value. Starting from the line integrals of a scan, a new Poisson noise realization at a lower dose is generated for each view. To validate the new functionality, we compared simulated scans with real scans in image space. Results: First we assessed noise in the difference between the low-dose simulations and the original high-dose scan. When the simulated tube current is a step function of z location, the noise at each segment matches the noise of 3 separate constant-tube-current simulations. Secondly, with a phantom that forces TCM, we compared a low-dose simulation with an equivalent real low-dose scan. The mean CT numbers of the simulated scan and the real low-dose scan were 137.7±0.6 and 137.8±0.5, respectively. Furthermore, over 240 ROIs, the noise of the simulated scan and the real low-dose scan was 24.03±0.45 and 23.99±0.43, respectively, and the values were not statistically different (2-sample t-test, p-value = 0.28). The facts that the noise reflected the trend of the TCM curve, and that the absolute noise measurements were not statistically different, validated the TCM function. Conclusion: We successfully added tube-current modulation functionality to an existing low dose simulation tool. We demonstrated that the noise reflected an input tube-current modulation curve. In addition, we verified that the noise and mean CT number of our simulation agreed with a real low dose scan. The authors are all employees of Philips. Yijun Ding is also supported by NIBIB P41EB002035 and NIBIB R01EB000803.
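
    The core operation described in the Methods, a new Poisson realization per view from the line integrals with a z-dependent incident flux, can be sketched in a few lines. This simplified version ignores the noise already present in the high-dose data and electronic noise, which production tools account for; all values are illustrative.

      import numpy as np

      def low_dose_sinogram(p_high, I0_low_z, rng):
          """Simulate lower-dose line integrals: the tube current enters
          as an incident flux per view (z) rather than a constant."""
          expected = I0_low_z[:, None] * np.exp(-p_high)  # expected counts
          counts = rng.poisson(expected).clip(min=1)      # avoid log(0)
          return -np.log(counts / I0_low_z[:, None])

      rng = np.random.default_rng(0)
      p = np.full((360, 512), 2.0)                        # toy sinogram
      tcm = 1e5 * (0.5 + 0.5 * np.sin(np.linspace(0, np.pi, 360)))
      p_low = low_dose_sinogram(p, tcm, rng)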

  4. Improving the quality of transvaginal ultrasound scan by simulation training for general practice residents.

    PubMed

    Le Lous, M; De Chanaud, N; Bourret, A; Senat, M V; Colmant, C; Jaury, P; Tesnière, A; Tsatsaris, V

    2017-01-01

    Ultrasonography (US) is an essential tool for the diagnosis of acute gynecological conditions. General practice (GP) residents are involved in the first-line management of gynecologic emergencies, yet they are often unfamiliar with US equipment, so initial training on simulators was conducted. The aim of this study was to evaluate the impact of simulation-based training on the quality of the sonographic images achieved by GP residents 2 months after the simulation training, versus clinical training alone. Young GP residents assigned to emergency gynecology departments were invited to a one-day simulation-based US training session. A prospective controlled trial was conducted to assess the impact of such training on TVS (transvaginal ultrasound scan) image quality. The first group included GP residents who attended the simulation training course; the second group included GP residents who did not attend the course. Written consent to participate was obtained from all participants. Images achieved 2 months after the training were scored using standardized quality criteria and compared between the groups. The stress generated by this examination was also assessed with a simple numeric scale. A total of 137 residents attended the simulation training, of whom 26 consented to participate in the controlled trial. Sonographic image quality was significantly better in the simulation group for the sagittal view of the uterus (3.6 vs 2.7, p = 0.01), the longitudinal view of the right ovary (2.8 vs 1.4, p = 0.027), and the Morrison space (1.7 vs 0.4, p = 0.034), but the difference was not significant for the left ovary (2.9 vs 1.7, p = 0.189). The stress generated by TVS after 2 months did not differ between the groups (6.0 vs 4.8, p = 0.4). Simulation-based training improved the quality of pelvic US images achieved by GP residents assessed after 2 months of experience in gynecology, compared to clinical training alone.

  5. Interactive 2D to 3D stereoscopic image synthesis

    NASA Astrophysics Data System (ADS)

    Feldman, Mark H.; Lipton, Lenny

    2005-03-01

    Advances in stereoscopic display technologies, graphics card devices, and digital imaging algorithms have opened up new possibilities for synthesizing stereoscopic images. The power of today's DirectX/OpenGL-optimized graphics cards, together with new and creative imaging tools found in software products such as Adobe Photoshop, provides a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique that uses advanced imaging features and custom Windows-based software built on the DirectX 9 API to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with movable, flexible, depth-map-altered textured surfaces and perspective stereoscopic cameras with both visible frustums and zero-parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools needed to highlight elements of a 2D image, simulate hidden areas, and creatively shape them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in and thus allows for a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.
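
    The core operation, shifting each pixel horizontally in proportion to its depth value to synthesize the second eye's view (depth-image-based rendering), can be sketched minimally; the hole filling here is deliberately naive and all parameters are illustrative.

      import numpy as np

      def synthesize_view(image, depth, max_disp=12, sign=+1):
          """Shift pixels by disparity = sign * max_disp * depth (depth in
          [0, 1]); fill disocclusion holes from the nearest left pixel."""
          h, w = depth.shape
          out = np.zeros_like(image)
          filled = np.zeros((h, w), dtype=bool)
          disp = (sign * max_disp * depth).astype(int)
          xs = np.arange(w)
          for y in range(h):
              xt = np.clip(xs + disp[y], 0, w - 1)
              out[y, xt] = image[y, xs]
              filled[y, xt] = True
              for x in range(1, w):        # naive hole filling
                  if not filled[y, x]:
                      out[y, x] = out[y, x - 1]
          return out

      img = np.random.rand(120, 160)
      dep = np.tile(np.linspace(0, 1, 160), (120, 1))
      left = synthesize_view(img, dep, sign=+1)
      right = synthesize_view(img, dep, sign=-1)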

  6. Predictive images of postoperative levator resection outcome using image processing software.

    PubMed

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of the levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) responded positively regarding the usefulness of processed images to predict postoperative appearance. Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery.

  8. Enhancing 4D PC-MRI in an aortic phantom considering numerical simulations

    NASA Astrophysics Data System (ADS)

    Kratzke, Jonas; Schoch, Nicolai; Weis, Christian; Müller-Eschner, Matthias; Speidel, Stefanie; Farag, Mina; Beller, Carsten J.; Heuveline, Vincent

    2015-03-01

    To date, cardiovascular surgery enables the treatment of a wide range of aortic pathologies. One of the current challenges in this field is the detection of high-risk patients for adverse aortic events, who should be treated electively. Reliable diagnostic parameters that indicate the urgency of treatment have to be determined. Functional imaging by means of 4D phase-contrast magnetic resonance imaging (PC-MRI) enables the time-resolved measurement of blood flow velocity in 3D. Applied to aortic phantoms, three-dimensional blood flow properties and their relation to adverse dynamics can be investigated in vitro. Emerging "in silico" methods of numerical simulation can supplement these measurements by computing additional information on crucial parameters. We propose a framework that complements 4D PC-MRI imaging by means of numerical simulation based on the Finite Element Method (FEM). The framework is developed on the basis of a prototypic aortic phantom and validated by 4D PC-MRI measurements of the phantom. Based on physical principles of biomechanics, the derived simulation depicts aortic blood flow properties and characteristics. The framework might help identify factors that induce aortic pathologies such as aortic dilatation or aortic dissection. Alarming thresholds of parameters such as the wall shear stress distribution can be evaluated. The combined techniques of 4D PC-MRI and numerical simulation can be used as complementary tools for risk stratification of aortic pathology.

  9. Electron tomography simulator with realistic 3D phantom for evaluation of acquisition, alignment and reconstruction methods.

    PubMed

    Wan, Xiaohua; Katchalski, Tsvi; Churas, Christopher; Ghosh, Sreya; Phan, Sebastien; Lawrence, Albert; Hao, Yu; Zhou, Ziying; Chen, Ruijuan; Chen, Yu; Zhang, Fa; Ellisman, Mark H

    2017-05-01

    Because of the significance of electron microscope tomography in the investigation of biological structure at nanometer scales, ongoing improvement efforts have been continuous over recent years. This is particularly true in the case of software developments. Nevertheless, verification of improvements delivered by new algorithms and software remains difficult. Current analysis tools do not provide adaptable and consistent methods for quality assessment. This is particularly true with images of biological samples, due to image complexity, variability, low contrast and noise. We report an electron tomography (ET) simulator with accurate ray optics modeling of image formation that includes curvilinear trajectories through the sample, warping of the sample and noise. As a demonstration of the utility of our approach, we have concentrated on providing verification of the class of reconstruction methods applicable to wide field images of stained plastic-embedded samples. Accordingly, we have also constructed digital phantoms derived from serial block face scanning electron microscope images. These phantoms are also easily modified to include alignment features to test alignment algorithms. The combination of more realistic phantoms with more faithful simulations facilitates objective comparison of acquisition parameters, alignment and reconstruction algorithms and their range of applicability. With proper phantoms, this approach can also be modified to include more complex optical models, including distance-dependent blurring and phase contrast functions, such as may occur in cryotomography.

  10. Digimouse: a 3D whole body mouse atlas from CT and cryosection data

    PubMed Central

    Dogdas, Belma; Stout, David; Chatziioannou, Arion F; Leahy, Richard M

    2010-01-01

    We have constructed a three-dimensional (3D) whole body mouse atlas from coregistered x-ray CT and cryosection data of a normal nude male mouse. High quality PET, x-ray CT and cryosection images were acquired post mortem from a single mouse placed in a stereotactic frame with fiducial markers visible in all three modalities. The image data were coregistered to a common coordinate system using the fiducials and resampled to an isotropic 0.1 mm voxel size. Using interactive editing tools we segmented and labelled whole brain, cerebrum, cerebellum, olfactory bulbs, striatum, medulla, masseter muscles, eyes, lachrymal glands, heart, lungs, liver, stomach, spleen, pancreas, adrenal glands, kidneys, testes, bladder, skeleton and skin surface. The final atlas consists of the 3D volume, in which the voxels are labelled to define the anatomical structures listed above, with coregistered PET, x-ray CT and cryosection images. To illustrate use of the atlas we include simulations of 3D bioluminescence and PET image reconstruction. Optical scatter and absorption values are assigned to each organ to simulate realistic photon transport within the animal for bioluminescence imaging. Similarly, 511 keV photon attenuation values are assigned to each structure in the atlas to simulate realistic photon attenuation in PET. The Digimouse atlas and data are available at http://neuroimage.usc.edu/Digimouse.html. PMID:17228106

  11. [Activities of Center for Nondestructive Evaluation, Iowa State University

    NASA Technical Reports Server (NTRS)

    Gray, Joe

    2002-01-01

    The final report of NASA-funded activities at Iowa State University (ISU) for the period between 1/96 and 1/99 covers two main areas of activity. The first is the development and delivery of an x-ray simulation package suitable for evaluating how parameters affect the inspectability of an assembly of parts. The second is the development of image processing tools to remove reconstruction artifacts in x-ray laminagraphy images. The x-ray simulation portion of this work was done by J. Gray and the x-ray laminagraphy work was done by J. Basart. The report is divided into two sections covering the two activities respectively. In addition to the work reported, the funding also covered NASA's membership in the NSF University/Industrial Cooperative Research Center.

  12. Finite-element-based matching of pre- and intraoperative data for image-guided endovascular aneurysm repair

    PubMed Central

    Dumenil, Aurélien; Kaladji, Adrien; Castro, Miguel; Esneault, Simon; Lucas, Antoine; Rochette, Michel; Goksu, Cemil; Haigron, Pascal

    2013-01-01

    Endovascular repair of abdominal aortic aneurysms is a well-established technique throughout the medical and surgical communities. Although increasingly indicated, the technique does have some limitations. Because the intervention is commonly performed under fluoroscopic control, two-dimensional (2D) visualization of the aneurysm requires the injection of a contrast agent. The projective nature of this imaging modality inevitably leads to topographic errors and gives no information on arterial wall quality at the time of deployment. A specially adapted intraoperative navigation interface could increase deployment accuracy and reveal such information, which preoperative three-dimensional (3D) imaging might otherwise provide. One difficulty is the precise matching of preoperative data (images and models) with intraoperative observations affected by anatomical deformations due to tool-tissue interactions. Our proposed solution involves a finite-element-based preoperative simulation of tool-tissue interactions, its adaptive tuning to patient-specific data, and matching with intraoperative data. The biomechanical model was first tuned on a group of 10 patients and then assessed on a second group of 8 patients.

  13. Improved image guidance technique for minimally invasive mitral valve repair using real-time tracked 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Rankin, Adam; Moore, John; Bainbridge, Daniel; Peters, Terry

    2016-03-01

    In the past ten years, numerous new surgical and interventional techniques have been developed for treating heart valve disease without the need for cardiopulmonary bypass. Heart valve repair is now being performed in a blood-filled environment, reinforcing the need for accurate and intuitive imaging techniques. Previous work has demonstrated how augmenting ultrasound with virtual representations of specific anatomical landmarks can greatly simplify interventional navigation challenges and increase patient safety. These techniques often complicate interventions, however, by requiring additional manual steps to define and initialize the virtual models. Furthermore, overlaying virtual elements onto real-time image data can obstruct the view of salient image information. To address these limitations, a system was developed that uses real-time volumetric ultrasound alongside magnetically tracked tools presented in an augmented virtuality environment to provide a streamlined navigation guidance platform. In phantom studies simulating a beating-heart navigation task, procedure duration and tool path metrics achieved performance comparable to previous work on augmented virtuality techniques, and considerably better than standard-of-care ultrasound guidance.

  14. Image-guided robotic surgery.

    PubMed

    Marescaux, Jacques; Soler, Luc

    2004-06-01

    Medical image processing leads to an improvement in patient care by guiding the surgical gesture. Three-dimensional models of patients generated from computed tomographic scans or magnetic resonance imaging allow improved surgical planning and surgical simulation, offering the surgeon the opportunity to rehearse a surgical gesture before performing it for real. These two preoperative steps can also be used intra-operatively thanks to the development of augmented reality, which consists of superimposing the preoperative three-dimensional model of the patient onto the real intraoperative view. Augmented reality provides the surgeon with a view of the patient in transparency and can also guide the surgeon through real-time tracking of surgical tools during the procedure. When adapted to robotic surgery, this tool tracking enables visual servoing, with the ability to automatically position and control surgical robotic arms in three dimensions. It is also now possible to filter physiologic movements such as breathing or the heartbeat. In the future, by combining augmented reality and robotics, these image-guided robotic systems will enable automation of the surgical procedure, which will be the next revolution in surgery.

  15. Characterization of transport phenomena in porous transport layers using X-ray microtomography

    NASA Astrophysics Data System (ADS)

    Hasanpour, S.; Hoorfar, M.; Phillion, A. B.

    2017-06-01

    Among the different methods available for estimating the transport properties of porous transport layers (PTLs) of polymer electrolyte membrane fuel cells, X-ray micro computed tomography (X-μCT) imaging in combination with image-based numerical simulation has been recognized as a viable tool. In this study, four commercially available single-layer and dual-layer PTLs are analyzed using this method in order to compare and contrast transport properties between different PTLs, as well as the variability within a single sheet. Complete transport property datasets are created for each PTL. The simulation predictions indicate that PTLs with high porosity show considerable variability in permeability and effective diffusivity, while PTLs with low porosity do not. Furthermore, the Tomadakis-Sotirchos (TS) analytical expressions for porous media match the image-based simulations when porosity is relatively low, but predict higher permeability and effective diffusivity for porosity values greater than 80%. Finally, the simulations show that cracks within the microporous layer (MPL) of dual-layer PTLs have a significant effect on the overall permeability and effective diffusivity, which must be considered when estimating the transport properties of dual-layer PTLs. These findings can be used to improve macro-scale models of product and reactant transport within fuel cells, and ultimately fuel cell efficiency.

  16. Determining skeletal muscle architecture with Laplacian simulations: a comparison with diffusion tensor imaging.

    PubMed

    Handsfield, Geoffrey G; Bolsterlee, Bart; Inouye, Joshua M; Herbert, Robert D; Besier, Thor F; Fernandez, Justin W

    2017-12-01

    Determination of skeletal muscle architecture is important for accurately modeling muscle behavior. Current methods for 3D muscle architecture determination can be costly and time-consuming, making them prohibitive for clinical or modeling applications. Computational approaches such as Laplacian flow simulations can estimate muscle fascicle orientation based on muscle shape and aponeurosis location. The accuracy of this approach is unknown, however, since it has not been validated against other standards for muscle architecture determination. In this study, muscle architectures from the Laplacian approach were compared to those determined from diffusion tensor imaging in eight adult medial gastrocnemius muscles. The datasets were subdivided into training and validation sets, and computational fluid dynamics software was used to conduct Laplacian simulations. In training sets, inputs of muscle geometry, aponeurosis location, and geometric flow guides resulted in good agreement between methods. Application of the method to validation sets showed no significant differences in pennation angle (mean difference [Formula: see text]) or fascicle length (mean difference 0.9 mm). Laplacian simulation was thus effective at predicting gastrocnemius muscle architectures in healthy volunteers using imaging-derived muscle shape and aponeurosis locations. This method may serve as a tool for determining muscle architecture in silico and as a complement to other approaches.
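
    The Laplacian approach itself is easy to prototype: solve Laplace's equation between the two aponeuroses and take the gradient of the potential as the local fascicle direction. A minimal finite-difference sketch on a 2D slice (geometry, boundary handling, and grid size are toy simplifications of the paper's 3D setup):

      import numpy as np

      # 2D cross-section: fixed potentials on the two aponeuroses
      phi = np.zeros((60, 40))
      phi[0, :], phi[-1, :] = 0.0, 1.0

      for _ in range(5000):                 # Jacobi relaxation
          new = phi.copy()
          new[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                    phi[1:-1, :-2] + phi[1:-1, 2:])
          phi = new
          phi[0, :], phi[-1, :] = 0.0, 1.0  # re-impose boundaries

      gy, gx = np.gradient(phi)             # fascicle orientation field
      pennation_deg = np.degrees(np.arctan2(gx, gy))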

  17. What can we learn from in-soil imaging of a live plant: X-ray Computed Tomography and 3D numerical simulation of root-soil system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaofan; Varga, Tamas; Liu, Chongxuan

    Plant roots play a critical role in plant-soil-microbe interactions that occur in the rhizosphere, as well as in processes with important implications for farming, forest management and climate change. X-ray computed tomography (XCT) has been proven to be an effective tool for non-invasive root imaging and analysis. A combination of XCT, open-source software, and our own code was used to noninvasively image a prairie dropseed (Sporobolus heterolepis) specimen, segment the root data to obtain a 3D image of the root structure at 31 µm resolution, and extract quantitative information (root volume and surface area) from the 3D data. Based on the mesh generated from the root structure, computational fluid dynamics (CFD) simulations were applied to numerically investigate the root-soil-groundwater system. The plant root conductivity, soil hydraulic conductivity and transpiration rate were shown to control the groundwater distribution. The flow variability and soil water distributions under different scenarios were investigated. Parameterizations were evaluated to show their impacts on the average conductivity. The pore-scale modeling approach provides realistic simulations of rhizosphere flow processes and useful information that can be linked to upscaled models.

  18. A gamma beam profile imager for ELI-NP Gamma Beam System

    NASA Astrophysics Data System (ADS)

    Cardarelli, P.; Paternò, G.; Di Domenico, G.; Consoli, E.; Marziani, M.; Andreotti, M.; Evangelisti, F.; Squerzanti, S.; Gambaccini, M.; Albergo, S.; Cappello, G.; Tricomi, A.; Veltri, M.; Adriani, O.; Borgheresi, R.; Graziani, G.; Passaleva, G.; Serban, A.; Starodubtsev, O.; Variola, A.; Palumbo, L.

    2018-06-01

    The Gamma Beam System of ELI-Nuclear Physics is a high brilliance monochromatic gamma source based on the inverse Compton interaction between an intense high power laser and a bright electron beam with tunable energy. The source, currently being assembled in Magurele (Romania), is designed to provide a beam with tunable average energy ranging from 0.2 to 19.5 MeV, rms energy bandwidth down to 0.5% and flux of about 10⁸ photons/s. The system includes a set of detectors for the diagnostic and complete characterization of the gamma beam. To evaluate the spatial distribution of the beam a gamma beam profile imager is required. For this purpose, a detector based on a scintillator target coupled to a CCD camera was designed and a prototype was tested at INFN-Ferrara laboratories. A set of analytical calculations and Monte Carlo simulations were carried out to optimize the imager design and evaluate the performance expected with ELI-NP gamma beam. In this work the design of the imager is described in detail, as well as the simulation tools used and the results obtained. The simulation parameters were tuned and cross-checked with the experimental measurements carried out on the assembled prototype using the beam from an x-ray tube.

  19. Accounting for aquifer heterogeneity from geological data to management tools.

    PubMed

    Blouin, Martin; Martel, Richard; Gloaguen, Erwan

    2013-01-01

    A nested workflow of multiple-point geostatistics (MPG) and sequential Gaussian simulation (SGS) was tested on a study area of 6 km² located about 20 km northwest of Quebec City, Canada. In order to assess the heterogeneity of its geological and hydrogeological parameters and to provide tools for evaluating uncertainties in aquifer management, direct and indirect field measurements are used as inputs to the geostatistical simulations to reproduce large- and small-scale heterogeneities. To do so, the lithological information is first associated with equivalent hydrogeological facies (hydrofacies) according to hydraulic properties measured at several wells. Then, heterogeneous hydrofacies (HF) realizations are generated with the MPG algorithm, using a prior geological model as the training image (TI). The hydraulic conductivity (K) heterogeneity within each HF is finally modeled using the SGS algorithm. Different K models are integrated in a finite-element hydrogeological model to compute multiple transport simulations. The different scenarios exhibit variations in mass transport path and dispersion associated with the large- and small-scale heterogeneity, respectively. Three-dimensional maps showing the probability of exceeding different thresholds are presented as examples of management tools.
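
    The exceedance-probability maps offered as management tools are a voxel-wise frequency over the stack of realizations; with the MPG/SGS realizations in hand the computation is one line (synthetic stand-in realizations below, all values illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      # 100 stand-in hydraulic conductivity realizations on a 50x50 grid
      K = rng.lognormal(mean=-9, sigma=1.0, size=(100, 50, 50))   # m/s

      threshold = 1e-4
      p_exceed = (K > threshold).mean(axis=0)   # probability of K > threshold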

  20. Iterative method for in situ measurement of lens aberrations in lithographic tools using CTC-based quadratic aberration model.

    PubMed

    Liu, Shiyuan; Xu, Shuang; Wu, Xiaofei; Liu, Wei

    2012-06-18

    This paper proposes an iterative method for in situ lens aberration measurement in lithographic tools based on a quadratic aberration model (QAM), a natural extension of the linear model formed by taking into account interactions among individual Zernike coefficients. By introducing a generalized operator named cross triple correlation (CTC), the quadratic model can be evaluated quickly and accurately with the help of the fast Fourier transform (FFT). The Zernike coefficients, up to the 37th order or even higher, are determined by solving an inverse problem through an iterative procedure from several through-focus aerial images of a specially designed mask pattern. The simulation work has validated the theoretical derivation and confirms that such a method is simple to implement and yields a wavefront estimate of superior quality, particularly when the aberrations are relatively large. It is fully expected that this method will provide a useful practical means for in-line monitoring of the imaging quality of lithographic tools.

  1. A framework for optimizing micro-CT in dual-modality micro-CT/XFCT small-animal imaging system

    NASA Astrophysics Data System (ADS)

    Vedantham, Srinivasan; Shrestha, Suman; Karellas, Andrew; Cho, Sang Hyun

    2017-09-01

    Dual-modality Computed Tomography (CT)/X-ray Fluorescence Computed Tomography (XFCT) can be a valuable tool for imaging and quantifying the organ and tissue distribution of small concentrations of high-atomic-number materials in small-animal systems. In this work, a framework for optimizing the micro-CT imaging component of the dual-modality system is described, applicable both when the micro-CT images are acquired concurrently with XFCT under the x-ray spectral conditions for XFCT, and when they are acquired sequentially and independently of XFCT. The framework utilizes cascaded systems analysis for task-specific determination of the detectability index using numerical observer models at a given radiation dose, where the radiation dose is determined using Monte Carlo simulations.

  2. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images.

    PubMed

    Elad, M; Feuer, A

    1997-01-01

    The three main tools in single-image restoration theory are the maximum likelihood (ML) estimator, the maximum a posteriori probability (MAP) estimator, and the set-theoretic approach using projection onto convex sets (POCS). This paper uses these known tools to address the more complicated problem of superresolution restoration, in which an improved-resolution image is restored from several geometrically warped, blurred, noisy, and downsampled measured images. The superresolution restoration problem is modeled and analyzed from the ML, MAP, and POCS points of view, yielding a generalization of the known superresolution restoration methods. The proposed restoration approach is general but assumes explicit knowledge of the linear space- and time-variant blur, the (additive Gaussian) noise, the different measured resolutions, and the (smooth) motion characteristics. A hybrid method combining the simplicity of ML with the incorporation of nonellipsoid constraints is presented, giving improved restoration performance compared with the ML and POCS approaches. The hybrid method is shown to converge to the unique optimal solution of a new definition of the optimization problem. Superresolution restoration from motionless measurements is also discussed. Simulations demonstrate the power of the proposed methodology.
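
    The ML branch of this framework has a particularly compact iterative form: steepest descent on the least-squares cost of the model y_k = D H F_k x + n. The sketch below uses a Gaussian blur, integer shifts, and 2x decimation as illustrative stand-ins for the paper's general space- and time-variant operators.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, shift

    def H(x):  return gaussian_filter(x, sigma=1.0)   # blur (self-adjoint)
    def D(x):  return x[::2, ::2]                     # 2x decimation
    def DT(y):                                        # adjoint: zero-fill upsample
        up = np.zeros((2 * y.shape[0], 2 * y.shape[1])); up[::2, ::2] = y
        return up

    shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]         # known motion (assumed)
    def F(x, s):  return shift(x, s, order=1, mode='nearest')
    def FT(x, s): return shift(x, (-s[0], -s[1]), order=1, mode='nearest')

    def sr_ml(frames, hi_res_shape, n_iter=50, mu=0.5):
        x = np.zeros(hi_res_shape)
        for _ in range(n_iter):
            grad = np.zeros(hi_res_shape)
            for y, s in zip(frames, shifts):
                grad += FT(H(DT(y - D(H(F(x, s))))), s)   # adjoint chain on residual
            x += mu * grad                                # steepest-descent update
        return x
    ```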

  3. Simulation in Otolaryngology: A teaching and training tool.

    PubMed

    Thone, Natalie; Winter, Matías; García-Matte, Raimundo J; González, Claudia

    Simulation in medical education is an effective method of teaching and learning, allowing standardisation of the learning and teaching processes without compromising the patient. Different types of simulation exist within the subspecialty areas of Otolaryngology. Models that have been developed include phantom imaging, dummy patients, virtual models and animal models, which are used to teach and practise different skills. Each model has advantages and disadvantages; virtual reality is an emerging model with a promising future. However, there is still a need for further development of simulation in Otolaryngology. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. All rights reserved.

  4. ImagingReso: A Tool for Neutron Resonance Imaging

    DOE PAGES

    Zhang, Yuxuan; Bilheux, Jean-Christophe

    2017-11-01

    ImagingReso is an open-source Python library that simulates the neutron resonance signal for neutron imaging measurements. By defining sample information such as density, thickness in the neutron path, and the isotopic ratios of the elemental composition of the material, this package plots the expected resonance peaks for a selected neutron energy range. Various sample types, such as layers of single elements (Ag, Co, etc. in solid form), chemical compounds (UO3, Gd2O3, etc.), or even multiple layers of both types, can be plotted with this package. Major plotting features include display of the transmission/attenuation in wavelength, energy, and time scale, and the ability to show or hide elemental and isotopic contributions in the total resonance signal.
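
    The physics behind the plotted signal (shown here rather than the ImagingReso API itself) is the attenuation law T(E) = exp(-N sigma(E) t). The sketch below evaluates it for a toy single-level Breit-Wigner resonance; all numbers are illustrative assumptions.

    ```python
    import numpy as np

    E = np.linspace(1.0, 20.0, 2000)     # neutron energy [eV]
    E0, Gamma, sigma0 = 5.2, 0.1, 1e4    # resonance energy, width [eV]; peak [barn]
    sigma = sigma0 * (Gamma / 2)**2 / ((E - E0)**2 + (Gamma / 2)**2)

    N = 5e22 * 1e-24                     # atom density in atoms/(barn*cm)
    t = 0.1                              # layer thickness [cm]
    transmission = np.exp(-N * sigma * t)
    attenuation = 1.0 - transmission     # the dip plotted at the resonance
    ```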

  5. Simulation of imperfections in plastic lenses - transferring local refractive index changes into surface shape modifications

    NASA Astrophysics Data System (ADS)

    Arasa, Josep; Pizarro, Carles; Blanco, Patricia

    2016-06-01

    Injection molded plastic lenses have continuously improved their optical quality and are nowadays as common as glass lenses in image forming devices. However, during the manufacturing process unavoidable fluctuations in material density occur, resulting in local changes in the distribution of refractive index, which degrade the imaging properties of the polymer lens. Such material density fluctuations correlate with phase delays, which opens a path for mapping them. However, it is difficult to transfer the measured variations in refractive index into conventional optical simulation tools. We therefore propose a method that converts the local variations in refractive index into local shape changes of one surface of the lens, which can then be described as a free-form surface that is easy to introduce into conventional simulation tools. The proposed method was tested on a commercial gradient index (GRIN) lens for a set of six different object positions, using the sagittal and tangential MTF cuts to compare the real lens against a lens with homogeneous refractive index whose last surface was converted into a free-form shape encoding the internal refractive index changes. The same procedure was used to reproduce the local refractive index changes of an injected plastic lens, measured using an in-house built polariscopic arrangement, showing the capability of the method to provide successful results.
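
    The core conversion can be stated in two lines: a local index perturbation dn acting over a path length t delays the wavefront by dn*t, and the same delay is produced by a sag change dz = dn*t/(n - 1) on a surface between the lens (index n) and air. The numbers below are assumptions for illustration.

    ```python
    n_lens = 1.53    # nominal polymer index (assumed)
    t = 3.0e-3       # local ray path length through the lens [m] (assumed)
    dn = 2.0e-4      # measured local index change (assumed)

    opd = dn * t                   # optical path delay [m]
    dz = opd / (n_lens - 1.0)      # equivalent local surface sag change [m]
    print(f"OPD = {opd * 1e9:.0f} nm -> dz = {dz * 1e6:.2f} um")
    ```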

  6. Use of simulated experiments for material characterization of brittle materials subjected to high strain rate dynamic tension

    PubMed Central

    Saletti, Dominique

    2017-01-01

    Rapid progress in ultra-high-speed imaging has allowed material properties to be studied at high strain rates by applying full-field measurements and inverse identification methods. Nevertheless, the sensitivity of these techniques still requires a better understanding, since various extrinsic factors present during an actual experiment make it difficult to separate the different sources of error that can significantly affect the quality of the identified results. This study presents a methodology using simulated experiments to investigate the accuracy of the so-called spalling technique (used to study the tensile properties of concrete subjected to high strain rates) by numerically simulating the entire identification process. The experimental technique uses the virtual fields method and the grid method. The methodology consists of reproducing the recording process of an ultra-high-speed camera by generating sequences of synthetically deformed images of a sample surface, which are then analysed using the standard tools. The uncertainty of the identified parameters, such as Young's modulus along with the stress–strain constitutive response, is investigated by introducing the most significant user-dependent parameters (i.e. acquisition speed, camera dynamic range, grid sampling, blurring), showing that the technique can be an effective tool for error investigation. This article is part of the themed issue ‘Experimental testing and modelling of brittle materials at high strain rates’. PMID:27956505

  7. Calibrated thermal microscopy of the tool-chip interface in machining

    NASA Astrophysics Data System (ADS)

    Yoon, Howard W.; Davies, Matthew A.; Burns, Timothy J.; Kennedy, M. D.

    2000-03-01

    A critical parameter in predicting tool wear during machining, and in accurate computer simulations of machining, is the spatially resolved temperature at the tool-chip interface. We describe the development and calibration of a nearly diffraction-limited thermal-imaging microscope to measure spatially resolved temperatures during the machining of AISI 1045 steel with a tungsten-carbide tool bit. The microscope images a 0.5 mm × 0.5 mm target area with <5 µm spatial resolution and is based on a commercial InSb 128 × 128 focal plane array with an all-reflective microscope objective. The minimum frame acquisition time is <1 ms. The microscope is calibrated using a standard blackbody source from the radiance temperature calibration laboratory at the National Institute of Standards and Technology, and the emissivity of the machined material is deduced from infrared reflectivity measurements. The steady-state thermal images from the machining of 1045 steel are compared to previous determinations of tool temperatures from micro-hardness measurements and are found to be in agreement with those studies. The measured average chip temperatures are also in agreement with the temperature rise estimated from energy-balance considerations. From these calculations, and from the agreement between the experimental and calculated determinations of the emissivity of the 1045 steel, the standard uncertainty of the temperature measurements is estimated to be about 45 °C at 900 °C.
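
    The calibration logic amounts to inverting Planck's law at an effective band wavelength after dividing out the emissivity. A minimal sketch, with the wavelength and emissivity as assumed values:

    ```python
    import numpy as np

    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

    def planck(lam, T):
        """Blackbody spectral radiance at wavelength lam [m], temperature T [K]."""
        return 2 * h * c**2 / lam**5 / (np.exp(h * c / (lam * kB * T)) - 1.0)

    def T_from_radiance(L, lam, eps):
        """Invert L = eps * planck(lam, T) for T."""
        return h * c / (lam * kB) / np.log(1.0 + eps * 2 * h * c**2 / (lam**5 * L))

    lam = 4.0e-6                          # effective mid-IR wavelength [m] (assumed)
    L = 0.85 * planck(lam, 900 + 273.15)  # emissivity 0.85 assumed for the steel
    print(T_from_radiance(L, lam, 0.85) - 273.15)   # recovers ~900 C
    ```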

  8. GATE: a simulation toolkit for PET and SPECT.

    PubMed

    Jan, S; Santin, G; Strul, D; Staelens, S; Assié, K; Autret, D; Avner, S; Barbier, R; Bardiès, M; Bloomfield, P M; Brasse, D; Breton, V; Bruyndonckx, P; Buvat, I; Chatziioannou, A F; Choi, Y; Chung, Y H; Comtat, C; Donnarieix, D; Ferrer, L; Glick, S J; Groiselle, C J; Guez, D; Honore, P F; Kerhoas-Cavata, S; Kirov, A S; Kohli, V; Koole, M; Krieguer, M; van der Laan, D J; Lamare, F; Largeron, G; Lartizien, C; Lazaro, D; Maas, M C; Maigne, L; Mayet, F; Melot, F; Merheb, C; Pennacchio, E; Perez, J; Pietrzyk, U; Rannou, F R; Rey, M; Schaart, D R; Schmidtlein, C R; Simon, L; Song, T Y; Vieira, J M; Visvikis, D; Van de Walle, R; Wieërs, E; Morel, C

    2004-10-07

    Monte Carlo simulation is an essential tool in emission tomography that can assist in the design of new medical imaging devices, the optimization of acquisition protocols and the development or assessment of image reconstruction algorithms and correction techniques. GATE, the Geant4 Application for Tomographic Emission, encapsulates the Geant4 libraries to achieve a modular, versatile, scripted simulation toolkit adapted to the field of nuclear medicine. In particular, GATE allows the description of time-dependent phenomena such as source or detector movement, and source decay kinetics. This feature makes it possible to simulate time curves under realistic acquisition conditions and to test dynamic reconstruction algorithms. This paper gives a detailed description of the design and development of GATE by the OpenGATE collaboration, whose continuing objective is to improve, document and validate GATE by simulating commercially available imaging systems for PET and SPECT. Large effort is also invested in the ability and the flexibility to model novel detection systems or systems still under design. A public release of GATE licensed under the GNU Lesser General Public License can be downloaded at http://www-lphe.epfl.ch/GATE/. Two benchmarks developed for PET and SPECT to test the installation of GATE and to serve as a tutorial for the users are presented. Extensive validation of the GATE simulation platform has been started, comparing simulations and measurements on commercially available acquisition systems. References to those results are listed. The future prospects towards the gridification of GATE and its extension to other domains such as dosimetry are also discussed.

  9. GATE - Geant4 Application for Tomographic Emission: a simulation toolkit for PET and SPECT

    PubMed Central

    Jan, S.; Santin, G.; Strul, D.; Staelens, S.; Assié, K.; Autret, D.; Avner, S.; Barbier, R.; Bardiès, M.; Bloomfield, P. M.; Brasse, D.; Breton, V.; Bruyndonckx, P.; Buvat, I.; Chatziioannou, A. F.; Choi, Y.; Chung, Y. H.; Comtat, C.; Donnarieix, D.; Ferrer, L.; Glick, S. J.; Groiselle, C. J.; Guez, D.; Honore, P.-F.; Kerhoas-Cavata, S.; Kirov, A. S.; Kohli, V.; Koole, M.; Krieguer, M.; van der Laan, D. J.; Lamare, F.; Largeron, G.; Lartizien, C.; Lazaro, D.; Maas, M. C.; Maigne, L.; Mayet, F.; Melot, F.; Merheb, C.; Pennacchio, E.; Perez, J.; Pietrzyk, U.; Rannou, F. R.; Rey, M.; Schaart, D. R.; Schmidtlein, C. R.; Simon, L.; Song, T. Y.; Vieira, J.-M.; Visvikis, D.; Van de Walle, R.; Wieërs, E.; Morel, C.

    2012-01-01

    Monte Carlo simulation is an essential tool in emission tomography that can assist in the design of new medical imaging devices, the optimization of acquisition protocols, and the development or assessment of image reconstruction algorithms and correction techniques. GATE, the Geant4 Application for Tomographic Emission, encapsulates the Geant4 libraries to achieve a modular, versatile, scripted simulation toolkit adapted to the field of nuclear medicine. In particular, GATE allows the description of time-dependent phenomena such as source or detector movement, and source decay kinetics. This feature makes it possible to simulate time curves under realistic acquisition conditions and to test dynamic reconstruction algorithms. This paper gives a detailed description of the design and development of GATE by the OpenGATE collaboration, whose continuing objective is to improve, document, and validate GATE by simulating commercially available imaging systems for PET and SPECT. Large effort is also invested in the ability and the flexibility to model novel detection systems or systems still under design. A public release of GATE licensed under the GNU Lesser General Public License can be downloaded at the address http://www-lphe.epfl.ch/GATE/. Two benchmarks developed for PET and SPECT to test the installation of GATE and to serve as a tutorial for the users are presented. Extensive validation of the GATE simulation platform has been started, comparing simulations and measurements on commercially available acquisition systems. References to those results are listed. The future prospects toward the gridification of GATE and its extension to other domains such as dosimetry are also discussed. PMID:15552416

  10. Bone cartilage imaging with x-ray interferometry using a practical x-ray tube

    NASA Astrophysics Data System (ADS)

    Kido, Kazuhiro; Makifuchi, Chiho; Kiyohara, Junko; Itou, Tsukasa; Honda, Chika; Momose, Atsushi

    2010-04-01

    The purpose of this study was to design an X-ray Talbot-Lau interferometer for imaging bone cartilage using a practical X-ray tube, and to develop that imaging system for clinical use. Wave-optics simulation was performed to design the interferometer, comprising a practical X-ray tube, a source grating, two X-ray gratings, and an X-ray detector. An imaging system was built based on the results of the simulation. The specifications were as follows: the X-ray tube (tungsten anode; Toshiba, Tokyo, Japan) had a focal spot size of 0.3 mm; the tube voltage was set at 40 kVp with an additional aluminum filter, giving a mean energy of 31 keV; the pixel size of the X-ray detector, a Condor 486 (Fairchild Imaging, California, USA), was 15 μm; and the second grating was a Ronchi-type grating with a pitch of 5.3 μm. Imaging performance was examined at X-ray doses of 0.5, 3 and 9 mGy; the bone cartilage of a chicken wing was clearly depicted at 3 and 9 mGy, consistent with the simulation's predictions. The results suggest that X-ray Talbot-Lau interferometry would be a promising tool for detecting soft tissues in the human body, such as bone cartilage, for the X-ray image diagnosis of rheumatoid arthritis. Further optimization of the system will follow to reduce the X-ray dose for clinical use.
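
    As an order-of-magnitude check on the geometry (the real design relies on fractional Talbot distances and the Lau condition, which the wave-optics simulation handles), the classic Talbot length for the quoted G2 pitch and mean energy is:

    ```python
    E_keV = 31.0
    lam = 1.2398e-6 / (E_keV * 1e3)   # wavelength [m] from lambda = hc/E
    p = 5.3e-6                        # second-grating pitch [m]
    z_T = 2 * p**2 / lam              # classic Talbot length
    print(f"lambda = {lam * 1e12:.1f} pm, z_T = {z_T:.2f} m")   # ~40 pm, ~1.4 m
    ```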

  11. Antibiofouling polymer coated gold nanoparticles as a dual modal contrast agent for X-ray and photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Huang, Guojia; Yuan, Yi; Xing, Da

    2011-01-01

    X-ray imaging is one of the most useful diagnostic tools in hospitals in terms of frequency of use and cost, while photoacoustic (PA) imaging is a rapidly emerging non-invasive imaging technology that combines high optical contrast with high ultrasound resolution. In this study, for the first time, we used gold nanoparticles (GNPs) as a dual-modal contrast agent for X-ray and PA imaging. Soft gelatin phantoms with embedded tumor simulators containing GNPs at various concentrations are clearly shown in both X-ray and PA images. With GNPs as a dual-modal contrast agent, X-ray imaging can rapidly locate the tumor and provide morphological information, whereas PA imaging has important potential applications in the image-guided therapy of superficial tumors such as breast cancer, melanoma and Merkel cell carcinoma.

  12. Ionospheric Simulation System for Satellite Observations and Global Assimilative Model Experiments - ISOGAME

    NASA Technical Reports Server (NTRS)

    Pi, Xiaoqing; Mannucci, Anthony J.; Verkhoglyadova, Olga; Stephens, Philip; Iijima, Byron A.

    2013-01-01

    Modeling and imaging the Earth's ionosphere, as well as understanding its structures, inhomogeneities, and disturbances, is a key part of NASA's Heliophysics Directorate science roadmap. This invention provides a design tool for scientific missions focused on the ionosphere. Assessing quantitatively the impact of a new observation system on our capability to image and model the ionosphere is a scientifically important and technologically challenging task; the question arises whenever a new satellite system is proposed, a new type of data emerges, or a new modeling technique is developed. The proposed constellation would be part of a new observation system with more low-Earth orbiters tracking more radio occultation signals broadcast by Global Navigation Satellite Systems (GNSS) than are offered by the current GPS and COSMIC observation system. A simulation system was developed to fulfill this task. The system is composed of a suite of software that combines the Global Assimilative Ionospheric Model (GAIM), including first-principles and empirical ionospheric models, a multiple-dipole geomagnetic field model, data assimilation modules, an observation simulator, visualization software, and orbit design, simulation, and optimization software.

  13. Protection heater design validation for the LARP magnets using thermal imaging

    DOE PAGES

    Marchevsky, M.; Turqueti, M.; Cheng, D. W.; ...

    2016-03-16

    Protection heaters are essential elements of a quench protection scheme for high-field accelerator magnets. Various heater designs fabricated by LARP and CERN have already been tested in the LARP high-field quadrupole HQ and are presently being built into the coils of the high-field quadrupole MQXF. In order to compare the heat flow characteristics and thermal diffusion timescales of different heater designs, we powered heaters of two different geometries in ambient conditions and imaged the resulting thermal distributions using a high-sensitivity thermal video camera. We observed a peculiar spatial periodicity in the temperature distribution maps, potentially linked to the structure of the underlying cable. Two-dimensional numerical simulations of heat diffusion and spatial heat distribution were conducted, and the results of simulation and experiment were compared. Imaging revealed hot spots due to current concentration around high-curvature points of heater strips with varying cross section, and visualized the thermal effects of various interlayer structural defects. Furthermore, thermal imaging can become a future quality control tool for the MQXF coil heaters.
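
    The comparison simulation is a standard explicit finite-difference solve of the heat equation. A minimal 2-D sketch with an FTCS update and a resistive-trace source; geometry and material values are illustrative, not the MQXF ones:

    ```python
    import numpy as np

    nx, ny = 200, 100
    dx = 1e-4                     # grid spacing [m]
    alpha = 1e-6                  # thermal diffusivity [m^2/s] (assumed)
    dt = 0.2 * dx**2 / alpha      # stable explicit step (< dx^2 / (4 alpha))

    T = np.zeros((ny, nx))
    source = np.zeros_like(T)
    source[45:55, 20:180] = 1e3   # heater trace as a volumetric source [K/s]

    for _ in range(2000):
        lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
               np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
        T += dt * (alpha * lap + source)
        T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 0.0   # clamped boundaries
    ```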

  14. Simultaneous radiofrequency (RF) heating and magnetic resonance (MR) thermal mapping using an intravascular MR imaging/RF heating system.

    PubMed

    Qiu, Bensheng; El-Sharkawy, Abdel-Monem; Paliwal, Vaishali; Karmarkar, Parag; Gao, Fabao; Atalar, Ergin; Yang, Xiaoming

    2005-07-01

    Previous studies have confirmed the possibility of using an intravascular MR imaging guidewire (MRIG) as a heating source to enhance vascular gene transfection/expression. This motivated us to develop a new intravascular system that can perform MR imaging, radiofrequency (RF) heating, and MR temperature monitoring simultaneously in an MR scanner. To validate this concept, a series of mathematical simulations of RF power loss along a 0.032-inch MRIG and of the spatial RF energy distribution were performed to determine the optimum RF heating frequency. Then, an RF generator/amplifier and a filter box were built. The possibility of simultaneous RF heating and MR thermal mapping with the system was confirmed in vitro using a phantom, and the obtained thermal mapping profile was compared with the simulated RF power distribution. Subsequently, the feasibility of simultaneous RF heating and temperature monitoring was successfully validated in vivo in the aorta of living rabbits. This MR imaging/RF heating system offers a potential tool for intravascular MR-mediated, RF-enhanced vascular gene therapy.
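
    The abstract does not state which MR thermometry method was used; a common choice for such monitoring is the proton resonance frequency (PRF) shift method, sketched here purely as a generic illustration with assumed scan parameters:

    ```python
    import numpy as np

    GAMMA = 42.58e6      # proton gyromagnetic ratio [Hz/T]
    ALPHA = -0.01e-6     # PRF thermal coefficient [fraction per degree C]

    def delta_T(phase, phase_ref, B0=1.5, TE=20e-3):
        """Temperature change map [C] from two phase images [rad]."""
        return (phase - phase_ref) / (2 * np.pi * GAMMA * ALPHA * B0 * TE)
    ```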

  15. Image Quality Ranking Method for Microscopy

    PubMed Central

    Koho, Sami; Fazeli, Elnaz; Eriksson, John E.; Hänninen, Pekka E.

    2016-01-01

    Automated analysis of microscope images is necessitated by the increased need for high-resolution follow-up of events in time. Manually finding the right images to be analyzed, or eliminating images from data analysis, is a common day-to-day problem in microscopy research, and the constantly growing size of image datasets does not help matters. We propose a simple method and a software tool for sorting images within a dataset according to their relative quality. We demonstrate the applicability of our method in finding good-quality images in a STED microscope sample-preparation optimization image dataset. The results are validated by comparison to subjective opinion scores, as well as to five state-of-the-art blind image quality assessment methods. We also show how our method can be applied to eliminate useless out-of-focus images in a High-Content-Screening experiment. We further evaluate the ability of our image quality ranking method to detect out-of-focus images through extensive simulations and by comparing its performance against previously published, well-established microscopy autofocus metrics. PMID:27364703
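
    The sort-by-quality workflow is easy to sketch. The metric below (variance of the Laplacian, a common focus measure) is a stand-in, not the paper's own metric; it only illustrates the ranking loop.

    ```python
    import numpy as np
    from scipy import ndimage

    def sharpness(img):
        """Focus score: variance of the Laplacian response."""
        return ndimage.laplace(img.astype(float)).var()

    def rank_images(stack):
        """Indices of images sorted from highest to lowest score."""
        scores = np.array([sharpness(img) for img in stack])
        return np.argsort(scores)[::-1], scores

    stack = [np.random.rand(64, 64) for _ in range(5)]   # placeholder images
    order, scores = rank_images(stack)
    ```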

  16. Computational simulation of breast compression based on segmented breast and fibroglandular tissues on magnetic resonance images.

    PubMed

    Shih, Tzu-Ching; Chen, Jeon-Hor; Liu, Dongxu; Nie, Ke; Sun, Lizhi; Lin, Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying

    2010-07-21

    This study presents a finite element-based computational model to simulate the three-dimensional deformation of a breast and fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and craniocaudal and mediolateral oblique compression, as used in mammography, was applied. The geometry of the whole breast and the segmented fibroglandular tissues within the breast were reconstructed using triangular meshes by using the Avizo 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the nonlinear elastic tissue deformation under compression, using the MSC.Marc software package. The model was tested in four cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these four cases at a compression ratio of 60% was in the range of 5-7 cm, which is a typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at a compression ratio of 60% was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on magnetic resonance imaging (MRI), which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to the clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities--such as MRI, mammography, whole breast ultrasound and molecular imaging--that are performed using different body positions and under different compression conditions.

  17. Learning Photogrammetry with Interactive Software Tool PhoX

    NASA Astrophysics Data System (ADS)

    Luhmann, T.

    2016-06-01

    Photogrammetry is a complex topic in high-level university teaching, especially in the fields of geodesy, geoinformatics and metrology, where high-quality results are demanded. In addition, more and more black-box solutions for 3D image processing and point cloud generation are available that easily generate nice results, e.g. by structure-from-motion approaches. Within this context, the classical approach to teaching photogrammetry (e.g. focusing on aerial stereophotogrammetry) has to be reformed in order to educate students and professionals on new topics and provide them with more insight into what happens behind the scenes. For around 20 years, photogrammetry courses at the Jade University of Applied Sciences in Oldenburg, Germany, have included digital photogrammetry software that provides individual exercises, deep analysis of calculation results and a wide range of visualization tools for almost all standard tasks in photogrammetry. In recent years the software package PhoX has been developed as part of a new didactic concept in photogrammetry and related subjects. It also serves as an analysis tool in recent research projects. PhoX consists of a project-oriented data structure for images, image data, measured points and features, and 3D objects. It provides almost all basic photogrammetric measurement tools, image processing, calculation methods, graphical analysis functions, simulations and much more. Students use the program to conduct predefined exercises in which they can analyse results in a high level of detail. This includes the analysis of statistical quality parameters but also the meaning of transformation parameters, rotation matrices, calibration and orientation data. As one specific advantage, PhoX allows the interactive modification of single parameters and a direct view of the resulting effect in image or object space.
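
    The model underlying most of these exercises is the central projection. In one common textbook form, with rotation matrix R = (r_ij), projection centre (X0, Y0, Z0), principal distance c, and image-space corrections Δx', Δy':

    ```latex
    \begin{align}
    x' &= x'_0 - c\,\frac{r_{11}(X - X_0) + r_{21}(Y - Y_0) + r_{31}(Z - Z_0)}
                          {r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)} + \Delta x' \\
    y' &= y'_0 - c\,\frac{r_{12}(X - X_0) + r_{22}(Y - Y_0) + r_{32}(Z - Z_0)}
                          {r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)} + \Delta y'
    \end{align}
    ```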

  18. GATE simulation of a new design of pinhole SPECT system for small animal brain imaging

    NASA Astrophysics Data System (ADS)

    Uzun Ozsahin, D.; Bläckberg, L.; El Fakhri, G.; Sabet, H.

    2017-01-01

    Small-animal SPECT imaging has gained increased interest over the past decade since it is an excellent tool for developing new drugs and tracers. There is therefore a large effort on the development of cost-effective SPECT detectors with high capabilities. The aim of this study is to simulate the performance characteristics of new designs for a cost-effective, stationary SPECT system dedicated to small-animal imaging with a focus on the mouse brain. The conceptual design of this SPECT system platform, Stationary Small-Animal SPECT (SSA-SPECT), is to use many pixelated CsI:Tl detector modules with 0.4 mm × 0.4 mm pixels, in order to achieve excellent intrinsic detector resolution, where each module is backed by a single-pinhole collimator with a 0.3 mm hole diameter. In this work, we present simulation results for four variations of the SSA-SPECT platform in which the number of detector modules and the FOV size are varied while keeping the detector size and collimator hole size constant. Using the NEMA NU-4 protocol, we performed spatial resolution, sensitivity, and image quality simulations, followed by a Derenzo-like phantom evaluation. The results suggest that all four SSA-SPECT systems can provide better than 0.063% system sensitivity and <1.5 mm FWHM spatial resolution without resolution recovery or other correction techniques. Specifically, SSA-SPECT-1 showed a system sensitivity of 0.09% in combination with 1.1 mm FWHM spatial resolution.
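
    A quick plausibility check on the quoted sensitivity uses the standard on-axis pinhole efficiency g = d^2/(16 h^2). The source-to-pinhole distance h and the module count are not given in the abstract, so the values below are assumptions chosen for illustration:

    ```python
    d = 0.3e-3              # pinhole diameter [m]
    h = 25e-3               # source-to-pinhole distance [m] (assumed)
    g = d**2 / (16 * h**2)  # per-pinhole geometric efficiency
    n_modules = 100         # assumed number of pinhole/detector modules
    print(f"per pinhole: {g:.1e}; system: {100 * n_modules * g:.3f} %")
    ```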

  19. Ionospheric Simulation System for Satellite Observations and Global Assimilative Modeling Experiments (ISOGAME)

    NASA Technical Reports Server (NTRS)

    Pi, Xiaoqing; Mannucci, Anthony J.; Verkhoglyadova, Olga P.; Stephens, Philip; Wilson, Brian D.; Akopian, Vardan; Komjathy, Attila; Iijima, Byron A.

    2013-01-01

    ISOGAME is designed and developed to assess quantitatively the impact of new observation systems on the capability of imaging and modeling the ionosphere. With ISOGAME, one can perform observation system simulation experiments (OSSEs). A typical OSSE using ISOGAME would involve: (1) simulating various ionospheric conditions on global scales; (2) simulating ionospheric measurements made from a constellation of low-Earth-orbiters (LEOs), particularly Global Navigation Satellite System (GNSS) radio occultation data, and from ground-based global GNSS networks; (3) conducting ionospheric data assimilation experiments with the Global Assimilative Ionospheric Model (GAIM); and (4) analyzing modeling results with visualization tools. ISOGAME can provide quantitative assessment of the accuracy of assimilative modeling with the observation system of interest. Observation systems other than those based on GNSS can also be analyzed. The system is composed of a suite of software that combines the GAIM, including a 4D first-principles ionospheric model and data assimilation modules, an International Reference Ionosphere (IRI) model developed by the international ionospheric research community, an observation simulator, visualization software, and orbit design, simulation, and optimization software. The core GAIM model used in ISOGAME is based on the GAIM++ code (written in C++) that includes a new high-fidelity geomagnetic field representation (multi-dipole). New visualization tools and analysis algorithms for the OSSEs are now part of ISOGAME.

  20. Application of GIS Rapid Mapping Technology in Disaster Monitoring

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Tu, J.; Liu, G.; Zhao, Q.

    2018-04-01

    With the rapid development of GIS and RS technology in recent years, GIS software and its functionality have matured considerably. The parallel development of mathematical and statistical tools for spatial modeling and simulation has promoted the widespread application of quantitative methods in the field of geology. Based on field disaster investigation and the construction of a spatial database, this paper uses remote sensing imagery, DEMs, and GIS technology to obtain the data needed for disaster vulnerability analysis, and applies an information model to produce disaster risk assessment maps. Using ArcGIS software and its spatial data modeling methods, the basic data for the disaster risk mapping process were acquired and processed, and the spatial data simulation tools were used to map the disaster rapidly.

  1. Spectral imaging as a potential tool for optical sentinel lymph node biopsies

    NASA Astrophysics Data System (ADS)

    O'Sullivan, Jack D.; Hoy, Paul R.; Rutt, Harvey N.

    2011-07-01

    Sentinel Lymph Node Biopsy (SLNB) is an increasingly standard procedure to help oncologists accurately stage cancers. It is performed as an alternative to full axillary lymph node dissection in breast cancer patients, reducing the risk of long-term health problems associated with lymph node removal. Intraoperative analysis is currently performed using touch-print cytology, which can introduce significant delay into the procedure. Spectral imaging forms a multi-plane image in which reflected intensities from a number of spectral bands are recorded at each pixel in the spatial plane. We investigate the possibility of using spectral imaging to assess sentinel lymph nodes of breast cancer patients, with a view to eventually developing an optical technique that could significantly reduce the time required to perform this procedure. We take previously reported spectra of normal and metastatic tissue in the visible and near-infrared region and use them as the basis of dummy spectral images. We analyse these images using the spectral angle map (SAM), a tool routinely used in other fields where spectral imaging is prevalent. We simulate random noise in these images to determine whether the SAM can discriminate between normal and metastatic pixels as image quality deteriorates. We show that even when noise levels are up to 20% of the maximum signal, the spectral angle map can distinguish healthy pixels from metastatic ones. We believe this makes spectral imaging a good candidate for further study in the development of an optical SLNB.
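
    The SAM itself is a one-line computation: the angle between a pixel spectrum and a reference spectrum, insensitive to overall intensity scaling. A minimal version:

    ```python
    import numpy as np

    def spectral_angle(pixel, reference):
        """Angle [rad] between two spectra treated as vectors."""
        cosang = np.dot(pixel, reference) / (
            np.linalg.norm(pixel) * np.linalg.norm(reference))
        return np.arccos(np.clip(cosang, -1.0, 1.0))

    def is_metastatic(pixel, ref_normal, ref_metastatic):
        """Assign the class whose reference spectrum is angularly closer."""
        return (spectral_angle(pixel, ref_metastatic)
                < spectral_angle(pixel, ref_normal))
    ```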

  2. A Method for Medical Diagnosis Based on Optical Fluence Rate Distribution at Tissue Surface

    PubMed Central

    Hamdy, Omnia; El-Azab, Jala; Al-Saeed, Tarek A.; Hassan, Mahmoud F.

    2017-01-01

    Optical differentiation is a promising tool in biomedical diagnosis, mainly because of its safety. The values of the optical parameters of biological tissues differ according to tissue histopathology and hence can be used for differentiation. The optical fluence rate distribution on tissue boundaries depends on the optical parameters, so image displays of such distributions can provide a visual means of biomedical diagnosis. In this work, an experimental setup was implemented to measure the spatially resolved steady-state diffuse reflectance and transmittance of native and coagulated chicken liver and of native and boiled chicken breast skin under laser irradiation at wavelengths of 635 and 808 nm. From the measured values, the optical parameters of the samples were calculated in vitro using a combination of the modified Kubelka-Munk model and the Bouguer-Beer-Lambert law. The estimated optical parameter values were substituted into the diffusion equation to simulate the fluence rate at the tissue surface using the finite element method. Results were verified with Monte Carlo simulation. The results obtained showed that the diffuse reflectance curves and fluence rate distribution images can provide discrimination tools between different tissue types and hence can be used for biomedical diagnosis. PMID:28930158
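
    The classical Kubelka-Munk remission function is at the heart of the parameter-extraction step (the paper uses a modified model combined with the Bouguer-Beer-Lambert law; this shows only the textbook relation for an optically thick, diffusely scattering layer):

    ```python
    import numpy as np

    def kubelka_munk(R):
        """F(R) = (1 - R)^2 / (2 R) = K/S for diffuse reflectance R."""
        R = np.asarray(R, dtype=float)
        return (1.0 - R)**2 / (2.0 * R)

    print(kubelka_munk([0.2, 0.4, 0.6]))   # absorption-to-scattering ratios
    ```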

  3. A Method for Medical Diagnosis Based on Optical Fluence Rate Distribution at Tissue Surface.

    PubMed

    Hamdy, Omnia; El-Azab, Jala; Al-Saeed, Tarek A; Hassan, Mahmoud F; Solouma, Nahed H

    2017-09-20

    Optical differentiation is a promising tool in biomedical diagnosis, mainly because of its safety. The values of the optical parameters of biological tissues differ according to tissue histopathology and hence can be used for differentiation. The optical fluence rate distribution on tissue boundaries depends on the optical parameters, so image displays of such distributions can provide a visual means of biomedical diagnosis. In this work, an experimental setup was implemented to measure the spatially resolved steady-state diffuse reflectance and transmittance of native and coagulated chicken liver and of native and boiled chicken breast skin under laser irradiation at wavelengths of 635 and 808 nm. From the measured values, the optical parameters of the samples were calculated in vitro using a combination of the modified Kubelka-Munk model and the Bouguer-Beer-Lambert law. The estimated optical parameter values were substituted into the diffusion equation to simulate the fluence rate at the tissue surface using the finite element method. Results were verified with Monte Carlo simulation. The results obtained showed that the diffuse reflectance curves and fluence rate distribution images can provide discrimination tools between different tissue types and hence can be used for biomedical diagnosis.

  4. Validation of columnar CsI x-ray detector responses obtained with hybridMANTIS, a CPU-GPU Monte Carlo code for coupled x-ray, electron, and optical transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Diksha; Badano, Aldo

    2013-03-15

    Purpose: hybridMANTIS is a Monte Carlo package for modeling indirect x-ray imagers using columnar geometry, based on a hybrid concept that maximizes the utilization of the available CPU and graphics processing unit processors in a workstation. Methods: The authors compare hybridMANTIS x-ray response simulations to previously published MANTIS and experimental data for four cesium iodide scintillator screens. These screens have a variety of reflective and absorptive surfaces with different thicknesses. The authors analyze hybridMANTIS results in terms of the modulation transfer function and calculate root mean square differences and Swank factors from simulated and experimental results. Results: The comparison suggests that hybridMANTIS matches the experimental data better than MANTIS, especially at high spatial frequencies and for the thicker screens. hybridMANTIS simulations are much faster than MANTIS, with speed-ups of up to 5260. Conclusions: hybridMANTIS is a useful tool for improved description and optimization of image acquisition stages in medical imaging systems and for modeling the forward problem in iterative reconstruction algorithms.
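
    The Swank factor reported in such comparisons is a moment ratio of the pulse-height spectrum. A short sketch with a toy Gaussian spectrum as an assumed input:

    ```python
    import numpy as np

    def swank(x, p):
        """I = M1^2 / (M0 * M2) from a pulse-height distribution p(x)."""
        m0 = np.trapz(p, x)
        m1 = np.trapz(x * p, x)
        m2 = np.trapz(x**2 * p, x)
        return m1**2 / (m0 * m2)

    x = np.linspace(1, 1000, 1000)               # optical photons per x ray
    p = np.exp(-0.5 * ((x - 600) / 80)**2)       # toy pulse-height peak
    print(f"Swank factor = {swank(x, p):.3f}")   # ~0.98 for this toy input
    ```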

  5. Development and validation of real-time simulation of X-ray imaging with respiratory motion.

    PubMed

    Vidal, Franck P; Villard, Pierre-Frédéric

    2016-04-01

    We present a framework that combines evolutionary optimisation, soft-tissue modelling and ray tracing on the GPU to simultaneously compute the respiratory motion and X-ray imaging in real time. Our aim is to provide validated building blocks with high fidelity to closely match both human physiology and the physics of X-rays. A CPU-based set of algorithms is presented to model organ behaviour during respiration. Soft-tissue deformation is computed with an extension of the Chain Mail method. Rigid elements move according to kinematic laws. A GPU-based surface rendering method is proposed to compute the X-ray image using the Beer-Lambert law. It is provided as an open-source library. A quantitative validation study is provided to objectively assess the accuracy of both components: (i) the respiration against anatomical data, and (ii) the X-ray against the Beer-Lambert law and the results of Monte Carlo simulations. Our implementation can be used in various applications, such as an interactive medical virtual environment for training percutaneous transhepatic cholangiography in interventional radiology, 2D/3D registration, computation of digitally reconstructed radiographs, and simulation of 4D sinograms to test tomography reconstruction tools. Copyright © 2015 Elsevier Ltd. All rights reserved.
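
    The Beer-Lambert step named above has a simple discrete ray-marching form: I = I0 exp(-sum_i mu_i d_i) over the material segments a ray crosses. The attenuation values below are rough, assumed figures for illustration:

    ```python
    import numpy as np

    def beer_lambert(I0, mu, d):
        """mu: attenuation coefficients [1/cm]; d: matching path lengths [cm]."""
        return I0 * np.exp(-np.sum(np.asarray(mu) * np.asarray(d)))

    # ray crossing ~3 cm soft tissue and ~0.5 cm bone at diagnostic energies
    I = beer_lambert(1.0, mu=[0.2, 0.5], d=[3.0, 0.5])
    ```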

  6. A Computational Framework for Bioimaging Simulation.

    PubMed

    Watabe, Masaki; Arjunan, Satya N V; Fukushima, Seiya; Iwamoto, Kazunari; Kozuka, Jun; Matsuoka, Satomi; Shindo, Yuki; Ueda, Masahiro; Takahashi, Koichi

    2015-01-01

    Using bioimaging technology, biologists have attempted to identify and document analytical interpretations that underlie biological phenomena in cells. Theoretical biology aims at distilling those interpretations into knowledge in the mathematical form of biochemical reaction networks and understanding how higher-level functions emerge from the combined action of biomolecules. However, formidable challenges remain in bridging the gap between bioimaging and mathematical modeling. Generally, measurements using fluorescence microscopy systems are influenced by systematic effects that arise from the stochastic nature of biological cells, the imaging apparatus, and optical physics. Such systematic effects are present in all bioimaging systems and hinder quantitative comparison between cell models and bioimages. Computational tools for such a comparison are still unavailable. In this work, we therefore present a computational framework for handling the parameters of cell models and the optical physics governing bioimaging systems. Simulation using this framework can generate digital images of cell simulation results after accounting for the systematic effects. We then demonstrate that such a framework enables comparison at the level of photon-counting units.
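
    A minimal version of the systematic-effects chain described above: blur a ground-truth fluorophore map with the PSF, scale to expected photon counts, and add Poisson shot noise plus camera baseline and read noise. All parameter values are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(1)
    truth = np.zeros((128, 128))
    truth[rng.integers(0, 128, 40), rng.integers(0, 128, 40)] = 1.0  # molecules

    psf_sigma = 1.8                  # Gaussian PSF width [pixels] (assumed)
    photons = 500                    # expected photons per molecule (assumed)
    baseline, read_noise = 100.0, 2.0

    optical = gaussian_filter(truth, psf_sigma) * photons
    image = (rng.poisson(optical) + baseline
             + rng.normal(0.0, read_noise, optical.shape))
    ```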

  7. Light field geometry of a Standard Plenoptic Camera.

    PubMed

    Hahne, Christopher; Aggoun, Amar; Haxha, Shyqyri; Velisavljevic, Vladan; Fernández, Juan Carlos Jácome

    2014-11-03

    The Standard Plenoptic Camera (SPC) is an innovation in photography that allows two-dimensional images focused at different depths to be acquired from a single exposure. In contrast to conventional cameras, the SPC consists of a micro lens array and a main lens projecting virtual lenses into object space. For the first time, the present research provides an approach to estimate the distance and depth of refocused images extracted from captures obtained by an SPC. Furthermore, estimates for the position and baseline of the virtual lenses, which correspond to an equivalent camera array, are derived. On the basis of the paraxial approximation, a ray tracing model employing linear equations has been developed and implemented in Matlab. The optics simulation tool Zemax is utilized for validation purposes. For a realistic SPC design, experiments demonstrate that a predicted image refocusing distance of 3.5 m deviates by less than 11% from the Zemax simulation, whereas baseline estimations indicate no significant difference. Applying the proposed methodology will enable an alternative to traditional depth map acquisition by disparity analysis.
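
    The paraxial model reduces to products of 2x2 ABCD matrices. A sketch of one ray traced through the main lens and a micro lens to the sensor; all numeric values are illustrative assumptions, not the paper's design:

    ```python
    import numpy as np

    def lens(f):  return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])
    def gap(d):   return np.array([[1.0, d], [0.0, 1.0]])

    F, f_u = 50e-3, 0.5e-3     # main-lens and micro-lens focal lengths [m]
    b = 52e-3                  # main lens to micro lens array distance [m]
    ray = np.array([1e-3, 0.0])   # [height y, angle theta] at the main lens

    # rightmost matrix acts first: main lens, gap b, micro lens, gap f_u
    ray_at_sensor = gap(f_u) @ lens(f_u) @ gap(b) @ lens(F) @ ray
    ```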

  8. Medium-energy heavy-ion single-event-burnout imaging of power MOSFETs

    NASA Astrophysics Data System (ADS)

    Musseau, O.; Torres, A.; Campbell, A. B.; Knudson, A. R.; Buchner, S.; Fischer, B.; Schlogl, M.; Briand, P.

    1999-12-01

    We present the first experimental determination of the single-event-burnout (SEB) sensitive area in a power MOSFET irradiated with a high-LET heavy-ion microbeam. We used a spectroscopy technique to perform coincident measurements of the charge collected in both the source and drain junctions, together with a nondestructive technique (current limitation). The resulting charge collection images are related to the physical structure of the individual cells. These experimental data reveal the complex three-dimensional behavior of a real structure, which cannot easily be simulated using available tools. As the drain voltage is increased, the onset of burnout is reached, characterized by a sudden change in the charge collection image. "Hot spots" are observed where the collected charge reaches its maximum value. These spots, due to burnout-triggering events, correspond to areas where the silicon is degraded through thermal effects along a single ion track. This direct observation of SEB-sensitive areas has applications for device hardening, by modifying the doping profiles or the layout of the cells, and for code calibration and device simulation.

  9. A simulation-based study on the influence of beam hardening in X-ray computed tomography for dimensional metrology.

    PubMed

    Lifton, Joseph J; Malcolm, Andrew A; McBride, John W

    2015-01-01

    X-ray computed tomography (CT) is a radiographic scanning technique for visualising cross-sectional images of an object non-destructively. From these cross-sectional images it is possible to evaluate internal dimensional features of a workpiece which may otherwise be inaccessible to tactile and optical instruments. Beam hardening is a physical process that degrades the quality of CT images and has previously been suggested to influence dimensional measurements. Using a validated simulation tool, the influence of spectrum pre-filtration and beam hardening correction are evaluated for internal and external dimensional measurements. Beam hardening is shown to influence internal and external dimensions in opposition, and to have a greater influence on outer dimensions compared to inner dimensions. The results suggest the combination of spectrum pre-filtration and a local gradient-based surface determination method are able to greatly reduce the influence of beam hardening in X-ray CT for dimensional metrology.
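
    The beam-hardening effect itself can be reproduced in a few lines: for a polychromatic beam, the effective attenuation ln(I0/I)/t falls with thickness because low-energy photons are absorbed first. The spectrum bins and mu(E) values below are toy assumptions:

    ```python
    import numpy as np

    E = np.array([40.0, 60.0, 80.0, 100.0])   # energy bins [keV]
    w = np.array([0.4, 0.3, 0.2, 0.1])        # relative fluence (sums to 1)
    mu = np.array([0.7, 0.3, 0.2, 0.17])      # aluminium-like mu(E) [1/cm]

    for t in (0.5, 1.0, 2.0, 4.0):            # thickness [cm]
        I = np.sum(w * np.exp(-mu * t))
        print(f"t = {t:3.1f} cm -> mu_eff = {np.log(1.0 / I) / t:.3f} 1/cm")
    ```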

  10. Prostate-cancer diagnosis by non-invasive prostatic Zinc mapping using X-Ray Fluorescence (XRF)

    NASA Astrophysics Data System (ADS)

    Cortesi, Marco

    At present, the major screening tools for prostate cancer (PSA, DRE, TRUS) lack sensitivity and specificity, and none can distinguish between low-grade indolent cancer and high-grade lethal cancer. The situation calls for alternative approaches, with better detection sensitivity and specificity, to provide more efficient selection of patients for biopsy and possible guidance of the biopsy needles. The prime objective of the present work was the development of a novel non-invasive method and tool for promoting detection, localization, diagnosis and follow-up of prostate cancer (PCa). The method is based on in-vivo imaging of the Zn distribution in the peripheral zone of the prostate by a trans-rectal X-ray fluorescence (XRF) probe. Local Zn levels, measured in 1-4 mm³ fresh tissue biopsy segments from an extensive clinical study involving several hundred patients, showed an unambiguous correlation with the histological classification of the tissue (non-cancer or PCa), and a systematic positive correlation of the Zn-depletion level with the cancer-aggressiveness grade (Gleason classification). A detailed analysis of computer-simulated Zn-concentration images (with input parameters from clinical data) disclosed the potential of the method to provide sensitive and specific detection and localization of the lesion, its grade and its extension. Furthermore, it also yielded invaluable data on requirements, such as the image resolution and counting statistics, demanded of a trans-rectal XRF probe for in-vivo recording of prostatic-Zn maps in patients. By means of systematic table-top experiments on prostate phantoms comprising tumor-like inclusions, followed by dedicated Monte Carlo simulations, the XRF probe and its components have been designed and optimized. Multi-parameter analysis of the experimental data confirmed the simulation estimates for the XRF detection system in terms of delivered dose, counting statistics, scanning resolution, target-volume size, and the accuracy of locating small-volume tumor-like inclusions at various depths in tissue phantoms. The clinical study, the Monte Carlo simulations and the analysis of Zn-map images provided essential information and a promising vision of the potential performance of the Zn-based PCa detection concept. Simulations focusing on the medical-probe design and its performance at permissible radiation doses yielded positive results, confirmed by a series of systematic laboratory experiments with a table-top XRF system.

  11. PICS: SIMULATIONS OF STRONG GRAVITATIONAL LENSING IN GALAXY CLUSTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Nan; Gladders, Michael D.; Florian, Michael K.

    2016-09-01

    Gravitational lensing has become one of the most powerful tools available for investigating the “dark side” of the universe. Cosmological strong gravitational lensing, in particular, probes the properties of the dense cores of dark matter halos over decades in mass and offers the opportunity to study the distant universe at flux levels and spatial resolutions otherwise unavailable. Studies of strongly lensed variable sources offer even further scientific opportunities. One of the challenges in realizing the potential of strong lensing is to understand the statistical context of both the individual systems that receive extensive follow-up study, as well as that of the larger samples of strong lenses that are now emerging from survey efforts. Motivated by these challenges, we have developed an image simulation pipeline, Pipeline for Images of Cosmological Strong lensing (PICS), to generate realistic strong gravitational lensing signals from group- and cluster-scale lenses. PICS uses a low-noise and unbiased density estimator based on (resampled) Delaunay Tessellations to calculate the density field; lensed images are produced by ray-tracing images of actual galaxies from deep Hubble Space Telescope observations. Other galaxies, similarly sampled, are added to fill in the light cone. The pipeline further adds cluster member galaxies and foreground stars into the lensed images. The entire image ensemble is then observed using a realistic point-spread function that includes appropriate detector artifacts for bright stars. Noise is further added, including such non-Gaussian elements as noise window-paning from mosaiced observations, residual bad pixels, and cosmic rays. The aim is to produce simulated images that appear identical—to the eye (expert or otherwise)—to real observations in various imaging surveys.
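
    At the core of any such ray-tracing pipeline is the lens equation beta = theta - alpha(theta). For the simplest deflector, a point mass, the image positions solve a quadratic; cluster-scale lenses replace alpha with the deflection field derived from the tessellation density estimator. A minimal illustration:

    ```python
    import numpy as np

    def point_mass_images(beta, theta_E):
        """Image positions (same units as beta) for a point-mass lens."""
        disc = np.sqrt(beta**2 + 4.0 * theta_E**2)
        return (beta + disc) / 2.0, (beta - disc) / 2.0

    # source 0.3" from the optical axis, Einstein radius 1.0"
    theta_plus, theta_minus = point_mass_images(beta=0.3, theta_E=1.0)
    ```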

  12. PICS: SIMULATIONS OF STRONG GRAVITATIONAL LENSING IN GALAXY CLUSTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Nan; Gladders, Michael D.; Rangel, Esteban M.

    2016-08-29

    Gravitational lensing has become one of the most powerful tools available for investigating the “dark side” of the universe. Cosmological strong gravitational lensing, in particular, probes the properties of the dense cores of dark matter halos over decades in mass and offers the opportunity to study the distant universe at flux levels and spatial resolutions otherwise unavailable. Studies of strongly lensed variable sources offer even further scientific opportunities. One of the challenges in realizing the potential of strong lensing is to understand the statistical context of both the individual systems that receive extensive follow-up study, as well as that of the larger samples of strong lenses that are now emerging from survey efforts. Motivated by these challenges, we have developed an image simulation pipeline, Pipeline for Images of Cosmological Strong lensing (PICS), to generate realistic strong gravitational lensing signals from group- and cluster-scale lenses. PICS uses a low-noise and unbiased density estimator based on (resampled) Delaunay Tessellations to calculate the density field; lensed images are produced by ray-tracing images of actual galaxies from deep Hubble Space Telescope observations. Other galaxies, similarly sampled, are added to fill in the light cone. The pipeline further adds cluster member galaxies and foreground stars into the lensed images. The entire image ensemble is then observed using a realistic point-spread function that includes appropriate detector artifacts for bright stars. Noise is further added, including such non-Gaussian elements as noise window-paning from mosaiced observations, residual bad pixels, and cosmic rays. The aim is to produce simulated images that appear identical—to the eye (expert or otherwise)—to real observations in various imaging surveys.

  13. Three-dimensional multimodality fusion imaging as an educational and planning tool for deep-seated meningiomas.

    PubMed

    Sato, Mitsuru; Tateishi, Kensuke; Murata, Hidetoshi; Kin, Taichi; Suenaga, Jun; Takase, Hajime; Yoneyama, Tomohiro; Nishii, Toshiaki; Tateishi, Ukihide; Yamamoto, Tetsuya; Saito, Nobuhito; Inoue, Tomio; Kawahara, Nobutaka

    2018-06-26

    The utility of surgical simulation with three-dimensional multimodality fusion imaging (3D-MFI) has been demonstrated. However, its potential in deep-seated brain lesions remains unknown. The aim of this study was to investigate the impact of 3D-MFI on operations for deep-seated meningiomas. Fourteen patients with deeply located meningiomas were included in this study. We constructed 3D-MFIs by fusing high-resolution magnetic resonance (MR) and computed tomography (CT) images with a rotational digital subtraction angiogram (DSA) in all patients. The surgical procedure was simulated with 3D-MFI prior to the operation. To assess the impact on neurosurgical education, objective ratings of surgical simulation by 3D-MFI/virtual reality (VR) video were evaluated. To validate the quality of the 3D-MFIs, intraoperative findings were compared. The identification rate (IR) and positive predictive value (PPV) for the tumor-feeding arteries and the involved perforating arteries and veins were also assessed for quality assessment of 3D-MFI. After surgical simulation with 3D-MFI, near-total resection was achieved in 13 of 14 (92.9%) patients without neurological complications. 3D-MFI contributed significantly to neurosurgical residents'/fellows' understanding of the surgical anatomy and optimal surgical view (p < .0001), and to learning how to preserve critical vessels (p < .0001) and resect tumors safely and extensively (p < .0001). The IR of 3D-MFI for tumor-feeding arteries and for perforating arteries and veins was 100% and 92.9%, respectively. The PPV of 3D-MFI for tumor-feeding arteries and for perforating arteries and veins was 98.8% and 76.5%, respectively. 3D-MFI contributed to learning skull-base meningioma surgery and provided high quality for identifying critical anatomical structures within or adjacent to deep-seated meningiomas. Thus, 3D-MFI is a promising educational and surgical planning tool for meningiomas in deep-seated regions.

  14. The cutting edge - Micro-CT for quantitative toolmark analysis of sharp force trauma to bone.

    PubMed

    Norman, D G; Watson, D G; Burnett, B; Fenne, P M; Williams, M A

    2018-02-01

    Toolmark analysis involves examining marks created on an object to identify the likely tool responsible for creating those marks (e.g., a knife). Although a potentially powerful forensic tool, knife mark analysis is still in its infancy, and the validation of imaging techniques as well as quantitative approaches is ongoing. This study builds on previous work by simulating real-world stabbings experimentally and statistically exploring quantitative toolmark properties, such as cut mark angle captured by micro-CT imaging, to predict the knife responsible. In Experiment 1 a mechanical stab rig and two knives were used to create 14 knife cut marks on dry pig ribs. The toolmarks were laser and micro-CT scanned to allow for quantitative measurements of numerous toolmark properties. The findings from Experiment 1 demonstrated that the two knives produced statistically different cut mark widths, wall angles and shapes. Experiment 2 examined knife marks created on fleshed pig torsos under conditions designed to better simulate real-world stabbings. Eight knives were used to generate 64 incision cut marks that were also micro-CT scanned. Statistical exploration of these cut marks suggested that knife type, serrated or plain, can be predicted from cut mark width and wall angle. Preliminary results suggest that knife type can be predicted from cut mark width, and that knife edge thickness correlates with cut mark width. An additional 16 cut mark walls were imaged for striation marks using scanning electron microscopy, with results suggesting that this approach might not be useful for knife mark analysis. Results also indicated that observer judgements of cut mark shape were more consistent when rated from micro-CT images than from light microscopy images. The potential to combine micro-CT data, medical grade CT data and photographs to develop highly realistic virtual models for visualisation and 3D printing is also demonstrated. This is the first study to statistically explore simulated real-world knife marks imaged by micro-CT, demonstrating the potential of quantitative approaches in knife mark analysis. The findings and methods presented in this study are relevant to both forensic toolmark researchers and practitioners. Limitations of the experimental methodologies and imaging techniques are discussed, and further work is recommended. Copyright © 2017 Elsevier B.V. All rights reserved.
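
    The "predict the knife from mark properties" step is, in statistical terms, a two-class classification on measured features. A sketch with synthetic placeholder data (the study's own measurements are not reproduced here):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    width = np.r_[rng.normal(1.2, 0.2, 32), rng.normal(0.8, 0.2, 32)]    # mm
    angle = np.r_[rng.normal(35.0, 5.0, 32), rng.normal(25.0, 5.0, 32)]  # deg
    X = np.c_[width, angle]
    y = np.r_[np.ones(32), np.zeros(32)]      # 1 = serrated, 0 = plain

    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[1.1, 33.0]]))         # classify a new cut mark
    ```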

  15. The Characterization of a DIRSIG Simulation Environment to Support the Inter-Calibration of Spaceborne Sensors

    NASA Technical Reports Server (NTRS)

    Ambeau, Brittany L.; Gerace, Aaron D.; Montanaro, Matthew; McCorkel, Joel

    2016-01-01

    Climate change studies require long-term, continuous records that extend beyond the lifetime, and the temporal resolution, of a single remote sensing satellite sensor. The inter-calibration of spaceborne sensors is therefore desired to provide spatially, spectrally, and temporally homogeneous datasets. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool is a first principle-based synthetic image generation model that has the potential to characterize the parameters that impact the accuracy of the inter-calibration of spaceborne sensors. To demonstrate the potential utility of the model, we compare the radiance observed in real image data to the radiance observed in simulated imagery from DIRSIG. In the present work, a synthetic landscape of the Algodones Sand Dunes System is created. The terrain is facetized using a 2-meter digital elevation model generated from NASA Goddard's LiDAR, Hyperspectral, and Thermal (G-LiHT) imager. The material spectra are assigned using hyperspectral measurements of sand collected from the Algodones Sand Dunes System. Lastly, the bidirectional reflectance distribution function (BRDF) properties are assigned to the modeled terrain using the Moderate Resolution Imaging Spectroradiometer (MODIS) BRDF product in conjunction with DIRSIG's Ross-Li capability. The results of this work indicate that DIRSIG is in good agreement with real image data. The potential sources of residual error are identified and the possibilities for future work are discussed.
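
    The Ross-Li step assigns reflectance through the kernel-driven BRDF form used by the MODIS product: an isotropic term plus volumetric and geometric-optical kernels weighted by the retrieved parameters,

    ```latex
    \begin{equation}
    R(\theta_i, \theta_v, \phi, \lambda) =
        f_{\mathrm{iso}}(\lambda)
      + f_{\mathrm{vol}}(\lambda)\, K_{\mathrm{vol}}(\theta_i, \theta_v, \phi)
      + f_{\mathrm{geo}}(\lambda)\, K_{\mathrm{geo}}(\theta_i, \theta_v, \phi)
    \end{equation}
    ```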

  17. Real-time scene and signature generation for ladar and imaging sensors

    NASA Astrophysics Data System (ADS)

    Swierkowski, Leszek; Christie, Chad L.; Antanovskii, Leonid; Gouthas, Efthimios

    2014-05-01

    This paper describes the development of two key functionalities within the VIRSuite scene simulation program, broadening its scene generation capabilities and increasing the accuracy of its thermal signatures. Firstly, a new LADAR scene generation module has been designed. It is capable of simulating range imagery for Geiger-mode LADAR, in addition to the already existing functionality for linear-mode systems. Furthermore, a new 3D heat diffusion solver has been developed within the VIRSuite signature prediction module. It is capable of calculating the temperature distribution in complex three-dimensional objects for enhanced dynamic prediction of thermal signatures. With these enhancements, VIRSuite is now a robust tool for conducting dynamic simulations for missiles with multi-mode seekers.
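
    VIRSuite's solver itself is not public; as a sketch of the underlying technique only, here is one explicit finite-difference step of 3D heat diffusion on a uniform grid, assuming constant diffusivity and fixed (Dirichlet) boundaries.

        import numpy as np

        def diffuse_step(T, alpha, dx, dt):
            # Explicit update of interior nodes; boundary values stay fixed.
            # Stability requires dt <= dx**2 / (6 * alpha).
            lap = np.zeros_like(T)
            lap[1:-1, 1:-1, 1:-1] = (
                T[2:, 1:-1, 1:-1] + T[:-2, 1:-1, 1:-1] +
                T[1:-1, 2:, 1:-1] + T[1:-1, :-2, 1:-1] +
                T[1:-1, 1:-1, 2:] + T[1:-1, 1:-1, :-2] -
                6.0 * T[1:-1, 1:-1, 1:-1]) / dx**2
            return T + alpha * dt * lap

        # Example: a hot voxel relaxing inside a 32**3 block.
        T = np.zeros((32, 32, 32))
        T[16, 16, 16] = 100.0
        alpha, dx = 1e-5, 0.01
        dt = 0.5 * dx**2 / (6 * alpha)
        for _ in range(100):
            T = diffuse_step(T, alpha, dx, dt)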

  18. Virtual Reality in the Assessment and Treatment of Weight-Related Disorders.

    PubMed

    Wiederhold, Brenda K; Riva, Giuseppe; Gutiérrez-Maldonado, José

    2016-02-01

    Virtual Reality (VR) has, for the past two decades, proven to be a useful adjunctive tool for both assessment and treatment of patients with eating disorders and obesity. VR allows an individual to enter scenarios that simulate real-life situations and to encounter food cues known to trigger his/her disordered eating behavior. As well, VR enables three-dimensional figures of the patient's body to be presented, helping him/her to reach an awareness of body image distortion and then providing the opportunity to confront and correct distortions, resulting in a more realistic body image and a decrease in body image dissatisfaction. In this paper, we describe seminal studies in this research area.

  19. Muon tomography imaging algorithms for nuclear threat detection inside large volume containers with the Muon Portal detector

    NASA Astrophysics Data System (ADS)

    Riggi, S.; Antonuccio-Delogu, V.; Bandieramonte, M.; Becciani, U.; Costa, A.; La Rocca, P.; Massimino, P.; Petta, C.; Pistagna, C.; Riggi, F.; Sciacca, E.; Vitello, F.

    2013-11-01

    Muon tomographic visualization techniques aim to reconstruct a 3D image as close as possible to the real locations of the objects being probed. The statistical algorithms under test for the reconstruction of muon tomographic images in the Muon Portal Project are discussed here. Autocorrelation analysis and clustering algorithms have been employed within the context of methods based on the Point Of Closest Approach (POCA) reconstruction tool. An iterative method based on the log-likelihood approach was also implemented. The relative merits of all these methods are discussed, with reference to full GEANT4 simulations of different scenarios incorporating medium- and high-Z objects inside a container.
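
    The POCA method places the scattering vertex at the midpoint of the shortest segment joining the incoming and outgoing muon tracks. A minimal sketch, assuming each track is supplied as a point and a direction:

        import numpy as np

        def poca(p1, d1, p2, d2):
            """Point of closest approach between two 3D lines (point p, direction d)."""
            d1 = d1 / np.linalg.norm(d1)
            d2 = d2 / np.linalg.norm(d2)
            w0 = p1 - p2
            a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
            d, e = d1 @ w0, d2 @ w0
            denom = a * c - b * b
            if abs(denom) < 1e-12:        # near-parallel tracks: no unique POCA
                return None
            s = (b * e - c * d) / denom
            t = (a * e - b * d) / denom
            return 0.5 * ((p1 + s * d1) + (p2 + t * d2))  # midpoint = scatter estimate

        # Incoming and outgoing tracks from an upper/lower detector pair.
        print(poca(np.array([0.0, 0.0, 1.0]), np.array([0.1, 0.0, -1.0]),
                   np.array([0.2, 0.0, -1.0]), np.array([0.3, 0.0, -1.0])))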

  20. MARS Science Laboratory Post-Landing Location Estimation Using Post2 Trajectory Simulation

    NASA Technical Reports Server (NTRS)

    Davis, J. L.; Shidner, Jeremy D.; Way, David W.

    2013-01-01

    The Mars Science Laboratory (MSL) Curiosity rover landed safely on Mars August 5th, 2012 at 10:32 PDT, Earth Received Time. Immediately following touchdown confirmation, best estimates of position were calculated to assist in determining official MSL locations during entry, descent and landing (EDL). Additionally, estimated balance mass impact locations were provided and used to assess how predicted locations compared to actual locations. For MSL, the Program to Optimize Simulated Trajectories II (POST2) was the primary trajectory simulation tool used to predict and assess EDL performance from cruise stage separation through rover touchdown and descent stage impact. This POST2 simulation was used during MSL operations for EDL trajectory analyses in support of maneuver decisions and imaging MSL during EDL. This paper presents the simulation methodology used and results of pre/post-landing MSL location estimates and associated imagery from the Mars Reconnaissance Orbiter's (MRO) High Resolution Imaging Science Experiment (HiRISE) camera. To generate these estimates, the MSL POST2 simulation nominal and Monte Carlo data, flight telemetry from onboard navigation, relay orbiter positions from MRO and Mars Odyssey, and HiRISE-generated digital elevation models (DEM) were utilized. A comparison of predicted rover and balance mass location estimates against actual locations is also presented.

  1. Some effects on SPM based surface measurement

    NASA Astrophysics Data System (ADS)

    Wenhao, Huang; Yuhang, Chen

    2005-01-01

    The scanning probe microscope (SPM) has become a powerful tool for nanotechnology, especially in surface nanometrology. However, SPM measurements of surfaces are prone to false images and modifications caused by the complex interaction between the SPM tip and the surface; these artifacts originate not only from the tip material and shape but also from the structure of the sample. Considerable attention has therefore been devoted to extracting true surface information from SPM images. In this paper, we present some simulation methods and reconstruction examples for microstructures and surface roughness based on SPM measurement. For example, in AFM measurement we consider the effects of tip shape and dimension, as well as the surface topography distribution in both height and space. Some simulation results are compared with other measurement methods to verify their reliability.
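
    One widely used reconstruction approach (not necessarily the one applied in this paper) models AFM imaging as a grayscale dilation of the true surface by the tip shape and recovers an upper-bound surface estimate by erosion. A hedged sketch with a hypothetical parabolic tip:

        import numpy as np
        from scipy.ndimage import grey_dilation, grey_erosion

        # Hypothetical parabolic tip (apex at 0), radius R, on a small window.
        R, half = 20.0, 5
        y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
        tip = -(x**2 + y**2) / (2.0 * R)

        surface = np.random.rand(64, 64)                  # stand-in true surface
        measured = grey_dilation(surface, structure=tip)  # imaging ~ dilation by tip
        estimate = grey_erosion(measured, structure=tip)  # erosion: upper-bound estimate
        # 'estimate' bounds the true surface from above; features narrower than
        # the tip remain unresolved, which is exactly the artifact discussed above.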

  2. Temporal Variability of Observed and Simulated Hyperspectral Earth Reflectance

    NASA Technical Reports Server (NTRS)

    Roberts, Yolanda; Pilewskie, Peter; Kindel, Bruce; Feldman, Daniel; Collins, William D.

    2012-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) is a climate observation system designed to study Earth's climate variability with unprecedented absolute radiometric accuracy and SI traceability. Observation System Simulation Experiments (OSSEs) were developed using GCM output and MODTRAN to simulate CLARREO reflectance measurements during the 21st century as a design tool for the CLARREO hyperspectral shortwave imager. With OSSE simulations of hyperspectral reflectance, Feldman et al. [2011a,b] found that shortwave reflectance is able to detect changes in climate variables during the 21st century and improve time-to-detection compared to broadband measurements. The OSSE has been a powerful tool in the design of the CLARREO imager and for understanding the effect of climate change on the spectral variability of reflectance, but it is important to evaluate how well the OSSE simulates the Earth's present-day spectral variability. For this evaluation we have used hyperspectral reflectance measurements from the Scanning Imaging Absorption Spectrometer for Atmospheric Cartography (SCIAMACHY), a shortwave spectrometer that was operational between March 2002 and April 2012. To study the spectral variability of SCIAMACHY-measured and OSSE-simulated reflectance, we used principal component analysis (PCA), a spectral decomposition technique that identifies dominant modes of variability in a multivariate data set. Using quantitative comparisons of the OSSE and SCIAMACHY PCs, we have quantified how well the OSSE captures the spectral variability of Earth's climate system at the beginning of the 21st century relative to SCIAMACHY measurements. These results showed that the OSSE and SCIAMACHY data sets share over 99% of their total variance in 2004. Using the PCs and the temporally distributed reflectance spectra projected onto the PCs (PC scores), we can study the temporal variability of the observed and simulated reflectance spectra. Multivariate time series analysis of the PC scores using techniques such as Singular Spectrum Analysis (SSA) and Multichannel SSA will provide information about the temporal variability of the dominant variables. Quantitative comparison techniques can evaluate how well the OSSE reproduces the temporal variability observed by SCIAMACHY spectral reflectance measurements during the first decade of the 21st century. PCA of OSSE-simulated reflectance can also be used to study how the dominant spectral variables change on centennial scales for forced and unforced climate change scenarios. To have confidence in OSSE predictions of the spectral variability of hyperspectral reflectance, it is first necessary for us to evaluate the degree to which the OSSE simulations are able to reproduce the Earth's present-day spectral variability.
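
    As a sketch of the decomposition step described above, PCA of a matrix of spectra via the SVD, returning the dominant modes, their PC scores, and explained-variance fractions (the random matrix is only a stand-in for real reflectance data):

        import numpy as np

        def pca(spectra, n_modes=6):
            """PCA of spectra (rows = observations, columns = wavelengths)."""
            X = spectra - spectra.mean(axis=0)      # remove the mean spectrum
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            var = s**2 / (X.shape[0] - 1)           # variance along each mode
            explained = var / var.sum()
            scores = X @ Vt.T[:, :n_modes]          # projections (PC scores)
            return Vt[:n_modes], scores, explained[:n_modes]

        spectra = np.random.rand(500, 240)          # stand-in hyperspectral dataset
        pcs, scores, frac = pca(spectra)
        print("variance captured by the first 6 modes:", frac.sum())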

  3. Molecular Optical Simulation Environment (MOSE): A Platform for the Simulation of Light Propagation in Turbid Media

    PubMed Central

    Ren, Shenghan; Chen, Xueli; Wang, Hailong; Qu, Xiaochao; Wang, Ge; Liang, Jimin; Tian, Jie

    2013-01-01

    The study of light propagation in turbid media has attracted extensive attention in the field of biomedical optical molecular imaging. In this paper, we present a software platform for the simulation of light propagation in turbid media named the “Molecular Optical Simulation Environment (MOSE)”. Based on the gold standard of the Monte Carlo method, MOSE simulates light propagation both in tissues with complicated structures and through free-space. In particular, MOSE synthesizes realistic data for bioluminescence tomography (BLT), fluorescence molecular tomography (FMT), and diffuse optical tomography (DOT). The user-friendly interface and powerful visualization tools facilitate data analysis and system evaluation. As a major measure for resource sharing and reproducible research, MOSE aims to provide freeware for research and educational institutions, which can be downloaded at http://www.mosetm.net. PMID:23577215
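
    Independently of MOSE's implementation, the core Monte Carlo idea is a weighted photon random walk with exponentially distributed free paths. A deliberately simplified sketch with isotropic scattering (real codes typically use a Henyey-Greenstein phase function and proper boundary handling):

        import numpy as np

        rng = np.random.default_rng(0)
        mu_a, mu_s = 0.1, 10.0        # absorption / scattering coefficients (1/mm)
        mu_t = mu_a + mu_s
        albedo = mu_s / mu_t

        def launch_photon(max_steps=1000):
            pos = np.zeros(3)
            direction = np.array([0.0, 0.0, 1.0])
            weight = 1.0
            for _ in range(max_steps):
                step = -np.log(rng.random()) / mu_t   # exponential free path
                pos = pos + step * direction
                weight *= albedo                      # absorbed fraction deposited
                if weight < 1e-4:                     # (Russian roulette would go here)
                    break
                v = rng.normal(size=3)                # isotropic re-scatter
                direction = v / np.linalg.norm(v)
            return pos, weight

        depths = np.array([launch_photon()[0][2] for _ in range(1000)])
        print("mean depth of terminated photons (mm):", depths.mean())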

  4. Targeted delivery of cancer-specific multimodal contrast agents for intraoperative detection of tumor boundaries and therapeutic margins

    NASA Astrophysics Data System (ADS)

    Xu, Ronald X.; Xu, Jeff S.; Huang, Jiwei; Tweedle, Michael F.; Schmidt, Carl; Povoski, Stephen P.; Martin, Edward W.

    2010-02-01

    Background: Accurate assessment of tumor boundaries and intraoperative detection of therapeutic margins are important oncologic principles for minimal recurrence rates and improved long-term outcomes. However, many existing cancer imaging tools are based on preoperative image acquisition and do not provide real-time intraoperative information that supports critical decision-making in the operating room. Method: Poly lactic-co-glycolic acid (PLGA) microbubbles (MBs) and nanobubbles (NBs) were synthesized by a modified double emulsion method. The MB and NB surfaces were conjugated with CC49 antibody to target TAG-72 antigen, a human glycoprotein complex expressed in many epithelial-derived cancers. Multiple imaging agents were encapsulated in MBs and NBs for multimodal imaging. Both one-step and multi-step cancer targeting strategies were explored. Active MBs/NBs were also fabricated for therapeutic margin assessment in cancer ablation therapies. Results: The multimodal contrast agents and the cancer-targeting strategies were tested on tissue simulating phantoms, LS174 colon cancer cell cultures, and cancer xenograft nude mice. Concurrent multimodal imaging was demonstrated using fluorescence and ultrasound imaging modalities. Technical feasibility of using active MBs and portable imaging tools such as ultrasound for intraoperative therapeutic margin assessment was demonstrated in a biological tissue model. Conclusion: The cancer-specific multimodal contrast agents described in this paper have the potential for intraoperative detection of tumor boundaries and therapeutic margins.

  5. Molecular dynamics and dynamic Monte-Carlo simulation of irradiation damage with focused ion beams

    NASA Astrophysics Data System (ADS)

    Ohya, Kaoru

    2017-03-01

    The focused ion beam (FIB) has become an important tool for micro- and nanostructuring of samples, including milling, deposition and imaging. However, this leads to damage of the surface on the nanometer scale from implanted projectile ions and recoiled material atoms. It is therefore important to investigate each kind of damage quantitatively. We present a dynamic Monte-Carlo (MC) simulation code to simulate the morphological and compositional changes of a multilayered sample under ion irradiation, and a molecular dynamics (MD) simulation code to simulate dose-dependent changes in the backscattering-ion (BSI)/secondary-electron (SE) yields of a crystalline sample. Recent progress in applying the codes to simulate the surface morphology and Mo/Si layer intermixing in an EUV lithography mask irradiated with FIBs, and the effect of crystalline orientation on BSI and SE yields in relation to the channeling contrast in scanning ion microscopes, is also presented.

  6. Fourth-order partial differential equation noise removal on welding images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halim, Suhaila Abd; Ibrahim, Arsmah; Sulong, Tuan Nurul Norazura Tuan

    2015-10-22

    Partial differential equations (PDEs) have become an important topic in mathematics and are widely used in various fields, including image denoising in the image analysis field. In this paper, a fourth-order PDE is discussed and implemented as a denoising method for digital images. The fourth-order PDE is solved computationally using a finite difference approach and then applied to a set of digital radiographic images with welding defects. The performance of the discretized model is evaluated using the Peak Signal to Noise Ratio (PSNR). Simulation is carried out on the discretized model at different levels of Gaussian noise in order to obtain the maximum PSNR value. The convergence criterion chosen to determine the required number of iterations is the highest PSNR value. Results obtained show that the fourth-order PDE model produces promising results as an image denoising tool compared with the median filter.
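
    The abstract does not name the specific fourth-order model; assuming a You-Kaveh-type equation as a representative choice, here is a sketch of one explicit iteration together with the PSNR metric used for the stopping rule:

        import numpy as np

        def laplacian(u):
            return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                    np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

        def fourth_order_step(u, k=10.0, dt=0.1):
            # u_t = -laplacian(c(|lap u|) * lap u), with c(s) = 1 / (1 + (s/k)**2).
            lap = laplacian(u)
            c = 1.0 / (1.0 + (np.abs(lap) / k) ** 2)
            return u - dt * laplacian(c * lap)

        def psnr(clean, test, peak=255.0):
            mse = np.mean((clean.astype(float) - test.astype(float)) ** 2)
            return 10.0 * np.log10(peak**2 / mse)

        clean = np.tile(np.linspace(0, 255, 128), (128, 1))   # stand-in radiograph
        noisy = clean + np.random.normal(0, 15, clean.shape)  # add Gaussian noise
        u, best = noisy.copy(), -np.inf
        for _ in range(200):          # keep the iterate with the highest PSNR
            u = fourth_order_step(u)
            best = max(best, psnr(clean, u))
        print("best PSNR (dB):", round(best, 2))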

  7. Segmentation and Tracking of Cytoskeletal Filaments Using Open Active Contours

    PubMed Central

    Smith, Matthew B.; Li, Hongsheng; Shen, Tian; Huang, Xiaolei; Yusuf, Eddy; Vavylonis, Dimitrios

    2010-01-01

    We use open active contours to quantify cytoskeletal structures imaged by fluorescence microscopy in two and three dimensions. We developed an interactive software tool for segmentation, tracking, and visualization of individual fibers. Open active contours are parametric curves that deform to minimize the sum of an external energy derived from the image and an internal bending and stretching energy. The external energy generates (i) forces that attract the contour toward the central bright line of a filament in the image, and (ii) forces that stretch the active contour toward the ends of bright ridges. Images of simulated semiflexible polymers with known bending and torsional rigidity are analyzed to validate the method. We apply our methods to quantify the conformations and dynamics of actin in two examples: actin filaments imaged by TIRF microscopy in vitro, and actin cables in fission yeast imaged by spinning disk confocal microscopy. PMID:20814909
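
    A minimal sketch of one explicit update of a discrete open snake: stretching and bending forces from finite differences (left at zero on the open ends), plus an external force sampled from the smoothed image gradient. Parameter values are illustrative, not those of the authors' tool.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def snake_step(pts, gy, gx, alpha=0.05, beta=0.01, gamma=0.5):
            """pts: (N, 2) open polyline in (row, col); gy, gx: image gradient."""
            stretch = np.zeros_like(pts)
            stretch[1:-1] = pts[2:] - 2 * pts[1:-1] + pts[:-2]
            bend = np.zeros_like(pts)
            bend[2:-2] = (pts[4:] - 4 * pts[3:-1] + 6 * pts[2:-2]
                          - 4 * pts[1:-3] + pts[:-4])
            idx = np.clip(np.round(pts).astype(int), 0, np.array(gy.shape) - 1)
            ext = np.stack([gy[idx[:, 0], idx[:, 1]],
                            gx[idx[:, 0], idx[:, 1]]], axis=1)
            return pts + gamma * (alpha * stretch - beta * bend + ext)

        image = np.zeros((100, 100))
        image[50, 10:90] = 1.0                       # a bright filament
        image = gaussian_filter(image, 3.0)          # widen its basin of attraction
        gy, gx = np.gradient(image)
        pts = np.stack([np.full(40, 45.0), np.linspace(15, 85, 40)], axis=1)
        for _ in range(200):                         # contour drifts onto the ridge
            pts = snake_step(pts, gy, gx)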

  8. Design and performance evaluation of the imaging payload for a remote sensing satellite

    NASA Astrophysics Data System (ADS)

    Abolghasemi, Mojtaba; Abbasi-Moghadam, Dariush

    2012-11-01

    In this paper an analysis method and corresponding analytical tools for the design of the experimental imaging payload (IMPL) of a remote sensing satellite (SINA-1) are presented. We begin with top-level customer system performance requirements and constraints, derive the critical system and component parameters, and then analyze imaging payload performance until a preliminary design that meets customer requirements is reached. We consider the system parameters and components composing the image chain for the imaging payload system, which includes aperture, focal length, field of view, image plane dimensions, pixel dimensions, detection quantum efficiency, and optical filter requirements. The performance analysis is accomplished by calculating the imaging payload's SNR (signal-to-noise ratio) and imaging resolution. The noise components include photon noise due to the signal scene and atmospheric background, cold shield, out-of-band optical filter leakage and electronic noise. System resolution is simulated through cascaded modulation transfer functions (MTFs) and includes effects due to optics, image sampling, and system motion. Calculation results for the SINA-1 satellite are also presented.
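
    As a sketch of the cascaded-MTF calculation, the system MTF modeled as the product of diffraction-limited optics, detector-aperture, and linear-motion-smear terms. The wavelength, f-number, pixel pitch, and smear values are placeholders, not SINA-1 parameters.

        import numpy as np

        f = np.linspace(0.0, 60.0, 200)   # spatial frequency (cycles/mm)

        def mtf_optics(f, wavelength_mm=0.55e-3, f_number=8.0):
            fc = 1.0 / (wavelength_mm * f_number)      # diffraction cutoff
            x = np.clip(f / fc, 0.0, 1.0)
            return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x**2))

        def mtf_detector(f, pitch_mm=0.010):
            return np.abs(np.sinc(f * pitch_mm))       # square-aperture footprint

        def mtf_motion(f, smear_mm=0.005):
            return np.abs(np.sinc(f * smear_mm))       # linear smear during integration

        mtf_system = mtf_optics(f) * mtf_detector(f) * mtf_motion(f)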

  9. Calibrating a hydraulic model using water levels derived from time series high-resolution Radarsat-2 synthetic aperture radar images and elevation data

    NASA Astrophysics Data System (ADS)

    Trudel, M.; Desrochers, N.; Leconte, R.

    2017-12-01

    Knowledge of the water extent (WE) and level (WL) of rivers is necessary to calibrate and validate hydraulic models and thus to better simulate and forecast floods. Synthetic aperture radar (SAR) has demonstrated its potential for delineating water bodies, as the backscattering of water is much lower than that of other natural surfaces. The ability of SAR to obtain information despite cloud cover makes it an interesting tool for temporal monitoring of water bodies. The delineation of WE combined with a high-resolution digital terrain model (DTM) allows WL to be extracted. However, most research using SAR data to calibrate hydraulic models has been carried out using one or two images. The objective of this study is to use WL derived from a time series of high-resolution Radarsat-2 SAR images for the calibration of a 1-D hydraulic model (HEC-RAS). Twenty high-resolution (5 m) Radarsat-2 images were acquired over a 40 km reach of the Athabasca River, in northern Alberta, Canada, between 2012 and 2016, covering both low and high flow regimes. A high-resolution (2 m) DTM was generated by combining information from LIDAR data and bathymetry acquired between 2008 and 2016 by boat surveying. The HEC-RAS model was implemented on the Athabasca River to simulate WL using cross-sections spaced by 100 m. An image histogram thresholding method was applied to each Radarsat-2 image to derive WE. The WE was then compared against each cross-section to identify those where the slope of the banks is not too abrupt and which are therefore amenable to extracting WL. 139 observations of WL at different locations along the river reach, together with streamflow measurements, were used to calibrate the HEC-RAS model. The RMSE between SAR-derived and simulated WL is under 0.35 m. Validation was performed using in situ observations of WL measured in 2008, 2012 and 2016. The RMSE between the simulated water levels calibrated with SAR images and in situ observations is less than 0.20 m. In addition, a critical success index (CSI) was computed to compare the WE simulated by HEC-RAS and that derived from the SAR images. The CSI is higher than 0.85 for each date, which means that the simulated WE is highly similar to the WE derived from the SAR images. Thereby, the results of our analysis indicate that calibration of a hydraulic model can be performed from WL derived from a time series of high-resolution SAR images.
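
    The CSI compares two binary water masks as hits / (hits + misses + false alarms); a minimal sketch:

        import numpy as np

        def critical_success_index(simulated, observed):
            sim = np.asarray(simulated, bool)
            obs = np.asarray(observed, bool)
            hits = np.sum(sim & obs)
            misses = np.sum(~sim & obs)
            false_alarms = np.sum(sim & ~obs)
            return hits / float(hits + misses + false_alarms)

        # Toy masks: simulated extent slightly wider than the SAR-derived one.
        obs = np.zeros((100, 100), bool); obs[:, 40:60] = True
        sim = np.zeros((100, 100), bool); sim[:, 38:61] = True
        print(critical_success_index(sim, obs))   # 20/23 ~ 0.87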

  10. Changing learning with new interactive and media-rich instruction environments: virtual labs case study report.

    PubMed

    Huang, Camillan

    2003-01-01

    Technology has created a new dimension for visual teaching and learning with web-delivered interactive media. The Virtual Labs Project has embraced this technology with the instructional design and evaluation methodologies behind the simPHYSIO suite of simulation-based, online interactive teaching modules in physiology for Stanford students. In addition, simPHYSIO provides the convenience of anytime web access and a modular structure that allows for personalization and customization of the learning material. This innovative tool provides a solid delivery and pedagogical backbone that can be applied to developing an interactive simulation-based training tool for the use and management of the Picture Archiving and Communication System (PACS) image information system. The disparity in knowledge between health and IT professionals can be bridged by providing convenient modular teaching tools to fill the gaps in knowledge. An innovative method for teaching the whole PACS is deemed necessary for its successful implementation and operation, since PACS has become widely distributed with many interfaces, components, and customizations. This paper discusses the techniques for developing an interactive teaching tool, a case study of its implementation, and a perspective on applying this approach to an online PACS training tool. Copyright 2002 Elsevier Science Ltd.

  11. TU-C-17A-03: An Integrated Contour Evaluation Software Tool Using Supervised Pattern Recognition for Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, H; Tan, J; Kavanaugh, J

    Purpose: Radiotherapy (RT) contours delineated either manually or semiautomatically require verification before clinical usage. Manual evaluation is very time consuming. A new integrated software tool using supervised pattern contour recognition was thus developed to facilitate this process. Methods: The contouring tool was developed using an object-oriented programming language, C#, and application programming interfaces, e.g. the visualization toolkit (VTK). The C# language served as the tool design basis. The Accord.Net scientific computing libraries were utilized for the required statistical data processing and pattern recognition, while VTK was used to build and render 3-D mesh models from critical RT structures in real-time and with 360° visualization. Principal component analysis (PCA) was used for system self-updating of geometry variations of normal structures, based on physician-approved RT contours as a training dataset. The in-house supervised PCA-based contour recognition method was used for automatically evaluating contour normality/abnormality. The function for reporting the contour evaluation results was implemented using C# and the Windows Form Designer. Results: The software input was RT simulation images and RT structures from commercial clinical treatment planning systems. Several abilities were demonstrated: automatic assessment of RT contours, file loading/saving of various modality medical images and RT contours, and generation/visualization of 3-D images and anatomical models. Moreover, it supported 360° rendering of the RT structures in a multi-slice view, which allows physicians to visually check and edit abnormally contoured structures. Conclusion: This new software integrates the supervised learning framework with image processing and graphical visualization modules for RT contour verification. This tool has great potential for facilitating treatment planning with the assistance of an automatic contour evaluation module in avoiding unnecessary manual verification for physicians/dosimetrists. In addition, its nature as a compact and stand-alone tool allows for future extensibility to include additional functions for physicians' clinical needs.
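
    The in-house supervised recognition method is not spelled out in the abstract; as a simplified stand-in, here is a sketch that fits PCA to a training set of approved contour vectors and flags abnormality by reconstruction residual against a training-derived threshold:

        import numpy as np

        class ContourChecker:
            def __init__(self, n_modes=10):
                self.n_modes = n_modes

            def fit(self, shapes):
                # shapes: (n_contours, n_features), physician-approved training set.
                self.mean = shapes.mean(axis=0)
                _, _, Vt = np.linalg.svd(shapes - self.mean, full_matrices=False)
                self.basis = Vt[:self.n_modes]
                res = self._residual(shapes)
                # Mean + 3 sigma of training residuals: one common threshold choice.
                self.threshold = res.mean() + 3.0 * res.std()

            def _residual(self, shapes):
                X = shapes - self.mean
                recon = (X @ self.basis.T) @ self.basis
                return np.linalg.norm(X - recon, axis=1)

            def is_abnormal(self, shape):
                return self._residual(shape[None, :])[0] > self.threshold

        train = np.random.rand(80, 200)      # stand-in contour feature vectors
        checker = ContourChecker()
        checker.fit(train)
        print(checker.is_abnormal(5.0 * np.random.rand(200)))   # grossly off -> True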

  12. Technical Note: Detective quantum efficiency simulation of a-Se imaging detectors using ARTEMIS.

    PubMed

    Fang, Yuan; Ito, Takaaki; Nariyuki, Fumito; Kuwabara, Takao; Badano, Aldo; Karim, Karim S

    2017-08-01

    This work studies the detective quantum efficiency (DQE) of a-Se-based solid-state x-ray detectors for medical imaging applications using ARTEMIS, a Monte Carlo simulation tool for modeling x-ray photon, electron and charged carrier transport in semiconductors in the presence of an applied electric field. ARTEMIS is used to model the signal formation process in a-Se. The simulation model includes x-ray photon and high-energy electron interactions, and detailed electron-hole pair transport with applied detector bias, taking into account drift, diffusion, Coulomb interactions, recombination and trapping. For experimental validation, the DQE performance of prototype a-Se detectors was measured following IEC Testing Standard 62220-1-3. Comparison of simulated and experimental DQE results shows reasonable agreement for RQA beam qualities. Experimental validation demonstrated a percentage difference within 5% between simulated and experimental DQE results for spatial frequencies above 0.25 cycles/mm, using a uniform applied electric field, for RQA beam qualities (RQA5, RQA7 and RQA9). Results include two different prototype detectors with thicknesses of 240 μm and 1 mm. ARTEMIS can be used to model the DQE of a-Se detectors as a function of x-ray energy, detector thickness, and spatial frequency. The ARTEMIS model can be used to improve understanding of the physics of x-ray interactions in a-Se and in optimization studies for the development of novel medical imaging applications. © 2017 American Association of Physicists in Medicine.
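
    The frequency-dependent DQE is conventionally assembled from measured quantities as DQE(f) = MTF(f)^2 / (q * NNPS(f)), with q the photon fluence and NNPS the noise power spectrum normalized by the squared mean signal. A sketch with placeholder values:

        import numpy as np

        def dqe(mtf, nnps, fluence):
            """DQE(f) = MTF(f)**2 / (q * NNPS(f)); q in photons/mm^2, NNPS in mm^2."""
            return mtf**2 / (fluence * nnps)

        f = np.linspace(0.05, 3.0, 60)        # cycles/mm
        mtf = np.exp(-f / 2.5)                # stand-in measured MTF
        nnps = np.full_like(f, 4.0e-6)        # stand-in flat NNPS (mm^2)
        print(dqe(mtf, nnps, fluence=2.6e5)[:3])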

  13. Photoacoustic simulation study of chirp excitation response from different size absorbers

    NASA Astrophysics Data System (ADS)

    Jnawali, K.; Chinni, B.; Dogra, V.; Rao, N.

    2017-03-01

    Photoacoustic (PA) imaging is a hybrid imaging modality that integrates the strengths of optical and ultrasound imaging. Nanosecond (ns) pulsed lasers used in current PA imaging systems are expensive and bulky, and they often waste energy. We propose and evaluate, through simulations, the use of a continuous wave (CW) laser whose amplitude is linear frequency modulated (chirp) for PA imaging. The chirp signal offers the potential for signal-to-side-lobe ratio (SSR) improvement and full control over the PA signal frequencies excited in the sample. The PA signal spectrum is a function of absorber size and the time frequencies present in the chirp. A mismatch between the input chirp spectrum and the output PA signal spectrum can affect the compressed pulse that is recovered by cross-correlating the two; we have quantitatively characterized this effect. The k-Wave Matlab toolbox was used to simulate PA signals in three dimensions for absorbers ranging in size from 0.1 mm to 0.6 mm, in response to a laser excitation amplitude that is linearly swept from 0.5 MHz to 4 MHz. This sweep frequency range was chosen based on the spectrum analysis of a PA signal generated from ex-vivo human prostate tissue samples. For comparison, the energy wasted by a ns laser pulse was also estimated. For the chirp methodology, the compressed pulse peak amplitude, pulse width and side lobe structure parameters were extracted for different size absorbers. While the SSR increased 6-fold with absorber size, the pulse width decreased by 25%.
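
    A sketch of the chirp excitation and matched-filter (cross-correlation) pulse compression described above, with a delayed, attenuated copy of the chirp standing in for the photoacoustic response (a real absorber reshapes the spectrum, which is the mismatch the paper characterizes):

        import numpy as np
        from scipy.signal import chirp, correlate

        fs = 40e6                                    # sampling rate (Hz)
        t = np.arange(0, 1e-3, 1.0 / fs)             # 1 ms sweep
        tx = chirp(t, f0=0.5e6, t1=t[-1], f1=4e6)    # linear 0.5-4 MHz chirp

        delay = int(20e-6 * fs)                      # 20 us acoustic delay
        rx = np.zeros(len(t) + delay)
        rx[delay:] = 0.2 * tx                        # attenuated echo
        rx += np.random.normal(0, 0.05, rx.size)     # measurement noise

        compressed = correlate(rx, tx, mode="valid") # matched filtering
        peak = np.argmax(np.abs(compressed))
        print("recovered delay (us):", peak / fs * 1e6)   # ~20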

  14. Panoramic cone beam computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang Jenghwa; Zhou Lili; Wang Song

    2012-05-15

    Purpose: Cone-beam computed tomography (CBCT) is the main imaging tool for image-guided radiotherapy, but its functionality is limited by a small imaging volume and a restricted imaging position (peripheral lesions are imaged at the central instead of the treatment position to avoid collisions). In this paper, the authors present the concept of ''panoramic CBCT,'' which can image patients at the treatment position with an imaging volume as large as practically needed. Methods: In this novel panoramic CBCT technique, the target is scanned sequentially from multiple view angles. For each view angle, a half scan (180° + θ_cone, where θ_cone is the cone angle) is performed with the imaging panel positioned in any location along the beam path. The panoramic projection images of all views for the same gantry angle are then stitched together with the direct image stitching method (i.e., according to the reported imaging position), and full-fan, half-scan CBCT reconstruction is performed using the stitched projection images. To validate this imaging technique, the authors simulated cone-beam projection images of the Mathematical Cardiac Torso (MCAT) thorax phantom for three panoramic views. Gaps, repeated/missing columns, and different exposure levels were introduced between adjacent views to simulate imperfect image stitching due to uncertainties in imaging position or output fluctuation. A modified simultaneous algebraic reconstruction technique (modified SART) was developed to reconstruct CBCT images directly from the stitched projection images. As a gold standard, full-fan, full-scan (360° gantry rotation) CBCT reconstructions were also performed using projection images from one imaging panel large enough to encompass the target. Contrast-to-noise ratio (CNR) and geometric distortion were evaluated to quantify the quality of reconstructed images. Monte Carlo simulations were performed to evaluate the effect of scattering on the image quality and imaging dose for both standard and panoramic CBCT. Results: Truncated images with artifacts were observed for the CBCT reconstruction using projection images of the central view only. When the image stitching was perfect, complete reconstruction was obtained for the panoramic CBCT using the modified SART, with image quality similar to the gold standard (full-scan, full-fan CBCT using one large imaging panel). Imperfect image stitching, on the other hand, led to (streak, line, or ring) reconstruction artifacts, reduced CNR, and/or distorted geometry. Results from Monte Carlo simulations showed that, for identical imaging quality, the imaging dose was lower for the panoramic CBCT than that acquired with one large imaging panel. For the same imaging dose, the CNR of the three-view panoramic CBCT was 50% higher than that of the regular CBCT using one large panel. Conclusions: The authors have developed a panoramic CBCT technique and demonstrated with simulation data that it can image tumors at any location, for patients of any size, at the treatment position with comparable or less imaging dose and time. However, the image quality of this CBCT technique is sensitive to the reconstruction artifacts caused by imperfect image stitching. Better algorithms are therefore needed to improve the accuracy of image stitching for panoramic CBCT.
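
    The modifications to SART are specific to this paper; as a generic reference point only, here is a sketch of one standard SART sweep on a toy system matrix:

        import numpy as np

        def sart_sweep(A, b, x, lam=0.5):
            """x += lam * A.T @ ((b - A @ x) / row_sums) / col_sums."""
            row_sums = A.sum(axis=1); row_sums[row_sums == 0] = 1.0
            col_sums = A.sum(axis=0); col_sums[col_sums == 0] = 1.0
            residual = (b - A @ x) / row_sums
            return x + lam * (A.T @ residual) / col_sums

        # Toy system: A maps a 4-pixel "image" to 6 ray sums.
        rng = np.random.default_rng(1)
        A = rng.random((6, 4))
        x_true = np.array([1.0, 0.5, 0.2, 0.8])
        b = A @ x_true
        x = np.zeros(4)
        for _ in range(200):
            x = sart_sweep(A, b, x)
        print(np.round(x, 3))    # approaches x_true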

  15. Design and simulation of EVA tools for first servicing mission of HST

    NASA Technical Reports Server (NTRS)

    Naik, Dipak; Dehoff, P. H.

    1994-01-01

    The Hubble Space Telescope (HST) was launched into near-earth orbit by the Space Shuttle Discovery on April 24, 1990. The payload of two cameras, two spectrographs, and a high-speed photometer is supplemented by three fine-guidance sensors that can be used for astronomy as well as for star tracking. A widely reported spherical aberration in the primary mirror caused HST to produce images of much lower quality than intended. A Space Shuttle repair mission in January 1994 installed small corrective mirrors that restored the full intended optical capability of the HST. The First Servicing Mission (FSM) involved considerable Extra Vehicular Activity (EVA). Special EVA tools for the FSM were designed and developed for this specific purpose. In an earlier report, the details of the Data Acquisition System developed to test the performance of the various EVA tools in ambient as well as simulated space environments were presented. The general schematic of the test setup is reproduced in this report for continuity. Although the data acquisition system was used extensively to test a number of fasteners, only the results of one test each, carried out on the various fasteners and the Power Ratchet Tool, are included in this report.

  16. Characteristic study of flat spray nozzle by using particle image velocimetry (PIV) and ANSYS simulation method

    NASA Astrophysics Data System (ADS)

    Pairan, M. Rasidi; Asmuin, Norzelawati; Isa, Nurasikin Mat; Sies, Farid

    2017-04-01

    Water mist sprays are used in a wide range of applications; however, the spray characteristics must suit the particular application. This project studies the water droplet velocity and penetration angle generated by a newly developed mist spray nozzle with a flat spray pattern. The research was conducted in two parts: experiment and simulation. The experiment was carried out using the particle image velocimetry (PIV) method, ANSYS software was used as the tool for the simulation section, and ImageJ software was used to measure the penetration angle. Three combinations of air and water pressure were tested: 1 bar (case A), 2 bar (case B) and 3 bar (case C). The flat spray generated by the newly developed nozzle was examined along a 9 cm vertical line located 8 cm from the nozzle orifice. The detailed analysis shows that the trend of the velocity-versus-distance graph gives good agreement between simulation and experiment for all pressure combinations. As the water and air pressure increased from 1 bar to 2 bar, the velocity and penetration angle also increased; however, for case C, run at 3 bar, the water droplet velocity increased but the penetration angle decreased. All data were then validated by calculating the error between experiment and simulation. Comparing the simulation data to the experimental data for all cases, the standard deviations for case A, case B and case C are relatively small: 5.444, 0.8242 and 6.4023, respectively.

  17. A novel breast software phantom for biomechanical modeling of elastography.

    PubMed

    Bhatti, Syeda Naema; Sridhar-Keralapura, Mallika

    2012-04-01

    In developing breast imaging technologies, testing is done with phantoms. Physical phantoms are normally used, but their size, shape, composition, and detail cannot be modified readily. These difficulties can be avoided by creating a software breast phantom. Researchers have created software breast phantoms using geometric and/or mathematical methods for applications like image fusion. The authors report a 3D software breast phantom that was built using a mechanical design tool to investigate the biomechanics of elastography using finite element modeling (FEM). The authors propose this phantom as an intermediate assessment tool for elastography simulation, for use after testing with commonly used phantoms and before clinical testing. The authors designed the phantom to be flexible in both breast geometry and biomechanical parameters, to make it a useful tool for elastography simulation. The authors developed the 3D software phantom using a mechanical design tool based on illustrations of normal breast anatomy; the software phantom does not use geometric primitives or imaging data. The authors discuss how to create this phantom and how to modify it. The authors demonstrate a typical elastography experiment of applying a static stress to the top surface of the breast just above a simulated tumor and calculate normal strains in 3D, and in 2D with plane strain approximations, using linear solvers. In particular, they investigate contrast transfer efficiency (CTE) by designing a parametric study based on the location, shape, and stiffness of simulated tumors. The authors also compare their findings to a commonly used elastography phantom. The 3D breast software phantom is flexible in the shape, size, and location of tumors, the glandular-to-fatty content, and the ductal structure. Residual modulus maps and profiles served as a guide to optimize meshing of this geometrically nonlinear phantom for biomechanical modeling of elastography. At best, low residues (around 1-5 kPa) were found within the phantom, while errors were elevated (around 10-30 kPa) at tumor and lobule boundaries. From the FEM analysis, the breast phantom generated a superior CTE in both 2D and 3D over the block phantom. It also showed differences in CTE values and strain contrast for deep and shallow tumors, and showed a significant change in CTE when 3D modeling was used. These changes were not significant in the block phantom. Both phantoms, however, showed worsened CTE values for increased input tumor-background modulus contrast. Block phantoms serve as a starting tool, but a next-level phantom, like the proposed breast phantom, will serve as a valuable intermediate for elastography simulation before clinical testing. Further, given that the CTE metrics for the breast phantom are superior to those of the block phantom and vary with tumor shape, location, and stiffness, these phantoms will enhance the study of elastography contrast. Moreover, the use of 2D phantoms with plane strain approximations overestimates the CTE value when compared to the true CTE achieved with 3D models. Thus, the use of 3D phantoms, like the breast phantom, with no approximations will assist in more accurate estimation of modulus, which is especially valuable for 3D elastography systems.

  18. Block matching and Wiener filtering approach to optical turbulence mitigation and its application to simulated and real imagery with quantitative error analysis

    NASA Astrophysics Data System (ADS)

    Hardie, Russell C.; Rucci, Michael A.; Dapore, Alexander J.; Karch, Barry K.

    2017-07-01

    We present a block-matching and Wiener filtering approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. We evaluate the proposed method, along with some benchmark methods, using simulated and real-image sequences. The simulated data are generated with a simulation tool developed by one of the authors. These data provide objective truth and allow for quantitative error analysis. The proposed turbulence mitigation method takes a sequence of short-exposure frames of a static scene and outputs a single restored image. A block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged, and the average image is processed with a Wiener filter to provide deconvolution. An important aspect of the proposed method lies in how we model the degradation point spread function (PSF) for the purposes of Wiener filtering. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the estimated motion vectors. We provide a detailed performance analysis that illustrates how the key tuning parameters impact system performance. The proposed method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study.
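
    A sketch of the frequency-domain Wiener deconvolution step applied to the registered average frame; the Gaussian PSF below is only a stand-in for the paper's registration-matched parametric PSF model:

        import numpy as np

        def wiener_deconvolve(image, psf, nsr=1e-2):
            """Wiener filter W = H* / (|H|^2 + NSR) applied in the Fourier domain."""
            H = np.fft.fft2(np.fft.ifftshift(psf))
            W = np.conj(H) / (np.abs(H)**2 + nsr)
            return np.real(np.fft.ifft2(np.fft.fft2(image) * W))

        def gaussian_psf(shape, sigma):
            y, x = np.indices(shape) - np.array(shape)[:, None, None] // 2
            psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
            return psf / psf.sum()

        avg = np.random.rand(256, 256)   # stand-in for the registered average frame
        psf = gaussian_psf(avg.shape, sigma=2.0)
        restored = wiener_deconvolve(avg, psf)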

  19. STEM_CELL: a software tool for electron microscopy: part 2--analysis of crystalline materials.

    PubMed

    Grillo, Vincenzo; Rossi, Francesca

    2013-02-01

    A new graphical software package (STEM_CELL) for the analysis of HRTEM and STEM-HAADF images is introduced here in detail. The advantage of the software, beyond its graphical interface, is that it brings together different analysis algorithms and simulation (described in an associated article) to produce novel analysis methodologies. Different implementations of, and improvements to, state-of-the-art approaches are reported for image analysis, filtering, normalization, and background subtraction. In particular, two important methodological results are highlighted here: (i) the definition of a procedure for atomic-scale quantitative analysis of HAADF images, and (ii) the extension of geometric phase analysis to large regions, up to potentially 1 μm, through the use of undersampled images with aliasing effects. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. Visualization of terahertz surface waves propagation on metal foils

    PubMed Central

    Wang, Xinke; Wang, Sen; Sun, Wenfeng; Feng, Shengfei; Han, Peng; Yan, Haitao; Ye, Jiasheng; Zhang, Yan

    2016-01-01

    Exploitation of surface plasmonic devices (SPDs) in the terahertz (THz) band is beneficial for broadening the application potential of THz technologies. To clarify the features of SPDs, a practical characterization means is essential for accurately observing the complex field distribution of a THz surface wave (TSW). Here, a THz digital holographic imaging system is employed to coherently exhibit the temporal variations and spectral properties of TSWs activated by a rectangular or semicircular slit structure on metal foils. The advantages of the imaging system are comprehensively elucidated, including the exclusive measurement of TSWs and the reduced time consumption. Numerical simulations of the experimental procedures further verify the imaging measurement accuracy. It can be anticipated that this imaging system will provide a versatile tool for analyzing the performance and principles of SPDs. PMID:26729652

  1. Simultaneous stimulated Raman scattering and higher harmonic generation imaging for liver disease diagnosis without labeling

    NASA Astrophysics Data System (ADS)

    Lin, Jian; Wang, Zi; Zheng, Wei; Huang, Zhiwei

    2014-02-01

    Nonlinear optical microscopy (e.g., higher harmonic (second-/third-harmonic) generation (HHG) and stimulated Raman scattering (SRS)) has high diagnostic sensitivity and chemical specificity, making it a promising tool for label-free tissue and cell imaging. In this work, we report the development of a simultaneous SRS and HHG imaging technique for the characterization of liver disease in a bile-duct-ligation rat model. HHG visualizes collagen formation and reveals the cell morphologic changes associated with liver fibrosis, whereas SRS identifies the distributions of hepatic fat cells formed in steatotic liver tissue. This work shows that the co-registration of SRS and HHG images can be an effective means for label-free diagnosis and characterization of liver steatosis/fibrosis at the cellular and molecular levels.

  2. Hires and beyond

    NASA Technical Reports Server (NTRS)

    Fowler, John W.; Aumann, H. H.

    1994-01-01

    The High-Resolution image construction program (HiRes) used at IPAC is based on the Maximum Correlation Method. After HiRes intensity images are constructed from IRAS data, additional images are needed to aid in scientific interpretation. Some of the images that are available for this purpose show the fitting noise, estimates of the achieved resolution, and detector track maps. Two methods have been developed for creating color maps without discarding any more spatial information than absolutely necessary: the 'cross-band simulation' and 'prior-knowledge' methods. These maps are demonstrated using the survey observations of a 2 x 2 degree field centered on M31. Prior knowledge may also be used to achieve super-resolution and to suppress ringing around bright point sources observed against background emission. Tools to suppress noise spikes and for accelerating convergence are also described.

  3. Application of dynamic Monte Carlo technique in proton beam radiotherapy using Geant4 simulation toolkit

    NASA Astrophysics Data System (ADS)

    Guan, Fada

    The Monte Carlo method has been successfully applied to simulating particle transport problems. Most Monte Carlo simulation tools are static: they can only be used to perform static simulations for problems with fixed physics and geometry settings. Proton therapy, however, is a dynamic treatment technique in clinical application. In this research, we developed a method to perform dynamic Monte Carlo simulation of proton therapy using the Geant4 simulation toolkit. A passive-scattering treatment nozzle equipped with a rotating range modulation wheel was modeled in this research. One important application of the Monte Carlo simulation is to predict the spatial dose distribution in the target geometry. For simplification, a mathematical model of a human body is usually used as the target, but only the average dose over the whole organ or tissue can be obtained, rather than an accurate spatial dose distribution. In this research, we developed a method using MATLAB to convert the medical images of a patient from CT scanning into a patient voxel geometry. Hence, if the patient voxel geometry is used as the target in the Monte Carlo simulation, the accurate spatial dose distribution in the target can be obtained. A data analysis tool, ROOT, was used to score the simulation results during a Geant4 simulation and to analyze the data and plot results after the simulation. Finally, we successfully obtained the accurate spatial dose distribution in part of a human body after treating a patient with prostate cancer using proton therapy.
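
    The CT-to-voxel-geometry conversion typically maps Hounsfield units to mass density through a piecewise-linear calibration curve before assigning materials. A sketch with hypothetical calibration points (clinical curves are scanner-specific):

        import numpy as np

        # Hypothetical piecewise-linear HU -> density (g/cm^3) calibration points.
        hu_pts = np.array([-1000.0, -100.0, 0.0, 100.0, 1500.0])
        rho_pts = np.array([0.001, 0.93, 1.0, 1.07, 1.85])

        def hu_to_density(hu_volume):
            """Map a CT volume (HU) to mass density by linear interpolation."""
            return np.interp(hu_volume, hu_pts, rho_pts)

        ct = np.random.randint(-1000, 1500, size=(64, 64, 32)).astype(float)
        rho = hu_to_density(ct)   # per-voxel densities for the Monte Carlo geometry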

  4. Full 3-D OCT-based pseudophakic custom computer eye model

    PubMed Central

    Sun, M.; Pérez-Merino, P.; Martinez-Enriquez, E.; Velasco-Ocana, M.; Marcos, S.

    2016-01-01

    We compared measured wave aberrations in pseudophakic eyes implanted with aspheric intraocular lenses (IOLs) with simulated aberrations from numerical ray tracing on customized computer eye models, built using quantitative 3-D OCT-based patient-specific ocular geometry. Experimental and simulated aberrations show high correlation (R = 0.93; p<0.0001) and similarity (RMS discrepancies for high-order aberrations within 23.58%). This study shows that full OCT-based pseudophakic custom computer eye models allow an understanding of the relative contributions of optical, geometrical and surgically-related factors to image quality, and are an excellent tool for characterizing and improving cataract surgery. PMID:27231608

  5. Conversion of NIMROD simulation results for graphical analysis using VisIt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero-Talamas, C A

    Software routines developed to prepare NIMROD [C. R. Sovinec et al., J. Comp. Phys. 195, 355 (2004)] results for three-dimensional visualization from simulations of the Sustained Spheromak Physics Experiment (SSPX) [E. B. Hooper et al., Nucl. Fusion 39, 863 (1999)] are presented here. The visualization is done by first converting the NIMROD output to a format known as legacy VTK and then loading it into VisIt, a graphical analysis tool that includes three-dimensional rendering and various mathematical operations for large data sets. Sample images obtained from the processing of NIMROD data with VisIt are included.
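
    As a sketch of the conversion step, writing a scalar field on a uniform grid as an ASCII legacy-VTK STRUCTURED_POINTS file that VisIt can load (the field name "pressure" is arbitrary):

        import numpy as np

        def write_vtk_structured_points(path, field, spacing=(1.0, 1.0, 1.0)):
            """Write a 3D scalar array as an ASCII legacy-VTK file."""
            nx, ny, nz = field.shape
            with open(path, "w") as f:
                f.write("# vtk DataFile Version 2.0\n")
                f.write("converted simulation field\n")
                f.write("ASCII\nDATASET STRUCTURED_POINTS\n")
                f.write(f"DIMENSIONS {nx} {ny} {nz}\n")
                f.write("ORIGIN 0 0 0\n")
                f.write(f"SPACING {spacing[0]} {spacing[1]} {spacing[2]}\n")
                f.write(f"POINT_DATA {nx * ny * nz}\n")
                f.write("SCALARS pressure float 1\nLOOKUP_TABLE default\n")
                # Legacy VTK expects x varying fastest: write in Fortran order.
                for v in field.flatten(order="F"):
                    f.write(f"{v:.6e}\n")

        write_vtk_structured_points("field.vtk", np.random.rand(8, 8, 8))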

  6. Carbon contamination topography analysis of EUV masks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Y.-J.; Yankulin, L.; Thomas, P.

    2010-03-12

    The impact of carbon contamination on extreme ultraviolet (EUV) masks is significant due to throughput loss and potential effects on imaging performance. Current carbon contamination research primarily focuses on the lifetime of the multilayer surfaces, determined by reflectivity loss and reduced throughput in EUV exposure tools. However, contamination on patterned EUV masks can cause additional effects on absorbing features and the printed images, as well as impacting the efficiency of cleaning process. In this work, several different techniques were used to determine possible contamination topography. Lithographic simulations were also performed and the results compared with the experimental data.

  7. A Novel Imaging Technique (X-Map) to Identify Acute Ischemic Lesions Using Noncontrast Dual-Energy Computed Tomography.

    PubMed

    Noguchi, Kyo; Itoh, Toshihide; Naruto, Norihito; Takashima, Shutaro; Tanaka, Kortaro; Kuroda, Satoshi

    2017-01-01

    We evaluated whether X-map, a novel imaging technique, can visualize ischemic lesions within 20 hours after onset in patients with acute ischemic stroke, using noncontrast dual-energy computed tomography (DECT). Six patients with acute ischemic stroke were included in this study. Noncontrast head DECT scans were acquired with 2 X-ray tubes operated at 80 kV and Sn150 kV between 32 minutes and 20 hours after onset. Using these DECT scans, the X-map was reconstructed based on 3-material decomposition and compared with a simulated standard (120 kV) computed tomography (CT) scan and diffusion-weighted imaging (DWI). The X-map was more sensitive than a simulated standard CT in identifying the lesions as areas of lower attenuation value in all 6 patients. The lesions on the X-map correlated well with those on DWI. In 3 of 6 patients, the X-map detected a transient decrease in the attenuation value in the peri-infarct area within 1 day after onset. The X-map is a powerful tool to supplement a simulated standard CT and characterize acute ischemic lesions. However, the X-map cannot replace a simulated standard CT in diagnosing acute cerebral infarction. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Images in Transition. Proceedings of the Annual Society for the Advancement of Gifted Education (SAGE) Conference (3rd, Calgary, Alberta, Canada, September 24-26, 1992) and the Canadian Symposium on Gifted Education (6th).

    ERIC Educational Resources Information Center

    Calgary Univ. (Alberta). Centre for Gifted Education.

    This document presents the conference proceedings of the primary stakeholders in gifted education in Alberta (Canada): "Activities in Math for the Gifted Student" (Ballheim); "The Self Awareness Growth Experiences Approach" (Balogun); "Computer Simulations: An Integrating Tool" (Bilan); "The Portrayal of Gifted…

  9. Volumetric neuroimage analysis extensions for the MIPAV software package.

    PubMed

    Bazin, Pierre-Louis; Cuzzocreo, Jennifer L; Yassa, Michael A; Gandler, William; McAuliffe, Matthew J; Bassett, Susan S; Pham, Dzung L

    2007-09-15

    We describe a new collection of publicly available software tools for performing quantitative neuroimage analysis. The tools perform semi-automatic brain extraction, tissue classification, Talairach alignment, and atlas-based measurements within a user-friendly graphical environment. They are implemented as plug-ins for MIPAV, a freely available medical image processing software package from the National Institutes of Health. Because the plug-ins and MIPAV are implemented in Java, both can be utilized on nearly any operating system platform. In addition to the software plug-ins, we have also released a digital version of the Talairach atlas that can be used to perform regional volumetric analyses. Several studies are conducted applying the new tools to simulated and real neuroimaging data sets.

  10. The Abundance of Large Arcs From CLASH

    NASA Astrophysics Data System (ADS)

    Xu, Bingxiao; Postman, Marc; Meneghetti, Massimo; Coe, Dan A.; Clash Team

    2015-01-01

    We have developed an automated arc-finding algorithm to perform a rigorous comparison of the observed and simulated abundance of large lensed background galaxies (a.k.a. arcs). We use images from the CLASH program to derive our observed arc abundance. Simulated CLASH images are created by performing ray tracing through mock clusters generated by the N-body-simulation-calibrated tool MOKA and the N-body/hydrodynamic simulations MUSIC, over the same mass and redshift range as the CLASH X-ray selected sample. We derive a lensing efficiency of 15 ± 3 arcs per cluster for the X-ray selected CLASH sample and 4 ± 2 arcs per cluster for the simulated sample. The marginally significant difference (3.0 σ) between the results for the observations and the simulations can be explained by the systematically smaller area with magnification larger than 3 (by a factor of ~4) in both the MOKA and MUSIC mass models relative to those derived from the CLASH data. Accounting for this difference brings the observed and simulated arc statistics into full agreement. We find that the source redshift distribution does not have a big impact on the arc abundance, but the arc abundance is very sensitive to the concentration of the dark matter halos. Our results suggest that the solution to the "arc statistics problem" lies primarily in matching the cluster dark matter distribution.

  11. SENSOR: a tool for the simulation of hyperspectral remote sensing systems

    NASA Astrophysics Data System (ADS)

    Börner, Anko; Wiest, Lorenz; Keller, Peter; Reulke, Ralf; Richter, Rolf; Schaepman, Michael; Schläpfer, Daniel

    The consistent end-to-end simulation of airborne and spaceborne earth remote sensing systems is an important task, and sometimes the only way for the adaptation and optimisation of a sensor and its observation conditions, the choice and test of algorithms for data processing, error estimation and the evaluation of the capabilities of the whole sensor system. The presented software simulator SENSOR (Software Environment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray-tracing algorithm. The second part of the simulation environment considers the radiometry. It calculates the at-sensor radiance using a pre-calculated multidimensional lookup-table taking the atmospheric influence on the radiation into account. The third part consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimisation requires the additional application of task-specific data processing algorithms. The principle of the end-to-end-simulation approach is explained, all relevant concepts of SENSOR are discussed, and first examples of its use are given. The verification of SENSOR is demonstrated. This work is closely related to the Airborne PRISM Experiment (APEX), an airborne imaging spectrometer funded by the European Space Agency.

  12. A novel scatter separation method for multi-energy x-ray imaging

    NASA Astrophysics Data System (ADS)

    Sossin, A.; Rebuffel, V.; Tabary, J.; Létang, J. M.; Freud, N.; Verger, L.

    2016-06-01

    X-ray imaging coupled with recently emerged energy-resolved photon counting detectors provides the ability to differentiate material components and to estimate their respective thicknesses. However, such techniques require highly accurate images. The presence of scattered radiation leads to a loss of spatial contrast and, more importantly, a bias in radiographic material imaging and artefacts in computed tomography (CT). The aim of the present study was to introduce and evaluate a partial attenuation spectral scatter separation approach (PASSSA) adapted for multi-energy imaging. This evaluation was carried out with the aid of numerical simulations provided by an internal simulation tool, Sindbad-SFFD. A simplified numerical thorax phantom placed in a CT geometry was used. The attenuation images and CT slices obtained from corrected data showed a remarkable increase in local contrast and internal structure detectability when compared to uncorrected images. Scatter-induced bias was also substantially decreased. In terms of quantitative performance, the developed approach proved to be quite accurate as well. The average normalized root-mean-square error between the uncorrected projections and the reference primary projections was around 23%. The application of PASSSA reduced this error to around 5%. Finally, in terms of voxel value accuracy, an increase by a factor >10 was observed for most inspected volumes-of-interest when comparing the corrected and uncorrected total volumes.

  13. The effect of defect cluster size and interpolation on radiographic image quality

    NASA Astrophysics Data System (ADS)

    Töpfer, Karin; Yip, Kwok L.

    2011-03-01

    For digital X-ray detectors, the need to control factory yield and cost invariably leads to the presence of some defective pixels. Recently, a standard procedure was developed to identify such pixels for industrial applications. However, no quality standards exist in medical or industrial imaging regarding the maximum allowable number and size of detector defects. While the answer may be application specific, the minimum requirement for any defect specification is that the diagnostic quality of the images be maintained. A more stringent criterion is to keep any changes in the images due to defects below the visual threshold. Two highly sensitive image simulation and evaluation methods were employed to specify the fraction of allowable defects as a function of defect cluster size in general radiography. First, the most critical situation of the defect being located in the center of the disease feature was explored using image simulation tools and a previously verified human observer model, incorporating a channelized Hotelling observer. Detectability index d' was obtained as a function of defect cluster size for three different disease features on clinical lung and extremity backgrounds. Second, four concentrations of defects of four different sizes were added to clinical images with subtle disease features and then interpolated. Twenty observers evaluated the images against the original on a single display using a 2-AFC method, which was highly sensitive to small changes in image detail. Based on a 50% just-noticeable difference, the fraction of allowed defects was specified vs. cluster size.
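
    A channelized Hotelling observer of the kind used in the first experiment reduces each image to a handful of channel outputs and computes the detectability index d' from their first- and second-order statistics. A bare-bones sketch of that computation; the channel templates (e.g., Gabor or Laguerre-Gauss channels) are left to the caller, and the toy data are invented:

      import numpy as np

      def cho_detectability(signal_imgs, noise_imgs, channels):
          """Detectability index d' of a channelized Hotelling observer.

          signal_imgs, noise_imgs: (n_images, n_pixels) flattened ensembles
          with and without the disease feature.
          channels: (n_pixels, n_channels) channel templates."""
          v_s = signal_imgs @ channels        # channel outputs, feature present
          v_n = noise_imgs @ channels         # channel outputs, feature absent
          dv = v_s.mean(axis=0) - v_n.mean(axis=0)
          S = 0.5 * (np.cov(v_s, rowvar=False) + np.cov(v_n, rowvar=False))
          return float(np.sqrt(dv @ np.linalg.solve(S, dv)))

      # Toy usage: random backgrounds plus a faint uniform 'feature'
      rng = np.random.default_rng(0)
      n, npix = 200, 32 * 32
      channels = rng.normal(size=(npix, 10))   # stand-in channel set
      noise = rng.normal(size=(n, npix))
      signal = rng.normal(size=(n, npix)) + 0.05
      print(f"d' = {cho_detectability(signal, noise, channels):.2f}")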

  14. Development of an electromagnetic imaging system for well bore integrity inspection

    NASA Astrophysics Data System (ADS)

    Plotnikov, Yuri; Wheeler, Frederick W.; Mandal, Sudeep; Climent, Helene C.; Kasten, A. Matthias; Ross, William

    2017-02-01

    State-of-the-art imaging technologies for monitoring the integrity of oil and gas well bores are typically limited to the inspection of metal casings and cement bond interfaces close to the first casing region. The objective of this study is to develop and evaluate a novel well-integrity inspection system that is capable of providing enhanced information about the flaw structure and topology of hydrocarbon producing well bores. In order to achieve this, we propose the development of a multi-element electromagnetic (EM) inspection tool that can provide information about material loss in the first and second casing structure as well as information about eccentricity between multiple casing strings. Furthermore, the information gathered from the EM inspection tool will be combined with other imaging modalities (e.g. data from an x-ray backscatter imaging device). The independently acquired data are then fused to achieve a comprehensive assessment of integrity with greater accuracy. A test rig composed of several concentric metal casings with various defect structures was assembled and imaged. Initial test results were obtained with a scanning system design that includes a single transmitting coil and several receiving coils mounted on a single rod. A mechanical linear translation stage was used to move the EM sensors in the axial direction during data acquisition. For simplicity, a single receiving coil and repetitive scans were employed to simulate performance of the designed receiving sensor array system. The resulting electromagnetic images enable the detection of the metal defects in the steel pipes. Responses from several sensors were used to assess the location and amount of material loss in the first and second metal pipe as well as the relative eccentric position between these two pipes. The results from EM measurements and x-ray backscatter simulations demonstrate that data fusion from several sensing modalities can provide an enhanced assessment of flaw structures in producing well bores and potentially allow for early detection of anomalies that if undetected might lead to catastrophic failures.

  15. The transesophageal echocardiography simulator based on computed tomography images.

    PubMed

    Piórkowski, Adam; Kempny, Aleksander

    2013-02-01

    Simulators are a new tool in education in many fields, including medicine, where they greatly improve familiarity with medical procedures, reduce costs, and, importantly, cause no harm to patients. This is the case for transesophageal echocardiography (TEE), in which the use of a simulator facilitates spatial orientation and helps in case studies. The aim of the project described in this paper is to simulate a TEE examination. This research makes use of available computed tomography data to simulate the corresponding echocardiographic view. This paper describes the essential characteristics that distinguish these two modalities and the key principles of the wave phenomena that should be considered in the simulation process, taking into account the conditions specific to echocardiography. The construction of the CT2TEE (Web-based TEE simulator) is also presented. The considerations include ray-tracing and ray-casting techniques in the context of ultrasound-beam and artifact simulation. Important aspects of the interaction with the user are also addressed.

  16. π Scope: python based scientific workbench with visualization tool for MDSplus data

    NASA Astrophysics Data System (ADS)

    Shiraiwa, S.

    2014-10-01

    πScope is a Python-based scientific data analysis and visualization tool constructed on wxPython and Matplotlib. Although it is designed to be a generic tool, the primary motivations for developing the new software are 1) to provide an updated tool to browse MDSplus data, with functionalities beyond dwscope and jScope, and 2) to provide a universal foundation for building interface tools to perform computer simulation and modeling for Alcator C-Mod. It provides many features for visualizing MDSplus data during tokamak experiments, including overplotting different signals and discharges, various plot types (line, contour, image, etc.), in-panel data analysis using Python scripts, and publication-quality graphics generation. Additionally, the logic to produce multi-panel plots is designed to be backward compatible with dwscope, enabling smooth migration for dwscope users. πScope uses multi-threading to reduce data transfer latency, and its object-oriented design makes it easy to modify and expand, while its open-source nature allows portability. A built-in tree data browser allows a user to approach the data structure both from a GUI and from a script, enabling relatively complex data analysis workflows to be built quickly. As an example, an IDL-based interface to perform GENRAY/CQL3D simulations was ported to πScope, allowing LHCD simulations to be run between shots using C-Mod experimental profiles. This workflow is being used to generate a large database to develop a LHCD actuator model for the plasma control system. Supported by USDoE Award DE-FC02-99ER54512.

  17. The effect of a simulation training package on skill acquisition for duplex arterial stenosis detection.

    PubMed

    Jaffer, Usman; Normahani, Pasha; Singh, Prashant; Aslam, Mohammed; Standfield, Nigel J

    2015-01-01

    In vascular surgery, duplex ultrasonography is a valuable diagnostic tool in patients with peripheral vascular disease, and there is increasing demand for vascular surgeons to be able to perform duplex scanning. This study evaluates the role of a novel simulation training package in vascular ultrasound (US) skill acquisition. A total of 19 novices measured predefined stenoses in a simulated pulsatile vessel using both peak systolic velocity ratio (PSVR) and diameter reduction (DR) methods before and after a short period of training with the simulation training package. The training package consisted of a simulated pulsatile vessel phantom, a set of instructional videos, a duplex ultrasound objective structured assessment of technical skills (DUOSATS) tool, and a portable US scanner. Quantitative metrics (procedure time, percentage error using the PSVR and DR methods, DUOSATS scores, and global rating scores) before and after training were compared. Subjects spent a median time of 144 min (IQR: 60-195) training with the simulation package. Subjects exhibited statistically significant improvements when comparing pretraining and posttraining DUOSATS scores (pretraining = 17 [16-19.3] vs posttraining = 30 [27.8-31.8]; p < 0.01), global rating scores (pretraining = 1 [1-2] vs posttraining = 4 [3.8-4]; p < 0.01), and percentage error using both the DR (pretraining = 12.6% [9-29.6] vs posttraining = 10.3% [8.9-11.1]; p = 0.03) and PSVR (pretraining = 60% [40-60] vs posttraining = 20% [6.7-20]; p < 0.01) methods. In this study, subjects with no previous practical US experience developed the ability to both acquire and interpret arterial duplex images in a pulsatile simulated phantom following a short period of goal-directed training using a simulation training package. A simulation training package may be a valuable tool for integration into a vascular training program. However, further work is needed to explore whether these newly attained skills translate into clinical assessment. Crown Copyright © 2014. Published by Elsevier Inc. All rights reserved.

  18. Generating classes of 3D virtual mandibles for AR-based medical simulation.

    PubMed

    Hippalgaonkar, Neha R; Sider, Alexa D; Hamza-Lup, Felix G; Santhanam, Anand P; Jaganathan, Bala; Imielinska, Celina; Rolland, Jannick P

    2008-01-01

    Simulation and modeling represent promising tools for several application domains, from engineering to forensic science and medicine. Advances in 3D imaging technology bring paradigms such as augmented reality (AR) and mixed reality into promising simulation tools for the training industry. Motivated by the requirement to superimpose anatomically correct 3D models on a human patient simulator (HPS) and visualize them in an AR environment, the purpose of this research effort was to develop and validate a method for scaling a source human mandible to a target human mandible within a 2 mm root mean square (RMS) error. Results show that, given the distance between the same two landmarks on two different mandibles, a relative scaling factor may be computed. Using this scaling factor, results show that a 3D virtual mandible model can be made morphometrically equivalent to a real target-specific mandible within a 1.30 mm RMS error. The virtual mandible may be further used as a reference target for registering other anatomic models, such as the lungs, on the HPS. Such registration will be made possible by the physical constraints between the mandible and the spinal column in the horizontal normal rest position.
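
    The scaling step described above reduces to a ratio of landmark distances followed by an RMS check. A small sketch with invented landmark coordinates (the paper's actual landmarks and measurements are not reproduced here):

      import numpy as np

      # Hypothetical 3D landmark sets (n_landmarks x 3) on a source and a
      # target mandible; in practice these come from the segmented models
      source = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0], [25.0, 30.0, 5.0]])
      target = np.array([[0.0, 0.0, 0.0], [55.0, 0.0, 0.0], [27.5, 33.0, 5.5]])

      # Relative scaling factor from the distance between one landmark pair
      # (here: landmarks 0 and 1), as described in the abstract
      d_src = np.linalg.norm(source[1] - source[0])
      d_tgt = np.linalg.norm(target[1] - target[0])
      scale = d_tgt / d_src
      scaled = source * scale

      # RMS error between the scaled source and the target landmarks
      rms = np.sqrt(np.mean(np.sum((scaled - target) ** 2, axis=1)))
      print(f"scale = {scale:.3f}, RMS error = {rms:.2f} mm")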

  19. An efficient 3-D eddy-current solver using an independent impedance method for transcranial magnetic stimulation.

    PubMed

    De Geeter, Nele; Crevecoeur, Guillaume; Dupre, Luc

    2011-02-01

    In many important bioelectromagnetic problem settings, eddy-current simulations are required. Examples are the reduction of eddy-current artifacts in magnetic resonance imaging, and techniques whereby the eddy currents interact with the biological system, such as the alteration of neurophysiology due to transcranial magnetic stimulation (TMS). TMS has become an important tool for the diagnosis and treatment of neurological diseases and psychiatric disorders. A widely applied method for simulating the eddy currents is the impedance method (IM). However, this method has to contend with an ill-conditioned problem and consequently a long convergence time. When dealing with optimal design problems and sensitivity control, the convergence rate becomes even more crucial, since the eddy-current solver needs to be evaluated in an iterative loop. Therefore, we introduce an independent IM (IIM), which improves the conditioning and speeds up the numerical convergence. This paper shows how IIM is based on IM and what its advantages are. Moreover, the method is applied to the efficient simulation of TMS. The proposed IIM achieves superior convergence properties with high time efficiency compared to the traditional IM and is therefore a useful tool for accurate and fast TMS simulations.

  20. Correlation Characterization of Particles in Volume Based on Peak-to-Basement Ratio

    PubMed Central

    Vovk, Tatiana A.; Petrov, Nikolay V.

    2017-01-01

    We propose a new express method for the correlation characterization of particles suspended in the volume of an optically transparent medium. It utilizes the inline digital holography technique to obtain two images of adjacent layers from the investigated volume, with subsequent matching of the peak-to-basement ratio of the cross-correlation function calculated for these images. After preliminary calibration via numerical simulation, the proposed method allows one to quickly determine parameters of the particle distribution and evaluate the particle concentration. The experimental verification was carried out for two types of physical suspensions. Our method can be applied in environmental and biological research, including analysis tools in flow cytometry devices, express characterization of particles and biological cells in air and water media, and various technical tasks, e.g., the study of scattering objects or rapid determination of cutting tool conditions in mechanisms. PMID:28252020
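
    The core metric is the ratio between the cross-correlation peak and its background level. A compact sketch of one plausible implementation; the abstract does not define the "basement" precisely, so the mean absolute correlation is used here as a stand-in, and the layer images are placeholders:

      import numpy as np
      from scipy.signal import fftconvolve

      def peak_to_basement(img_a, img_b):
          """Peak-to-basement ratio of the cross-correlation of two layers."""
          a = img_a - img_a.mean()
          b = img_b - img_b.mean()
          xcorr = fftconvolve(a, b[::-1, ::-1], mode="same")  # cross-correlation
          return xcorr.max() / np.abs(xcorr).mean()           # 'basement' = mean level

      # Placeholder layer images, standing in for holographic reconstructions
      rng = np.random.default_rng(0)
      layer1 = rng.random((128, 128))
      layer2 = np.roll(layer1, (2, 3), axis=(0, 1)) + 0.1 * rng.random((128, 128))
      print(f"peak-to-basement ratio: {peak_to_basement(layer1, layer2):.1f}")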

  1. Open-source framework for documentation of scientific software written on MATLAB-compatible programming languages

    NASA Astrophysics Data System (ADS)

    Konnik, Mikhail V.; Welsh, James

    2012-09-01

    Numerical simulators for adaptive optics systems have become an essential tool for the research and development of future advanced astronomical instruments. However, the growing software code of a numerical simulator makes it difficult to continue to support the code itself. The problem of adequate documentation of astronomical software for adaptive optics simulators can complicate development, since the documentation must contain up-to-date schemes and the mathematical descriptions implemented in the software code. Although most modern programming environments like MATLAB or Octave have built-in documentation abilities, these are often insufficient for the description of a typical adaptive optics simulator code. This paper describes a general cross-platform framework for the documentation of scientific software using open-source tools such as LaTeX, Mercurial, Doxygen, and Perl. Using a Perl script that translates the comments of MATLAB M-files into C-like comments, one can use Doxygen to generate and update the documentation for the scientific source code. The documentation generated by this framework contains the current code description with mathematical formulas, images, and bibliographical references. A detailed description of the framework components is presented, as well as guidelines for the framework deployment. Examples of the code documentation for the scripts and functions of a MATLAB-based adaptive optics simulator are provided.
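
    The comment-translation idea is essentially a Doxygen input filter: a script that rewrites each M-file on the fly so Doxygen sees C-style comments. The paper uses Perl; the sketch below shows the same idea in Python, handling only comment lines (a real filter would also rewrite function signatures), hooked into Doxygen via its FILTER_PATTERNS configuration option:

      #!/usr/bin/env python3
      """Minimal stand-in (Python, not the paper's Perl) for a Doxygen input
      filter that turns MATLAB '%' comments into C++-style '//!' comments."""
      import sys

      def filter_mfile(path):
          with open(path) as fh:
              for line in fh:
                  stripped = line.lstrip()
                  if stripped.startswith("%"):
                      indent = line[: len(line) - len(stripped)]
                      sys.stdout.write(indent + "//!" + stripped[1:])
                  else:
                      sys.stdout.write(line)

      if __name__ == "__main__":
          # Hook up in the Doxyfile with: FILTER_PATTERNS = *.m=./mfilter.py
          filter_mfile(sys.argv[1])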

  2. Colonoscopy tutorial software made with a cadaver's sectioned images.

    PubMed

    Chung, Beom Sun; Chung, Min Suk; Park, Hyung Seon; Shin, Byeong-Seok; Kwon, Koojoo

    2016-11-01

    Novice doctors may watch tutorial videos in training for actual or computed tomographic (CT) colonoscopy. The conventional learning videos can be complemented by virtual colonoscopy software made with a cadaver's sectioned images (SIs). The objective of this study was to assist colonoscopy trainees with the new interactive software. Submucosal segmentation on the SIs was carried out through the whole length of the large intestine. With the SIs and segmented images, a three-dimensional model was reconstructed. Six hundred seventy-one proximal colonoscopic views (conventional views) and corresponding distal colonoscopic views (simulating the retroflexion of a colonoscope) were produced. Navigation views showing the current location of the colonoscope tip and its course, as well as supplementary description views, were elaborated. The four corresponding views were put into convenient browsing software that can be downloaded free from the homepage (anatomy.co.kr). The SI colonoscopy software, with its realistic images and supportive tools, is available to anybody. Users can readily notice the position and direction of the virtual colonoscope tip and recognize meaningful structures in the colonoscopic views. The software is expected to be an auxiliary learning tool to improve technique and related knowledge in actual and CT colonoscopies. Hopefully, the software will be updated using raw images from the Visible Korean project. Copyright © 2016 Elsevier GmbH. All rights reserved.

  3. Balancing Science Objectives and Operational Constraints: A Mission Planner's Challenge

    NASA Technical Reports Server (NTRS)

    Weldy, Michelle

    1996-01-01

    The Air Force Miniature Sensor Technology Integration (MSTI-3) satellite's primary mission is to characterize Earth's atmospheric background clutter. MSTI-3 will use three cameras for data collection: a mid-wave infrared imager, a short-wave infrared imager, and a visible imaging spectrometer. Mission science objectives call for the collection of over 2 million images within the one-year mission life. In addition, operational constraints limit camera usage to four operations of twenty minutes per day, with no more than 10,000 data and calibration images collected per day. To balance the operational constraints and science objectives, the mission planning team has designed a planning process to generate event schedules and sensor operation timelines. Each set of constraints, including spacecraft performance capabilities, the camera filters, the geographical regions and spacecraft-Sun-Earth geometries of interest, and remote tracking station deconfliction, has been accounted for in this methodology. To aid in this process, the mission planning team is building a series of tools from commercial off-the-shelf software. These include the mission manifest, which builds a daily schedule of events, and the MSTI Scene Simulator, which helps build geometrically correct scans. These tools provide an efficient, responsive, and highly flexible architecture that maximizes data collection while minimizing mission planning time.

  4. Bending the Rules: Widefield Microscopy and the Abbe Limit of Resolution

    PubMed Central

    Verdaasdonk, Jolien S.; Stephens, Andrew D.; Haase, Julian; Bloom, Kerry

    2014-01-01

    One of the most fundamental concepts of microscopy is that of resolution: the ability to clearly distinguish two objects as separate. Recent advances such as structured illumination microscopy (SIM) and point localization techniques, including photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), strive to overcome the inherent limits of resolution of the modern light microscope. These techniques, however, are not always feasible or optimal for live cell imaging. Thus, in this review, we explore three techniques for extracting high-resolution data from images acquired on a widefield microscope: deconvolution, model convolution, and Gaussian fitting. Deconvolution is a powerful tool for restoring a blurred image using knowledge of the point spread function (PSF) describing the blurring of light by the microscope, although care must be taken to ensure the accuracy of subsequent quantitative analysis. The process of model convolution also requires knowledge of the PSF to blur a simulated image, which can then be compared to the experimentally acquired data to reach conclusions regarding its geometry and fluorophore distribution. Gaussian fitting is the basis for point localization microscopy and can also be applied to tracking spot motion over time or measuring spot shape and size. Altogether, these three methods serve as powerful tools for high-resolution imaging using widefield microscopy. PMID:23893718
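
    Of the three approaches, Gaussian fitting is the simplest to sketch: fit a 2D Gaussian to a fluorescent spot to localize its centre with sub-pixel precision and estimate its size. A minimal example on a synthetic spot, with all parameter values invented for illustration:

      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian2d(coords, amp, x0, y0, sigma, offset):
          """Isotropic 2D Gaussian, flattened for curve_fit."""
          x, y = coords
          g = amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
          return (g + offset).ravel()

      # Synthetic diffraction-limited spot plus noise (stand-in for a crop
      # from a widefield image)
      yy, xx = np.mgrid[0:21, 0:21].astype(float)
      truth = (100.0, 10.3, 9.7, 2.0, 5.0)
      spot = gaussian2d((xx, yy), *truth).reshape(21, 21)
      spot += np.random.default_rng(1).normal(0, 2, spot.shape)

      # Fit, starting from crude estimates of amplitude, centre, width, offset
      p0 = (spot.max() - spot.min(), 10, 10, 2, spot.min())
      popt, _ = curve_fit(gaussian2d, (xx, yy), spot.ravel(), p0=p0)
      print(f"fitted centre: ({popt[1]:.2f}, {popt[2]:.2f}), sigma: {popt[3]:.2f} px")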

  5. Development of a Duplex Ultrasound Simulator and Preliminary Validation of Velocity Measurements in Carotid Artery Models.

    PubMed

    Zierler, R Eugene; Leotta, Daniel F; Sansom, Kurt; Aliseda, Alberto; Anderson, Mark D; Sheehan, Florence H

    2016-07-01

    Duplex ultrasound scanning with B-mode imaging and both color Doppler and Doppler spectral waveforms is relied upon for diagnosis of vascular pathology and selection of patients for further evaluation and treatment. In most duplex ultrasound applications, classification of disease severity is based primarily on alterations in blood flow velocities, particularly the peak systolic velocity (PSV) obtained from Doppler spectral waveforms. We developed a duplex ultrasound simulator for training and assessment of scanning skills. Duplex ultrasound cases were prepared from 2-dimensional (2D) images of normal and stenotic carotid arteries by reconstructing the common carotid, internal carotid, and external carotid arteries in 3 dimensions and computationally simulating blood flow velocity fields within the lumen. The simulator displays a 2D B-mode image corresponding to transducer position on a mannequin, overlaid by color coding of velocity data. A spectral waveform is generated according to examiner-defined settings (depth and size of the Doppler sample volume, beam steering, Doppler beam angle, and pulse repetition frequency or scale). The accuracy of the simulator was assessed by comparing the PSV measured from the spectral waveforms with the true PSV which was derived from the computational flow model based on the size and location of the sample volume within the artery. Three expert examiners made a total of 36 carotid artery PSV measurements based on the simulated cases. The PSV measured by the examiners deviated from true PSV by 8% ± 5% (N = 36). The deviation in PSV did not differ significantly between artery segments, normal and stenotic arteries, or examiners. To our knowledge, this is the first simulation of duplex ultrasound that can create and display real-time color Doppler images and Doppler spectral waveforms. The results demonstrate that an examiner can measure PSV from the spectral waveforms using the settings on the simulator with a mean absolute error in the velocity measurement of less than 10%. With the addition of cases with a range of pathologies, this duplex ultrasound simulator will be a useful tool for training health-care providers in vascular ultrasound applications and for assessing their skills in an objective and quantitative manner. © The Author(s) 2016.

  6. Quantitative assessment of image motion blur in diffraction images of moving biological cells

    NASA Astrophysics Data System (ADS)

    Wang, He; Jin, Changrong; Feng, Yuanming; Qi, Dandan; Sa, Yu; Hu, Xin-Hua

    2016-02-01

    Motion blur (MB) presents a significant challenge for obtaining high-contrast image data from biological cells with the polarization diffraction imaging flow cytometry (p-DIFC) method. A new p-DIFC experimental system has been developed to evaluate MB and its effect on image analysis using a time-delay-integration (TDI) CCD camera. Diffraction images of MCF-7 and K562 cells have been acquired with different speed-mismatch ratios and compared to characterize MB quantitatively. Frequency analysis of the diffraction images shows that the degree of MB can be quantified by bandwidth variations of the diffraction images along the motion direction. The analytical results were confirmed by the p-DIFC image data acquired at different speed-mismatch ratios and used to validate a method for numerical simulation of MB on blur-free diffraction images, which provides a useful tool to examine the blurring effect on diffraction images acquired from the same cell. These results provide insights into the dependence of diffraction images on MB and allow significant improvement of rapid biological cell assays with the p-DIFC method.

  7. SU-E-I-80: Quantification of Respiratory and Cardiac Motion Effect in SPECT Acquisitions Using Anthropomorphic Models: A Monte Carlo Simulation Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papadimitroulas, P; Kostou, T; Kagadis, G

    Purpose: The purpose of the present study was to quantify and evaluate the impact of cardiac and respiratory motion on clinical nuclear imaging protocols. Common SPECT and scintigraphic scans are studied using Monte Carlo (MC) simulations, comparing the resulting images with and without motion. Methods: Realistic simulations were executed using the GATE toolkit and the XCAT anthropomorphic phantom as a reference model for human anatomy. Three different radiopharmaceuticals based on 99mTc were studied, namely 99mTc-MDP, 99mTc-N-DBODC and 99mTc-DTPA-aerosol for bone, myocardium and lung scanning, respectively. The resolution of the phantom was set to 3.5 mm³. The impact of the motion on spatial resolution was quantified using a sphere with 3.5 mm diameter and 10 separate time frames, in the ECAM modeled SPECT scanner. Finally, the impact of respiratory motion on resolution and imaging of lung lesions was investigated. The MLEM algorithm was used for data reconstruction, while literature-derived biodistributions of the pharmaceuticals were used as activity maps in the simulations. Results: FWHM was extracted for a static and a moving sphere ~23 cm away from the entrance of the SPECT head. The difference in FWHM was 20% between the two simulations. Profiles in the thorax were compared in the case of bone scintigraphy, showing displacement and blurring of the bones when respiratory motion was included in the simulation. Large discrepancies were noticed in the case of myocardium imaging when cardiac motion was incorporated during the SPECT acquisition. Finally, the borders of the lungs are blurred when respiratory motion is included, resulting in a dislocation of ~2.5 cm. Conclusion: As we move toward individualized imaging and therapy procedures, quantitative and qualitative imaging is of high importance in nuclear diagnosis. MC simulations combined with anthropomorphic digital phantoms can provide an accurate tool for applications like the optimization of motion correction techniques. This research has been co-funded by the European Union (European Social Fund) and Greek national resources under the framework of the 'Archimedes III: Funding of Research Groups in TEI of Athens' project of the 'Education & Lifelong Learning' Operational Programme.

  8. PET/CT detectability and classification of simulated pulmonary lesions using an SUV correction scheme

    NASA Astrophysics Data System (ADS)

    Morrow, Andrew N.; Matthews, Kenneth L., II; Bujenovic, Steven

    2008-03-01

    Positron emission tomography (PET) and computed tomography (CT) together are a powerful diagnostic tool, but imperfect image quality allows false positive and false negative diagnoses to be made by any observer despite experience and training. This work investigates the effects of PET acquisition mode, reconstruction method, and a standard uptake value (SUV) correction scheme on the classification of lesions as benign or malignant in PET/CT images of an anthropomorphic phantom. The scheme accounts for the partial volume effect (PVE) and PET resolution. The observer draws a region of interest (ROI) around the lesion using the CT dataset. A simulated homogeneous PET lesion of the same shape as the drawn ROI is blurred with the point spread function (PSF) of the PET scanner to estimate the PVE, providing a scaling factor to produce a corrected SUV. Computer simulations showed that the accuracy of the corrected PET values depends on variations in the CT-drawn boundary and the position of the lesion with respect to the PET image matrix, especially for smaller lesions. Correction accuracy was affected slightly by mismatch between the simulation PSF and the actual scanner PSF. The receiver operating characteristic (ROC) study resulted in several observations. Using observer-drawn ROIs, scaled tumor-background ratios (TBRs) represented actual TBRs more accurately than unscaled TBRs. For the PET images, 3D OSEM outperformed 2D OSEM, 3D OSEM outperformed 3D FBP, and 2D OSEM outperformed 2D FBP. The correction scheme significantly increased sensitivity and slightly increased accuracy for all acquisition and reconstruction modes at the cost of a small decrease in specificity.
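
    The correction scheme described above reduces to a recovery coefficient: blur a unit-uptake lesion of the ROI's shape with the scanner PSF and see what fraction of the signal remains inside the ROI. A sketch with an assumed Gaussian PSF and an invented circular ROI (the paper's actual PSF and geometry are not reproduced here):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      # Hypothetical circular lesion ROI drawn on CT (binary mask)
      yy, xx = np.mgrid[0:65, 0:65].astype(float)
      roi = ((xx - 32) ** 2 + (yy - 32) ** 2 <= 8 ** 2).astype(float)

      # Blur a unit-uptake lesion of the ROI's shape with the scanner PSF,
      # approximated here as a Gaussian (sigma in pixels is an assumed value)
      blurred = gaussian_filter(roi, sigma=3.0)

      # Recovery coefficient: fraction of the true uptake seen in the ROI
      # mean; its inverse scales the measured SUV
      recovery = blurred[roi > 0].mean()
      measured_suv = 4.0   # made-up measured value
      print(f"recovery = {recovery:.2f}, corrected SUV = {measured_suv / recovery:.2f}")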

  9. Lithographic image simulation for the 21st century with 19th-century tools

    NASA Astrophysics Data System (ADS)

    Gordon, Ronald L.; Rosenbluth, Alan E.

    2004-01-01

    Simulation of lithographic processes in semiconductor manufacturing has gone from a crude learning tool 20 years ago to a critical part of yield enhancement strategy today. Although many disparate models, championed by equally disparate communities, exist to describe various photoresist development phenomena, these communities would all agree that the one piece of the simulation picture that can, and must, be computed accurately is the image intensity in the photoresist. The imaging of a photomask onto a thin-film stack is one of the only phenomena in the lithographic process that is described fully by well-known, definitive physical laws. Although many approximations are made in the derivation of the Fourier transform relations between the mask object, the pupil, and the image, these and their impacts are well understood and need little further investigation. The imaging process in optical lithography is modeled as a partially coherent, Köhler illumination system. As Hopkins has shown, the computation can be separated into two pieces: one that takes information about the illumination source, the projection lens pupil, the resist stack, and the mask size or pitch, and another that needs only the details of the mask structure. As the latter piece of the calculation can be expressed as a fast Fourier transform, it is the first piece that dominates. This piece involves computation of a potentially large number of quantities called transmission cross-coefficients (TCCs), which are correlations of the pupil function weighted with the illumination intensity distribution. The advantage of performing the image calculations this way is that the computation of these TCCs represents an up-front cost, not to be repeated if one is only interested in changing the mask features, which is the case in Model-Based Optical Proximity Correction (MBOPC). The downside, however, is that the number of these expensive double integrals increases as the square of the mask unit cell area; this number can cause even the fastest computers to balk if one needs to study medium- or long-range effects. One can reduce this computational burden by approximating with a smaller area, but accuracy is usually a concern, especially when building a model that will purportedly represent a manufacturing process. This work reviews the current methodologies used to simulate the intensity distribution in air above the resist and addresses the above problems. More to the point, a methodology has been developed to eliminate the expensive numerical integrations in the TCC calculations, as the resulting integrals in many cases of interest can be either evaluated analytically or replaced by analytical functions accurate to within machine precision. With the burden of computing these numbers lightened, more accurate representations of the image field can be realized, and better overall models are then possible.
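
    In the Hopkins formulation referenced above, each TCC is the source-weighted overlap of two shifted copies of the pupil, TCC(f1, f2) = sum_f J(f) P(f + f1) conj(P(f + f2)), with J the source intensity distribution and P the pupil function. A small 1D numerical illustration of the brute-force (pre-analytical) computation; the grids, the top-hat source, and the ideal unaberrated pupil are idealized stand-ins, not the paper's configuration:

      import numpy as np

      f = np.linspace(-2.0, 2.0, 401)           # normalized spatial frequency grid
      sigma = 0.5                                # partial coherence factor (assumed)
      J = (np.abs(f) <= sigma).astype(float)     # top-hat conventional source
      J /= J.sum()                               # normalize the source weight

      def pupil(freq):
          """Ideal unaberrated pupil: unit transmission inside the NA cutoff."""
          return (np.abs(freq) <= 1.0).astype(complex)

      def tcc(f1, f2):
          """One transmission cross-coefficient by direct summation."""
          return np.sum(J * pupil(f + f1) * np.conj(pupil(f + f2)))

      # TCC matrix over the diffraction orders of interest; cost grows as the
      # square of the number of orders, which is what the paper attacks
      orders = np.linspace(-1.5, 1.5, 61)
      T = np.array([[tcc(a, b) for b in orders] for a in orders])
      print(f"TCC matrix shape: {T.shape}, peak value: {T.real.max():.3f}")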

  10. Computational Simulation of Breast Compression Based on Segmented Breast and Fibroglandular Tissues on Magnetic Resonance Images

    PubMed Central

    Shih, Tzu-Ching; Chen, Jeon-Hor; Liu, Dongxu; Nie, Ke; Sun, Lizhi; Lin, Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying

    2010-01-01

    This study presents a finite element based computational model to simulate the three-dimensional deformation of the breast and the fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and the craniocaudal and mediolateral oblique compression as used in mammography was applied. The geometry of whole breast and the segmented fibroglandular tissues within the breast were reconstructed using triangular meshes by using the Avizo® 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the non-linear elastic tissue deformation under compression, using the MSC.Marc® software package. The model was tested in 4 cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these 4 cases at 60% compression ratio was in the range of 5-7 cm, which is the typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at 60% compression ratio was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on MRI, which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to the clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density measurements needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities – such as MRI, mammography, whole breast ultrasound, and molecular imaging – that are performed using different body positions and different compression conditions. PMID:20601773

  11. A study and simulation of the impact of high-order aberrations to overlay error distribution

    NASA Astrophysics Data System (ADS)

    Sun, G.; Wang, F.; Zhou, C.

    2011-03-01

    With the reduction of design rules, a number of corresponding new technologies, such as i-HOPC, HOWA and DBO, have been proposed and applied to eliminate overlay error. When these technologies are in use, any high-order error distribution needs to be clearly distinguished in order to remove the underlying causes. Lens aberrations are normally thought to mainly impact the Matching Machine Overlay (MMO). However, when using Image-Based Overlay (IBO) measurement tools, aberrations become the dominant influence on Single Machine Overlay (SMO) and even on stage repeatability performance. In this paper, several measurements of the error distributions of the lens of the SMEE SSB600/10 prototype exposure tool are presented. Models that characterize the primary influence from lens magnification, high-order distortion, coma aberration and telecentricity are shown. The contribution to stage repeatability (as measured with IBO tools) from the above errors was predicted with a simulator and compared to experiments. Finally, the drift of each lens distortion term that impacts SMO was monitored over several days and matched with the measurement results.

  12. Experimental application of simulation tools for evaluating UAV video change detection

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Bartelsen, Jan

    2015-10-01

    Change detection is one of the most important tasks when unmanned aerial vehicles (UAV) are used for video reconnaissance and surveillance. In this paper, we address changes on a short time scale, i.e., the observations are taken within time distances of a few hours. Each observation is a short video sequence corresponding to the near-nadir overflight of the UAV above the area of interest, and the relevant changes are, e.g., recently added or removed objects. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are changeable objects like trees and compression or transmission artifacts. To enable the usage of automatic change detection within an interactive workflow of a UAV video exploitation system, an evaluation and assessment procedure has to be performed. Large video data sets which contain many relevant objects with varying scene background and altering influence parameters (e.g., image quality, sensor and flight parameters), including image metadata and ground truth data, are necessary for a comprehensive evaluation. Since the acquisition of real video data is limited by cost and time constraints, from our point of view the generation of synthetic data by simulation tools has to be considered. In this paper the processing chain of Saur et al. (2014) [1] and the interactive workflow for video change detection are described. We have selected the commercial simulation environment Virtual Battle Space 3 (VBS3) to generate synthetic data. For an experimental setup, an example scenario, "road monitoring", has been defined, and several video clips have been produced with varying flight and sensor parameters and varying objects in the scene. Image registration and change mask extraction, both components of the processing chain, are applied to corresponding frames of different video clips. For the selected examples, the images could be registered, the modelled changes could be extracted, and the artifacts of the image rendering considered as noise (slight differences of heading angles, disparity of vegetation, 3D parallax) could be suppressed. We conclude that these image data can be considered realistic enough to serve as evaluation data for the selected processing components. Future work will extend the evaluation to other influence parameters and may include the human operator for mission planning and sensor control.

  13. Quality Improvement With Discrete Event Simulation: A Primer for Radiologists.

    PubMed

    Booker, Michael T; O'Connell, Ryan J; Desai, Bhushan; Duddalwar, Vinay A

    2016-04-01

    The application of simulation software in health care has transformed quality and process improvement. Specifically, software based on discrete-event simulation (DES) has shown the ability to improve radiology workflows and systems. Nevertheless, despite the successful application of DES in the medical literature, the power and value of simulation remains underutilized. For this reason, the basics of DES modeling are introduced, with specific attention to medical imaging. In an effort to provide readers with the tools necessary to begin their own DES analyses, the practical steps of choosing a software package and building a basic radiology model are discussed. In addition, three radiology system examples are presented, with accompanying DES models that assist in analysis and decision making. Through these simulations, we provide readers with an understanding of the theory, requirements, and benefits of implementing DES in their own radiology practices. Copyright © 2016 American College of Radiology. All rights reserved.
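
    As a flavor of what a minimal DES model looks like, the sketch below simulates a single CT scanner serving a Poisson stream of patients using the open-source Python package SimPy (one package choice among several; all arrival and scan rates are invented, not taken from the article):

      import random
      import simpy

      random.seed(0)

      def patient(env, scanner, waits):
          """One patient: queue for the scanner, then be scanned."""
          arrival = env.now
          with scanner.request() as req:
              yield req                                       # wait for a free scanner
              waits.append(env.now - arrival)
              yield env.timeout(random.expovariate(1 / 15))   # ~15 min scan

      def arrivals(env, scanner, waits):
          """Poisson arrivals, about one patient every 20 minutes."""
          while True:
              yield env.timeout(random.expovariate(1 / 20))
              env.process(patient(env, scanner, waits))

      env = simpy.Environment()
      scanner = simpy.Resource(env, capacity=1)
      waits = []
      env.process(arrivals(env, scanner, waits))
      env.run(until=8 * 60)                                   # one 8-hour day (minutes)
      print(f"{len(waits)} patients scanned, mean wait {sum(waits)/len(waits):.1f} min")

    Varying the capacity or the rates and re-running is exactly the kind of what-if analysis the article advocates for radiology workflows.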

  14. Toward high-speed 3D nonlinear soft tissue deformation simulations using Abaqus software.

    PubMed

    Idkaidek, Ashraf; Jasiuk, Iwona

    2015-12-01

    We aim to achieve a fast and accurate three-dimensional (3D) simulation of a porcine liver deformation under a surgical tool pressure using the commercial finite element software Abaqus. The liver geometry is obtained using magnetic resonance imaging, and a nonlinear constitutive law is employed to capture large deformations of the tissue. Effects of implicit versus explicit analysis schemes, element type, and mesh density on computation time are studied. We find that Abaqus explicit and implicit solvers are capable of simulating nonlinear soft tissue deformations accurately using first-order tetrahedral elements in a relatively short time by optimizing the element size. This study provides new insights and guidance on accurate and relatively fast nonlinear soft tissue simulations. Such simulations can provide force feedback during robotic surgery and allow visualization of tissue deformations for surgery planning and training of surgical residents.

  15. Transport and installation of the Dark Energy Survey CCD imager

    NASA Astrophysics Data System (ADS)

    Derylo, Greg; Chi, Edward; Diehl, H. Thomas; Estrada, Juan; Flaugher, Brenna; Schultz, Ken

    2012-09-01

    The Dark Energy Survey CCD imager was constructed at the Fermi National Accelerator Laboratory and delivered to the Cerro Tololo Inter-American Observatory in Chile for installation onto the Blanco 4m telescope. Several efforts are described relating to preparation of the instrument for transport, development and testing of a shipping crate designed to minimize transportation loads transmitted to the camera, and inspection of the imager upon arrival at the observatory. Transportation loads were monitored and are described. For installation of the imager at the telescope prime focus, where it mates with its previously-installed optical corrector, specialized tooling was developed to safely lift, support, and position the vessel. The installation and removal processes were tested on the Telescope Simulator mockup at FNAL, thus minimizing technical and schedule risk for the work performed at CTIO. Final installation of the imager is scheduled for August 2012.

  16. Quantitative assessment of Cerenkov luminescence for radioguided brain tumor resection surgery

    NASA Astrophysics Data System (ADS)

    Klein, Justin S.; Mitchell, Gregory S.; Cherry, Simon R.

    2017-05-01

    Cerenkov luminescence imaging (CLI) is a developing imaging modality that detects radiolabeled molecules via the visible light emitted during the radioactive decay process. We used a Monte Carlo based computer simulation to quantitatively investigate CLI, compared to direct detection of the ionizing radiation itself, as an intraoperative imaging tool for assessment of brain tumor margins. Our brain tumor model consisted of a 1 mm spherical tumor remnant embedded up to 5 mm in depth below the surface of normal brain tissue. Tumor-to-background contrasts ranging from 2:1 to 10:1 were considered. We quantified all decay signals (e±, gamma photons, Cerenkov photons) reaching the brain volume surface. CLI proved to be the most sensitive method for detecting the tumor volume in both imaging and non-imaging strategies, as assessed by contrast-to-noise ratio and by the receiver operating characteristic output of a channelized Hotelling observer.

  17. Comparison of Monte Carlo simulated and measured performance parameters of miniPET scanner

    NASA Astrophysics Data System (ADS)

    Kis, S. A.; Emri, M.; Opposits, G.; Bükki, T.; Valastyán, I.; Hegyesi, Gy.; Imrek, J.; Kalinka, G.; Molnár, J.; Novák, D.; Végh, J.; Kerek, A.; Trón, L.; Balkay, L.

    2007-02-01

    In vivo imaging of small laboratory animals is a valuable tool in the development of new drugs. For this purpose, miniPET, an easily scalable, modular small-animal PET camera, has been developed at our institutes. The system has four modules, which makes it possible to rotate the whole detector system around the axis of the field of view. Data collection and image reconstruction are performed using a data acquisition (DAQ) module with an Ethernet communication facility and a computer cluster of commercial PCs. Performance tests were carried out to determine system parameters such as energy resolution, sensitivity and noise equivalent count rate. A modified GEANT4-based GATE Monte Carlo software package was used to simulate PET data analogous to those of the performance measurements. GATE was run on a Linux cluster of 10 processors (64-bit Xeon, 3.0 GHz) and controlled by a SUN grid engine. The application of this special computer cluster reduced the time necessary for the simulations by an order of magnitude. The simulated energy spectra, maximum rate of true coincidences and sensitivity of the camera were in good agreement with the measured parameters.

  18. Two worlds collide: Image analysis methods for quantifying structural variation in cluster molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steenbergen, K. G., E-mail: kgsteen@gmail.com; Gaston, N.

    2014-02-14

    Inspired by methods of remote sensing image analysis, we analyze structural variation in cluster molecular dynamics (MD) simulations through a unique application of the principal component analysis (PCA) and Pearson Correlation Coefficient (PCC). The PCA analysis characterizes the geometric shape of the cluster structure at each time step, yielding a detailed and quantitative measure of structural stability and variation at finite temperature. Our PCC analysis captures bond structure variation in MD, which can be used to both supplement the PCA analysis as well as compare bond patterns between different cluster sizes. Relying only on atomic position data, without requirement for a priori structural input, PCA and PCC can be used to analyze both classical and ab initio MD simulations for any cluster composition or electronic configuration. Taken together, these statistical tools represent powerful new techniques for quantitative structural characterization and isomer identification in cluster MD.

  19. Emergency Management Computer-Aided Trainer (EMCAT)

    NASA Technical Reports Server (NTRS)

    Rodriguez, R. C.; Johnson, R. P.

    1986-01-01

    The Emergency Management Computer-Aided Trainer (EMCAT), developed by Essex Corporation for NASA and the Federal Emergency Management Agency's (FEMA) National Fire Academy (NFA), is described. It is a computer-based training system for fire fighting personnel. A prototype EMCAT system was developed by NASA, first using video tape images and then video disk images when the technology became available. The EMCAT system is meant to fill the training needs of the fire fighting community with affordable state-of-the-art technologies. An automated real-time simulation of the fire situation was needed to replace the outdated manual training methods currently in use. In order to be successful, this simulator had to provide realism, be user friendly, be affordable, and support multiple scenarios. The EMCAT system meets these requirements and therefore represents an innovative training tool, not only for the fire fighting community, but also for the needs of other disciplines.

  20. Hybrid photonic-plasmonic near-field probe for efficient light conversion into the nanoscale hot spot.

    PubMed

    Koshelev, Alexander; Munechika, Keiko; Cabrini, Stefano

    2017-11-01

    In this Letter, we present the design and simulations of a novel hybrid photonic-plasmonic near-field probe. Near-field optics is a unique imaging tool that provides optical images with resolution down to tens of nanometers. One of the main limitations of this technology is its low light sensitivity. The presented hybrid probe solves this problem by combining a campanile plasmonic probe with a photonic layer consisting of a diffractive optical element (DOE). The DOE is designed to match the plasmonic field at the broad side of the campanile probe with the fiber mode. This makes it possible to optimize the size of the campanile tip to convert light efficiently into the hot spot. The simulations show that the hybrid probe is on average ~540 times more efficient than the conventional campanile probe over the 600-900 nm spectral range.

  1. Two worlds collide: image analysis methods for quantifying structural variation in cluster molecular dynamics.

    PubMed

    Steenbergen, K G; Gaston, N

    2014-02-14

    Inspired by methods of remote sensing image analysis, we analyze structural variation in cluster molecular dynamics (MD) simulations through a unique application of the principal component analysis (PCA) and Pearson Correlation Coefficient (PCC). The PCA analysis characterizes the geometric shape of the cluster structure at each time step, yielding a detailed and quantitative measure of structural stability and variation at finite temperature. Our PCC analysis captures bond structure variation in MD, which can be used to both supplement the PCA analysis as well as compare bond patterns between different cluster sizes. Relying only on atomic position data, without requirement for a priori structural input, PCA and PCC can be used to analyze both classical and ab initio MD simulations for any cluster composition or electronic configuration. Taken together, these statistical tools represent powerful new techniques for quantitative structural characterization and isomer identification in cluster MD.

  2. Scaling Analysis of Ocean Surface Turbulent Heterogeneities from Satellite Remote Sensing: Use of 2D Structure Functions.

    PubMed

    Renosh, P R; Schmitt, Francois G; Loisel, Hubert

    2015-01-01

    Satellite remote sensing observations allow the ocean surface to be sampled synoptically over large spatio-temporal scales. The images provided by visible and thermal infrared satellite observations are widely used in physical, biological, and ecological oceanography. The present work proposes a method to understand the multi-scaling properties of satellite products such as Chlorophyll-a (Chl-a) and Sea Surface Temperature (SST), which are rarely studied in this way. The specific objective of this study is to show how the small-scale heterogeneities of satellite images can be characterised using tools borrowed from the field of turbulence. For that purpose, we show how the structure function, which is classically used in the frame of scaling time-series analysis, can also be used in 2D. The main advantage of this method is that it can be applied to images which have missing data. Based on both simulated and real images, we demonstrate that coarse-graining (CG) of a gradient modulus transform of the original image does not provide correct scaling exponents. We show, using a fractional Brownian simulation in 2D, that the structure function (SF) can be used with randomly sampled pairs of points, and verify that one million point pairs provide enough statistics.
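
    The random-pair estimator mentioned above is simple to implement: draw point pairs, discard any pair that touches a missing-data pixel, and bin the field increments by separation. A sketch of that idea; the bin edges, the order q, and the pair count are arbitrary choices here, not the paper's settings:

      import numpy as np

      def structure_function(field, q=2.0, n_pairs=1_000_000, rng=None):
          """Isotropic 2D structure function S_q(r) = <|f(x+r) - f(x)|^q>,
          estimated from randomly sampled point pairs so that pixels with
          missing data (NaNs) can simply be skipped."""
          rng = rng or np.random.default_rng(0)
          ny, nx = field.shape
          p1 = np.column_stack([rng.integers(0, ny, n_pairs),
                                rng.integers(0, nx, n_pairs)])
          p2 = np.column_stack([rng.integers(0, ny, n_pairs),
                                rng.integers(0, nx, n_pairs)])
          v1 = field[p1[:, 0], p1[:, 1]]
          v2 = field[p2[:, 0], p2[:, 1]]
          ok = ~(np.isnan(v1) | np.isnan(v2))       # drop pairs touching gaps
          r = np.hypot(*(p1[ok] - p2[ok]).T)
          incr = np.abs(v1[ok] - v2[ok]) ** q
          bins = np.logspace(0, np.log10(min(nx, ny)), 20)   # log-spaced r bins
          idx = np.digitize(r, bins)
          sq = np.array([incr[idx == i].mean() if np.any(idx == i) else np.nan
                         for i in range(1, len(bins))])
          return bins[:-1], sq

      # Toy usage on a random field with a masked (missing-data) patch
      f = np.random.default_rng(1).random((256, 256))
      f[100:120, 50:90] = np.nan
      r, sq = structure_function(f, q=2.0, n_pairs=200_000)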

  3. Digital simulation of staining in histopathology multispectral images: enhancement and linear transformation of spectral transmittance.

    PubMed

    Bautista, Pinky A; Yagi, Yukako

    2012-05-01

    Hematoxylin and eosin (H&E) stain is currently the most popular stain for routine histopathology. Special and/or immunohistochemical (IHC) staining is often requested to further corroborate the initial diagnosis made on H&E-stained tissue sections. Digital simulation of staining (or digital staining) can be a very valuable tool to produce the desired stained images from H&E-stained tissue sections instantaneously. We present an approach to digital staining of histopathology multispectral images that combines the effects of spectral enhancement and spectral transformation. Spectral enhancement is accomplished by shifting the N-band original spectrum of the multispectral pixel by the weighted difference between the pixel's original and estimated spectrum; the spectrum is estimated using M < N principal component (PC) vectors. The pixel's enhanced spectrum is transformed to the spectral configuration associated with its reaction to a specific stain by utilizing an N × N transformation matrix, which is derived by applying a least mean squares method to the enhanced and target spectral transmittance samples of the different tissue components found in the image. Results of our experiments on the digital conversion of an H&E-stained multispectral image to its Masson's trichrome-stained equivalent show the viability of the method.
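
    Both steps are linear-algebra one-liners once training spectra are in hand. The sketch below mirrors the enhancement (PC-based estimate plus weighted residual) and the least-squares derivation of the N × N transform; all dimensions, the weight w, and the spectra are invented placeholders:

      import numpy as np

      rng = np.random.default_rng(0)
      N, M, w = 16, 6, 1.5   # bands, PCs, enhancement weight (all assumed)

      # Training transmittance spectra of tissue components (placeholder data)
      train = rng.random((500, N))

      # PCA basis from the training set
      mean = train.mean(axis=0)
      _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
      pcs = Vt[:M]                                   # M principal component vectors

      def enhance(spectrum):
          """Shift the spectrum by the weighted residual of its M-PC estimate."""
          estimate = mean + pcs.T @ (pcs @ (spectrum - mean))
          return spectrum + w * (spectrum - estimate)

      # N x N transform by least squares between enhanced source spectra and
      # target-stain spectra of the same pixels (targets are placeholders)
      enhanced = np.apply_along_axis(enhance, 1, train)
      target = rng.random((500, N))
      T, *_ = np.linalg.lstsq(enhanced, target, rcond=None)

      digitally_stained = enhance(train[0]) @ T      # one pixel's converted spectrum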

  4. Photomask quality evaluation using lithography simulation and multi-detector MVM-SEM

    NASA Astrophysics Data System (ADS)

    Ito, Keisuke; Murakawa, Tsutomu; Fukuda, Naoki; Shida, Soichi; Iwai, Toshimichi; Matsumoto, Jun; Nakamura, Takayuki; Matsushita, Shohei; Hagiwara, Kazuyuki; Hara, Daisuke

    2013-06-01

    The detection and management of mask defects that are transferred onto the wafer become more important day by day, as photomask patterns become smaller and more complicated through the use of Inverse Lithography Technology (ILT) and Source Mask Optimization (SMO) with Optical Proximity Correction (OPC). To evaluate photomask quality, the current method uses aerial imaging by optical inspection tools. At the 1X-nm node this technique reaches a resolution limit, because small defects become difficult to detect. We already reported the MEEF influence of a high-end photomask using wide-FOV SEM contour data from the "E3630 MVM-SEM®" and the lithography simulator "TrueMask® DS" of D2S Inc. in a prior paper [1]. In this paper we evaluate the correlation between our evaluation method and optical inspection tools as an ongoing assessment. Also, in order to reduce the defect classification work, we can compose three-dimensional (3D) information about defects and judge whether repair of a defect would be required. Moreover, we confirm the feasibility of wafer-plane CD measurement based on the combination of the E3630 MVM-SEM® and 3D lithography simulation.

  5. Characterization of tissue-simulating phantom materials for ultrasound-guided needle procedures

    NASA Astrophysics Data System (ADS)

    Buchanan, Susan; Moore, John; Lammers, Deanna; Baxter, John; Peters, Terry

    2012-02-01

    Needle biopsies are standard protocols that are commonly performed under ultrasound (US) or computed tomography (CT) guidance [1]. Vascular access procedures such as central line insertions, and many spinal needle therapies, also rely on US guidance. Phantoms for these procedures are crucial both as training tools for clinicians and as research tools for developing new guidance systems. Realistic imaging properties and material longevity are critical qualities for needle guidance phantoms. However, current commercially available phantoms for use with US guidance have many limitations, the most detrimental of which are harsh needle tracks obfuscating US images and the lack of a membrane comparable to human skin that prevents seepage of the inner media. To overcome these difficulties, we tested a variety of readily available media and membranes to determine the optimal materials for our current needs. It was concluded that liquid hand soap was the best medium, as it instantly left no needle tracks, had an acceptable depth of US penetration, and portrayed realistic imaging conditions; the optimal membrane, because of its low leakage, low cost, acceptable durability and transparency, was 10-gauge vinyl.

  6. WFIRST Science Operations at STScI

    NASA Astrophysics Data System (ADS)

    Gilbert, Karoline; STScI WFIRST Team

    2018-06-01

    With sensitivity and resolution comparable to the Hubble Space Telescope, and a field of view 100 times larger, the Wide Field Instrument (WFI) on WFIRST will be a powerful survey instrument. STScI will be the Science Operations Center (SOC) for the WFIRST mission, with additional science support provided by the Infrared Processing and Analysis Center (IPAC) and foreign partners. STScI will schedule and archive all WFIRST observations, calibrate and produce pipeline-reduced data products for imaging with the Wide Field Instrument, support the High Latitude Imaging and Supernova Survey Teams, and support the astronomical community in planning WFI imaging observations and analyzing the data. STScI has developed detailed concepts for WFIRST operations, including a data management system integrating data processing and the archive, which will include a novel, cloud-based framework for high-level data processing, providing a common environment accessible to all users (STScI operations, Survey Teams, General Observers, and archival investigators). To aid the astronomical community in examining the capabilities of WFIRST, STScI has built several simulation tools. We describe the functionality of each tool and give examples of its use.

  7. Characterizing relationship between optical microangiography signals and capillary flow using microfluidic channels.

    PubMed

    Choi, Woo June; Qin, Wan; Chen, Chieh-Li; Wang, Jingang; Zhang, Qinqin; Yang, Xiaoqi; Gao, Bruce Z; Wang, Ruikang K

    2016-07-01

    Optical microangiography (OMAG) is a powerful optical angiographic tool to visualize micro-vascular flow in vivo. Despite numerous demonstrations over the past several years of the qualitative relationship between OMAG and flow, no convincing quantitative relationship has been proven. In this paper, we attempt to quantitatively correlate the OMAG signal with flow. Specifically, we develop a simplified analytical model of the complex OMAG, suggesting that the OMAG signal is a product of the number of particles in an imaging voxel and the decorrelation of the OCT (optical coherence tomography) signal, determined by flow velocity, inter-frame time interval, and wavelength of the light source. Numerical simulation with the proposed model reveals that if the OCT amplitudes are correlated, the OMAG signal is related to the total number of particles crossing the imaging voxel cross-section per unit time (flux); otherwise it saturates, with a strength proportional to the number of particles in the imaging voxel (concentration). The relationship is validated using microfluidic flow phantoms with various preset flow metrics. This work suggests OMAG is a promising quantitative tool for the assessment of vascular flow.

  8. Simulation of the Simbol-X Telescope

    NASA Astrophysics Data System (ADS)

    Chauvin, M.; Roques, J. P.

    2009-05-01

    We have developed a simulation tool for a Wolter I telescope operating in formation flight. The aim is to understand and predict the behavior of the Simbol-X instrument. As the geometry is variable, formation flight introduces new challenges and complex implications. Our code, based on Monte Carlo ray tracing, computes the full photon trajectories up to the detector plane, along with the relative drifts of the two spacecraft. It takes into account angle- and energy-dependent interactions of the photons with the mirrors and applies to any grazing-incidence telescope. The resulting images of simulated sources from 0.1 keV to 100 keV allow us to optimize the configuration of the instrument and to assess the performance of the Simbol-X telescope.

  9. Immersive virtual reality as a teaching tool for neuroanatomy.

    PubMed

    Stepan, Katelyn; Zeiger, Joshua; Hanchuk, Stephanie; Del Signore, Anthony; Shrivastava, Raj; Govindaraj, Satish; Iloreta, Alfred

    2017-10-01

    Three-dimensional (3D) computer modeling and interactive virtual reality (VR) simulation are validated teaching techniques used throughout the medical disciplines, but little objective data exist supporting their use in teaching clinical anatomy, and learner motivation is thought to limit the uptake of such novel technologies. The purpose of this study was to evaluate the effectiveness, satisfaction, and motivation associated with immersive VR simulation in teaching medical students neuroanatomy. Images of normal cerebral anatomy were reconstructed from human Digital Imaging and Communications in Medicine (DICOM) computed tomography (CT) and magnetic resonance imaging (MRI) data into 3D VR formats compatible with the Oculus Rift VR System, a head-mounted display with tracking capabilities allowing for an immersive VR experience. The ventricular system and cerebral vasculature were highlighted and labeled to create a focused interactive model. We conducted a randomized controlled study with 66 medical students (33 in each of the control and experimental groups), who studied the pertinent neuroanatomical structures using online textbooks or the VR interactive model, respectively. We then evaluated the students' anatomy knowledge, educational experience, and motivation (using the Instructional Materials Motivation Survey [IMMS], a previously validated assessment). There was no significant difference in anatomy knowledge between the two groups on preintervention, postintervention, or retention quizzes. The VR group found the learning experience to be significantly more engaging, enjoyable, and useful (all p < 0.01) and scored significantly higher on the motivation assessment (p < 0.01). Immersive VR educational tools afforded a more positive learner experience and enhanced student motivation; however, the technology was as effective as traditional textbooks in teaching neuroanatomy. © 2017 ARS-AAOA, LLC.

  10. Implementation of the Pan-STARRS Image Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Fang, Julia; Aspin, C.

    2007-12-01

    Pan-STARRS, or the Panoramic Survey Telescope and Rapid Response System, is a wide-field imaging facility that combines small mirrors with gigapixel cameras. It surveys the entire available sky several times a month, which ultimately requires large amounts of data to be processed and stored promptly. Accordingly, the Image Processing Pipeline (IPP) is the collection of software tools responsible for the primary image analysis for Pan-STARRS. It includes data registration; basic image analysis, such as obtaining master images and detrending the exposures; mosaic calibration when applicable; and, lastly, image summing and differencing. In this paper I present my work on installing IPP 2.1 and 2.2 on a Linux machine, running the Simtest (simulated data used to verify the installation), and finally applying the IPP to two different sets of UH 2.2m Tek data. This work was conducted through a Research Experience for Undergraduates (REU) position at the University of Hawaii's Institute for Astronomy and funded by the NSF.

  11. Conversion of mammographic images to appear with the noise and sharpness characteristics of a different detector and x-ray system.

    PubMed

    Mackenzie, Alistair; Dance, David R; Workman, Adam; Yip, Mary; Wells, Kevin; Young, Kenneth C

    2012-05-01

    Undertaking observer studies to compare imaging technology using clinical radiological images is challenging due to patient variability. To achieve a significant result, a large number of patients would be required to compare cancer detection rates for different image detectors and systems. The aim of this work was to create a methodology where only one set of images is collected on one particular imaging system. These images are then converted to appear as if they had been acquired on a different detector and x-ray system. Therefore, the effect of a wide range of digital detectors on cancer detection or diagnosis can be examined without the need for multiple patient exposures. Three detectors and x-ray systems [Hologic Selenia (ASE), GE Essential (CSI), Carestream CR (CR)] were characterized in terms of signal transfer properties, noise power spectra (NPS), modulation transfer function, and grid properties. The contributions of the three noise sources (electronic, quantum, and structure noise) to the NPS were calculated by fitting a quadratic polynomial at each spatial frequency of the NPS against air kerma. A methodology was developed to degrade the images to have the characteristics of a different (target) imaging system. The simulated images were created by first linearizing the original images such that the pixel values were equivalent to the air kerma incident at the detector. The linearized image was then blurred to match the sharpness characteristics of the target detector. Noise was then added to the blurred image to correct for differences between the detectors and any required change in dose. The electronic, quantum, and structure noise were added appropriate to the air kerma selected for the simulated image and thus ensuring that the noise in the simulated image had the same magnitude and correlation as the target image. A correction was also made for differences in primary grid transmission, scatter, and veiling glare. The method was validated by acquiring images of a CDMAM contrast detail test object (Artinis, The Netherlands) at five different doses for the three systems. The ASE CDMAM images were then converted to appear with the imaging characteristics of target CR and CSI detectors. The measured threshold gold thicknesses of the simulated and target CDMAM images were closely matched at normal dose level and the average differences across the range of detail diameters were -4% and 0% for the CR and CSI systems, respectively. The conversion was successful for images acquired over a wide dose range. The average difference between simulated and target images for a given dose was a maximum of 11%. The validation shows that the image quality of a digital mammography image obtained with a particular system can be degraded, in terms of noise magnitude and color, sharpness, and contrast to account for differences in the detector and antiscatter grid. Potentially, this is a powerful tool for observer studies, as a range of image qualities can be examined by modifying an image set obtained at a single (better) image quality thus removing the patient variability when comparing systems.
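
    As a hedged sketch of the three core steps described above (linearize to air kerma, blur to the target sharpness, add correlated noise shaped by the NPS difference), the following Python code assumes a linear detector response, float-valued MTF/NPS arrays sampled on the image's FFT grid, and omits normalization constants and the grid/scatter/veiling-glare corrections; all names are ours, not the authors'.

        import numpy as np

        def convert_image(img, gain_src, mtf_src, mtf_tgt, nps_src, nps_tgt,
                          rng=None):
            """Hedged sketch of detector-to-detector image conversion.

            Assumes a linear detector response; mtf_*/nps_* are 2D float
            arrays in FFT layout (DC at [0, 0]). Dose re-scaling and
            normalization constants are omitted for clarity.
            """
            rng = np.random.default_rng() if rng is None else rng
            kerma = img / gain_src                       # linearize to air kerma
            ratio = np.divide(mtf_tgt, mtf_src, out=np.zeros_like(mtf_tgt),
                              where=mtf_src > 0)
            # Blur (or sharpen) so the result carries the target detector's MTF.
            blurred = np.fft.ifft2(np.fft.fft2(kerma) * ratio).real
            # Add correlated noise so the total NPS matches the target detector.
            nps_extra = np.clip(nps_tgt - nps_src * ratio ** 2, 0.0, None)
            white = rng.standard_normal(img.shape)
            noise = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(nps_extra)).real
            return blurred + noise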

  12. WE-AB-207A-12: HLCC Based Quantitative Evaluation Method of Image Artifact in Dental CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Y; Wu, S; Qi, H

    Purpose: Image artifacts are usually evaluated qualitatively via visual observation of the reconstructed images, which is susceptible to subjective factors due to the lack of an objective evaluation criterion. In this work, we propose a Helgason-Ludwig consistency condition (HLCC) based evaluation method to quantify the severity of different image artifacts in dental CBCT. Methods: Our evaluation method consists of four steps: 1) acquire cone-beam CT (CBCT) projections; 2) convert the 3D CBCT projections to fan-beam projections by extracting the central-plane projection; 3) convert the fan-beam projections to parallel-beam projections using a sinogram-based or detail-based rebinning algorithm; 4) obtain the HLCC profile by integrating the parallel-beam projection per view, and calculate the wave percentage and variance of the HLCC profile, which describe the severity of the image artifacts. Results: Several sets of dental CBCT projections, each containing only one type of artifact (i.e., geometry, scatter, beam hardening, lag, and noise artifacts), were simulated using gDRR, a GPU tool developed for efficient, accurate, and realistic simulation of CBCT projections. These simulated CBCT projections were used to test our proposed method. The HLCC profile wave percentage and variance induced by geometry distortion are about 3∼21 times and 16∼393 times those of the artifact-free projection, respectively. The increase factors of wave percentage and variance are 6 and 133 for beam hardening, 19 and 1184 for scatter, and 4 and 16 for lag artifacts, respectively. In contrast, for noisy projections the wave percentage, variance, and inconsistency level are almost the same as those of the noise-free one. Conclusion: We have proposed a quantitative evaluation method of image artifacts based on HLCC theory. According to our simulation results, the severity of the different artifact types follows the order scatter > geometry > beam hardening > lag > noise > artifact-free in dental CBCT.
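
    A minimal Python sketch of step 4, assuming the rebinned parallel-beam sinogram is already available; the exact definition of "wave percentage" is not given in the abstract, so the formula below is our assumption.

        import numpy as np

        def hlcc_metrics(parallel_sino):
            """Zeroth-order HLCC check on a parallel-beam sinogram
            (rows = views, columns = detector bins).

            For consistent data the per-view integral is constant across
            views; its fluctuation quantifies artifact severity. The 'wave
            percentage' definition here is an illustrative assumption.
            """
            profile = parallel_sino.sum(axis=1)           # integral per view
            wave_pct = 100.0 * (profile.max() - profile.min()) / profile.mean()
            return profile, wave_pct, profile.var()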

  13. Investigation of multichannel phased array performance for fetal MR imaging on 1.5T clinical MR system

    PubMed Central

    Li, Ye; Pang, Yong; Vigneron, Daniel; Glenn, Orit; Xu, Duan; Zhang, Xiaoliang

    2011-01-01

    Fetal MRI on 1.5T clinical scanners has increasingly become a powerful imaging tool for studying fetal brain abnormalities in vivo. Due to the limited availability of dedicated fetal phased arrays, commercial torso or cardiac phased arrays are routinely used for fetal scans; with their small number of coil elements, these are unable to provide optimized SNR and parallel imaging performance, and they offer insufficient coverage and filling factor. This creates a demand for the investigation and development of dedicated and efficient radiofrequency (RF) hardware to improve fetal imaging. In this work, an investigational approach to simulating the performance of multichannel flexible phased arrays is proposed to find a better solution for fetal MR imaging. A 32-channel fetal array is presented to increase coil sensitivity, coverage, and parallel imaging performance. The electromagnetic field distribution of each element of the fetal array is numerically simulated using the finite-difference time-domain (FDTD) method. The array performance, including B1 coverage, parallel reconstructed images, and artifact power, is then theoretically calculated and compared with the torso array. Study results show that the proposed array is capable of increasing B1 field strength as well as sensitivity homogeneity in the entire area of the uterus. This would ensure high quality imaging regardless of the location of the fetus in the uterus. In addition, the parallel imaging performance of the proposed fetal array is validated by artifact power comparison with the torso array. These results demonstrate the feasibility of the 32-channel flexible array for fetal MR imaging at 1.5T. PMID:22408747

  14. Formation of parametric images using mixed-effects models: a feasibility study.

    PubMed

    Huang, Husan-Ming; Shih, Yi-Yu; Lin, Chieh

    2016-03-01

    Mixed-effects models have been widely used in the analysis of longitudinal data. By presenting the parameters as a combination of fixed effects and random effects, mixed-effects models incorporating both within- and between-subject variations are capable of improving parameter estimation. In this work, we demonstrate the feasibility of using a non-linear mixed-effects (NLME) approach for generating parametric images from medical imaging data of a single study. By assuming that all voxels in the image are independent, we used simulation and animal data to evaluate whether NLME can improve the voxel-wise parameter estimation. For testing purposes, intravoxel incoherent motion (IVIM) diffusion parameters, including perfusion fraction, pseudo-diffusion coefficient, and true diffusion coefficient, were estimated using diffusion-weighted MR images and NLME through fitting the IVIM model. The conventional method of non-linear least squares (NLLS) was used as the standard approach for comparison of the resulting parametric images. In the simulated data, NLME provides more accurate and precise estimates of diffusion parameters compared with NLLS. Similarly, we found that NLME has the ability to improve the signal-to-noise ratio of parametric images obtained from rat brain data. These data have shown that it is feasible to apply NLME in parametric image generation, and the parametric image quality can be accordingly improved with the use of NLME. With the flexibility to be adapted to other models or modalities, NLME may become a useful tool to improve the parametric image quality in the future. Copyright © 2015 John Wiley & Sons, Ltd.
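
    For concreteness, here is a sketch of the bi-exponential IVIM model and the voxel-wise NLLS baseline the study compares against; the NLME pooling across voxels is omitted, and the starting values and bounds below are illustrative, not the study's.

        import numpy as np
        from scipy.optimize import curve_fit

        def ivim(b, f, d_star, d):
            """Bi-exponential IVIM signal model, S(b)/S0."""
            return f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d)

        def fit_voxel(bvals, signal):
            """Voxel-wise NLLS fit (the conventional baseline).

            Assumes bvals[0] == 0 so that signal[0] is S0. Returns
            (f, D*, D); units of D*, D are mm^2/s if b is in s/mm^2.
            """
            p0 = (0.1, 1e-2, 1e-3)               # rough, illustrative priors
            bounds = ([0, 0, 0], [1, 1, 0.1])
            popt, _ = curve_fit(ivim, bvals, signal / signal[0],
                                p0=p0, bounds=bounds)
            return popt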

  15. Multiphasic modelling of bone-cement injection into vertebral cancellous bone.

    PubMed

    Bleiler, Christian; Wagner, Arndt; Stadelmann, Vincent A; Windolf, Markus; Köstler, Harald; Boger, Andreas; Gueorguiev-Rüegg, Boyko; Ehlers, Wolfgang; Röhrle, Oliver

    2015-01-01

    Percutaneous vertebroplasty represents a current procedure to effectively reinforce osteoporotic bone via the injection of bone cement. This contribution considers a continuum-mechanically based modelling approach and simulation techniques to predict the cement distribution within a vertebra during injection. To do so, experimental investigations, imaging data, and image processing techniques are combined and exploited to extract the necessary data from high-resolution μCT image data. The multiphasic model is based on the Theory of Porous Media, providing the theoretical basis to describe, within one set of coupled equations, the interaction of an elastically deformable solid skeleton, liquid bone cement, and the displacement of liquid bone marrow. The simulation results are validated against an experiment in which bone cement was injected into a human vertebra under realistic conditions. The major advantage of this comprehensive modelling approach is that one can not only predict the complex cement flow within an entire vertebra but also take into account solid deformations in a fully coupled manner. The presented work is the first step towards the ultimate goal of extending this framework to a clinical tool allowing for pre-operative cement distribution predictions by means of numerical simulations. Copyright © 2015 John Wiley & Sons, Ltd.

  16. Imaging lateral groundwater flow in the shallow subsurface using stochastic temperature fields

    NASA Astrophysics Data System (ADS)

    Fairley, Jerry P.; Nicholson, Kirsten N.

    2006-04-01

    Although temperature has often been used as an indication of vertical groundwater movement, its usefulness for identifying horizontal fluid flow has been limited by the difficulty of obtaining sufficient data to draw defensible conclusions. Here we use stochastic simulation to develop a high-resolution image of fluid temperatures in the shallow subsurface at Borax Lake, Oregon. The temperature field inferred from the geostatistical simulations clearly shows geothermal fluids discharging from a group of fault-controlled hydrothermal springs, moving laterally through the subsurface, and mixing with shallow subsurface flow originating from nearby Borax Lake. This interpretation of the data is supported by independent geochemical and isotopic evidence, which show a simple mixing trend between Borax Lake water and discharge from the thermal springs. It is generally agreed that stochastic simulation can be a useful tool for extracting information from complex and/or noisy data and, although not appropriate in all situations, geostatistical analysis may provide good definition of flow paths in the shallow subsurface. Although stochastic imaging techniques are well known in problems involving transport of species, e.g. delineation of contaminant plumes from soil gas survey data, we are unaware of previous applications to the transport of thermal energy for the purpose of inferring shallow groundwater flow.

  17. Design of a digital phantom population for myocardial perfusion SPECT imaging research.

    PubMed

    Ghaly, Michael; Du, Yong; Fung, George S K; Tsui, Benjamin M W; Links, Jonathan M; Frey, Eric

    2014-06-21

    Digital phantoms and Monte Carlo (MC) simulations have become important tools for optimizing and evaluating instrumentation, acquisition and processing methods for myocardial perfusion SPECT (MPS). In this work, we designed a new adult digital phantom population and generated corresponding Tc-99m and Tl-201 projections for use in MPS research. The population is based on the three-dimensional XCAT phantom with organ parameters sampled from the Emory PET Torso Model Database. Phantoms included three variations each in body size, heart size, and subcutaneous adipose tissue level, for a total of 27 phantoms of each gender. The SimSET MC code and angular response functions were used to model interactions in the body and the collimator-detector system, respectively. We divided each phantom into seven organs, each simulated separately, allowing use of post-simulation summing to efficiently model uptake variations. Also, we adapted and used a criterion based on the relative Poisson effective count level to determine the required number of simulated photons for each simulated organ. This technique provided a quantitative estimate of the true noise in the simulated projection data, including residual MC simulation noise. Projections were generated in 1 keV wide energy windows from 48-184 keV assuming perfect energy resolution to permit study of the effects of window width, energy resolution, and crosstalk in the context of dual isotope MPS. We have developed a comprehensive method for efficiently simulating realistic projections for a realistic population of phantoms in the context of MPS imaging. The new phantom population and realistic database of simulated projections will be useful in performing mathematical and human observer studies to evaluate various acquisition and processing methods such as optimizing the energy window width, investigating the effect of energy resolution on image quality and evaluating compensation methods for degrading factors such as crosstalk in the context of single and dual isotope MPS.
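
    A hedged Python sketch of the post-simulation summing idea described above: separately simulated, noise-free organ projections are combined with study-specific uptake ratios and then Poisson noise is added at the desired count level. The function names and the scaling convention are illustrative, not the authors' code.

        import numpy as np

        def assemble_projection(organ_projs, uptakes, total_counts, rng=None):
            """Combine per-organ noise-free projections into one noisy study.

            organ_projs  : list of 2D arrays, one per simulated organ
            uptakes      : relative activity ratios, one per organ
            total_counts : target total count level for the noisy projection
            """
            rng = np.random.default_rng() if rng is None else rng
            clean = sum(u * p for u, p in zip(uptakes, organ_projs))
            clean = clean * (total_counts / clean.sum())  # scale to count level
            return rng.poisson(clean)                     # Poisson count noise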

  18. Design of a digital phantom population for myocardial perfusion SPECT imaging research

    NASA Astrophysics Data System (ADS)

    Ghaly, Michael; Du, Yong; Fung, George S. K.; Tsui, Benjamin M. W.; Links, Jonathan M.; Frey, Eric

    2014-06-01

    Digital phantoms and Monte Carlo (MC) simulations have become important tools for optimizing and evaluating instrumentation, acquisition and processing methods for myocardial perfusion SPECT (MPS). In this work, we designed a new adult digital phantom population and generated corresponding Tc-99m and Tl-201 projections for use in MPS research. The population is based on the three-dimensional XCAT phantom with organ parameters sampled from the Emory PET Torso Model Database. Phantoms included three variations each in body size, heart size, and subcutaneous adipose tissue level, for a total of 27 phantoms of each gender. The SimSET MC code and angular response functions were used to model interactions in the body and the collimator-detector system, respectively. We divided each phantom into seven organs, each simulated separately, allowing use of post-simulation summing to efficiently model uptake variations. Also, we adapted and used a criterion based on the relative Poisson effective count level to determine the required number of simulated photons for each simulated organ. This technique provided a quantitative estimate of the true noise in the simulated projection data, including residual MC simulation noise. Projections were generated in 1 keV wide energy windows from 48-184 keV assuming perfect energy resolution to permit study of the effects of window width, energy resolution, and crosstalk in the context of dual isotope MPS. We have developed a comprehensive method for efficiently simulating realistic projections for a realistic population of phantoms in the context of MPS imaging. The new phantom population and realistic database of simulated projections will be useful in performing mathematical and human observer studies to evaluate various acquisition and processing methods such as optimizing the energy window width, investigating the effect of energy resolution on image quality and evaluating compensation methods for degrading factors such as crosstalk in the context of single and dual isotope MPS.

  19. Mid-infrared hyperspectral imaging for the detection of explosive compounds

    NASA Astrophysics Data System (ADS)

    Ruxton, K.; Robertson, G.; Miller, W.; Malcolm, G. P. A.; Maker, G. T.

    2012-10-01

    Active hyperspectral imaging is a valuable tool in a wide range of applications. A developing market is the detection and identification of energetic compounds through analysis of the resulting absorption spectrum. This work presents a selection of results from a prototype mid-infrared (MWIR) hyperspectral imaging instrument that has successfully been used for compound detection at a range of standoff distances. Active hyperspectral imaging utilises a broadly tunable laser source to illuminate the scene with light over a range of wavelengths. While there are a number of illumination methods, this work illuminates the scene by raster scanning the laser beam using a pair of galvanometric mirrors. The resulting backscattered light from the scene is collected by the same mirrors and directed and focussed onto a suitable single-point detector, where the image is constructed pixel by pixel. The imaging instrument developed in this work is based around a MWIR optical parametric oscillator (OPO) source with broad tunability, operating from 2.6 μm to 3.7 μm. Due to the material handling procedures associated with explosive compounds, experimental work was initially undertaken using simulant compounds. A second set of compounds, tested alongside the simulants, was a range of confusion compounds. The broad wavelength tunability of the OPO allowed extended absorption spectra of the compounds to be obtained to aid compound identification. The prototype imager has successfully been used to record the absorption spectra of a range of compounds from the simulant and confusion sets, and current work is investigating actual explosive compounds. The authors see a very promising outlook for the MWIR hyperspectral imager. From an applications point of view, this format of imaging instrument could be used for a range of standoff improvised explosive device (IED) detection applications and potentially for incident scene forensic investigation.

  20. pyZELDA: Python code for Zernike wavefront sensors

    NASA Astrophysics Data System (ADS)

    Vigan, A.; N'Diaye, M.

    2018-06-01

    pyZELDA analyzes data from Zernike wavefront sensors dedicated to high-contrast imaging applications. This modular software was originally designed to analyze data from the ZELDA wavefront sensor prototype installed in VLT/SPHERE; simple configuration files allow it to be extended to support several other instruments and testbeds. pyZELDA also includes simple simulation tools to measure the theoretical sensitivity of a sensor and to compare it to other sensors.

  1. Launch Control System Software Development System Automation Testing

    NASA Technical Reports Server (NTRS)

    Hwang, Andrew

    2017-01-01

    The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next generation manned rocket currently in development. The system requires high-quality testing to measure and verify its capabilities. For the past two years, the Exploration and Operations Division at Kennedy Space Center (KSC) has assigned a group of interns and full-time engineers to develop automated tests to save the project time and money. The team worked on automating the testing process for the SCCS GUI, which uses simulated data streamed from the testing servers to produce data, plots, statuses, etc. in the GUI. The software used to develop the automated tests included an automated testing framework and an automation library. The automated testing framework has a tabular-style syntax, meaning each line of code must have the appropriate number of tabs for the line to function as intended. The header section contains either paths to custom resources or the names of libraries being used. The automation library contains functionality to automate anything that appears on a desired screen, using image recognition software to detect and control GUI components. The data section contains any data values created strictly for the current test file. The body section holds the tests being run. The function section can include any number of functions that may be used by the current test file or any other file that resources it. The resources and body sections are required for all test files; the data and function sections can be left empty if the data values and functions being used come from a resourced library or another file. To better equip the automation team, the Project Lead of the Automated Testing Team, Jason Kapusta, assigned the task of installing and training an optical character recognition (OCR) tool to Brandon Echols, a fellow intern, and me. The purpose of the OCR tool is to analyze an image and find the coordinates of any group of text. Issues that arose while installing the OCR tool included the absence of certain libraries needed to train the tool and an outdated software version. We eventually resolved the issues and successfully installed the OCR tool. Training the tool required many images in different fonts and sizes, but in the end the tool learned to accurately decipher the text in the images and its coordinates. The OCR tool produced a file containing significant metadata for each section of text, but only the text and its coordinates were required for our purpose. The team wrote a script to parse the information we wanted from the OCR file into a different file used by automation functions within the automated framework (see the sketch below). Since a majority of the development and testing of the automated test cases for the GUI in question has been done using live simulated data on the workstations at the Launch Control Center (LCC), a large amount of progress has been made; as of this writing, about 60% of all automated testing has been implemented. Additionally, the OCR tool will make our automated tests more robust because its text recognition scales well to different text fonts and sizes. Soon the whole test system will be automated, freeing more full-time engineers to work on development projects.
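
    The abstract does not document the team's actual OCR output format, so the following Python sketch of such a parsing script assumes a Tesseract-style tab-separated output with left/top/width/height/text columns; the column names are assumptions.

        import csv

        def parse_ocr_tsv(path):
            """Keep only the text and its coordinates from an OCR output file.

            Assumes a Tesseract-style TSV with 'left', 'top', 'width',
            'height', and 'text' columns; other metadata columns are ignored.
            Returns (text, center_x, center_y) tuples for use by automation
            functions.
            """
            results = []
            with open(path, newline="") as fh:
                for row in csv.DictReader(fh, delimiter="\t"):
                    text = (row.get("text") or "").strip()
                    if text:
                        x = int(row["left"]) + int(row["width"]) // 2
                        y = int(row["top"]) + int(row["height"]) // 2
                        results.append((text, x, y))
            return results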

  2. Extending simulation modeling to activity-based costing for clinical procedures.

    PubMed

    Glick, N D; Blackmore, C C; Zelman, W N

    2000-04-01

    A simulation model was developed to measure costs in an Emergency Department setting for patients presenting with possible cervical-spine injury who needed radiological imaging. Simulation, a tool widely used to account for process variability but typically focused on utilization and throughput analysis, is being introduced here as a realistic means to perform an activity-based-costing (ABC) analysis, because traditional ABC methods have difficulty coping with process variation in healthcare. Though the study model has a very specific application, it can be generalized to other settings simply by changing the input parameters. In essence, simulation was found to be an accurate and viable means to conduct an ABC analysis; in fact, the output provides more complete information than could be achieved through other conventional analyses, which gives management more leverage with which to negotiate contractual reimbursements.
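
    A minimal sketch of the idea of simulation-driven activity-based costing: each activity has a variable duration and a cost rate, and the per-patient cost is the sum of sampled durations times rates. The activities, distributions, and rates below are illustrative, not the study's values.

        import random

        ACTIVITIES = {                 # (mean min, sd min, $ per min) - illustrative
            "triage":      (10, 3, 2.0),
            "ct_imaging":  (25, 8, 9.0),
            "radiologist": (15, 5, 12.0),
        }

        def patient_cost(rng=random):
            """Sample one patient's cost across all activities."""
            return sum(max(0.0, rng.gauss(mu, sd)) * rate
                       for mu, sd, rate in ACTIVITIES.values())

        # Monte Carlo over many patients captures the process variability
        # that traditional ABC averages away.
        costs = [patient_cost() for _ in range(10000)]
        print(sum(costs) / len(costs))   # mean cost per patient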

  3. Methodology for functional MRI of simulated driving.

    PubMed

    Kan, Karen; Schweizer, Tom A; Tam, Fred; Graham, Simon J

    2013-01-01

    The developed world faces major socioeconomic and medical challenges associated with motor vehicle accidents caused by risky driving. Functional magnetic resonance imaging (fMRI) of individuals using virtual reality driving simulators may provide an important research tool to assess driving safety based on brain activity and behavior. An fMRI-compatible driving simulator was developed and evaluated in the context of straight driving, turning, and stopping in 16 young healthy adults. Robust maps of brain activity were obtained, including activation of the primary motor cortex, cerebellum, visual cortex, and parietal lobe, with limited head motion (<1.5 mm deviation from mean head position in the superior/inferior direction in all subjects) and only minor correlations between head motion and steering or braking behavior. These results are consistent with previous literature and suggest that, with care, fMRI of simulated driving is a feasible undertaking.

  4. A dual-waveband dynamic IR scene projector based on DMD

    NASA Astrophysics Data System (ADS)

    Hu, Yu; Zheng, Ya-wei; Gao, Jiao-bo; Sun, Ke-feng; Li, Jun-na; Zhang, Lei; Zhang, Fang

    2016-10-01

    An infrared scene simulation system can simulate multiple objects and backgrounds to perform dynamic tests and evaluate EO detecting systems in hardware-in-the-loop testing. The basic structure of a dual-waveband dynamic IR scene projector is introduced in this paper. The system's core device is an IR Digital Micro-mirror Device (DMD), and the radiant source is a miniature high-temperature IR plane blackbody. An IR collimation optical system whose transmission range includes 3-5 μm and 8-12 μm is designed as the projection optical system. Scene simulation software was developed with Visual C++ and Vega software tools, and a software flow chart is presented. The parameters and testing results of the system are given; the system has been applied with satisfactory performance in IR imaging simulation testing.

  5. Real-space Wigner-Seitz Cells Imaging of Potassium on Graphite via Elastic Atomic Manipulation

    PubMed Central

    Yin, Feng; Koskinen, Pekka; Kulju, Sampo; Akola, Jaakko; Palmer, Richard E.

    2015-01-01

    Atomic manipulation in scanning tunnelling microscopy (STM), conventionally a tool to build nanostructures one atom at a time, is here employed to enable atomic-scale imaging of a model low-dimensional system. Specifically, we use low-temperature STM to investigate an ultrathin film (4 atomic layers) of potassium created by epitaxial growth on a graphite substrate. The STM images display an unexpected honeycomb feature, which corresponds to a real-space visualization of the Wigner-Seitz cells of the close-packed surface K atoms. Density functional simulations indicate that this behaviour arises from the elastic, tip-induced vertical manipulation of potassium atoms during imaging, i.e. elastic atomic manipulation, and reflects the ultrasoft properties of the surface under strain. The method may be generally applicable to other soft, e.g. molecular or biomolecular, systems. PMID:25651973

  6. Remote Sensing Time Series Product Tool

    NASA Technical Reports Server (NTRS)

    Predos, Don; Ryan, Robert E.; Ross, Kenton W.

    2006-01-01

    The TSPT (Time Series Product Tool) software was custom-designed for NASA to rapidly create and display single-band and band-combination time series, such as NDVI (Normalized Difference Vegetation Index) images, for wide-area crop surveillance and for other time-critical applications. The TSPT, developed in MATLAB, allows users to create and display various MODIS (Moderate Resolution Imaging Spectroradiometer) or simulated VIIRS (Visible/Infrared Imager Radiometer Suite) products as single images, as time series plots at a selected location, or as temporally processed image videos. Manually creating these types of products is extremely labor intensive; the TSPT makes the process simple and efficient. MODIS is ideal for monitoring large crop areas because of its wide swath (2330 km), its relatively small ground sample distance (250 m), and its high temporal revisit time (twice daily). Furthermore, because MODIS imagery is acquired daily, rapid changes in vegetative health can potentially be detected. The new TSPT technology provides users with the ability to temporally process high-revisit-rate satellite imagery, such as that acquired from MODIS and from its successor, the VIIRS. The TSPT features the important capability of fusing data from both MODIS instruments, onboard the Terra and Aqua satellites, which drastically improves cloud statistics. With the TSPT, MODIS metadata are used to find and optionally remove bad and suspect data. Noise removal and temporal processing techniques allow users to create low-noise time series plots and image videos and to select settings and thresholds that tailor particular output products. The TSPT GUI (graphical user interface) provides an interactive environment for crafting what-if scenarios by enabling a user to repeat product generation using different settings and thresholds. The TSPT Application Programming Interface provides more fine-tuned control of product generation, allowing experienced programmers to bypass the GUI and create more user-specific output products, such as comparison time plots or images. This type of time series analysis tool for remotely sensed imagery could be the basis of a large-area vegetation surveillance system. The TSPT has been used to generate NDVI time series over growing seasons in California and Argentina and for hurricane events such as Hurricane Katrina.
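
    For reference, NDVI = (NIR - Red) / (NIR + Red). The Python sketch below shows the kind of computation TSPT automates, with a simple temporal median filter as our stand-in for TSPT's noise-removal and temporal processing (which the abstract does not specify in detail).

        import numpy as np
        from scipy.ndimage import median_filter

        def ndvi(nir, red, eps=1e-6):
            """Normalized Difference Vegetation Index for co-registered bands."""
            return (nir - red) / (nir + red + eps)

        def temporal_median(stack, window=3):
            """Noise-suppressing temporal filter over an image time series of
            shape (frames, rows, cols); a stand-in for TSPT's processing."""
            return median_filter(stack, size=(window, 1, 1))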

  7. Using CAD software to simulate PV energy yield - The case of product integrated photovoltaic operated under indoor solar irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reich, N.H.; van Sark, W.G.J.H.M.; Turkenburg, W.C.

    2010-08-15

    In this paper, we show that photovoltaic (PV) energy yields can be simulated using standard rendering and ray-tracing features of Computer Aided Design (CAD) software. To this end, three-dimensional (3-D) sceneries are ray-traced in CAD. The PV power output is then modeled by translating the irradiance intensity data of rendered images back into numerical data. To ensure accurate results, the solar irradiation data used as input are compared to numerical data obtained from rendered images, showing excellent agreement. As expected, the ray-tracing precision of the CAD software also proves to be very high. To demonstrate PV energy yield simulations using this innovative concept, solar radiation time-course data covering a few days were modeled in 3-D to simulate the distributions of irradiance incident on flat, single- and double-bend shapes and on a PV-powered computer mouse located on a window sill. Comparisons of measured to simulated PV output of the mouse show that simulation accuracies can also be very high in practice. Theoretically, this concept has great potential, as it can be adapted to suit a wide range of solar energy applications, such as sun-tracking and concentrator systems, Building Integrated PV (BIPV), or Product Integrated PV (PIPV). However, graphical user interfaces for 'CAD-PV' software tools are not yet available.
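
    A hedged sketch of the central conversion step: rendered pixel intensities over the PV cell are translated back into irradiance and a simple linear PV power estimate. The calibration constant and the loss-free linear model are our assumptions, not the paper's exact procedure.

        import numpy as np

        def pv_power_from_render(render, calib_w_per_m2, cell_area_m2,
                                 efficiency):
            """Translate rendered intensities into irradiance and PV power.

            render         : 2D array of pixel intensities over the PV cell
            calib_w_per_m2 : irradiance corresponding to an intensity of 1.0
                             (obtained by calibrating against known input data)
            """
            irradiance = render.mean() * calib_w_per_m2    # W/m^2 on the cell
            return irradiance * cell_area_m2 * efficiency  # output power in W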

  8. Automated Breast Ultrasound for Ductal Pattern Reconstruction: Ground Truth File Generation and CADe Evaluation

    NASA Astrophysics Data System (ADS)

    Manousaki, D.; Panagiotopoulou, A.; Bizimi, V.; Haynes, M. S.; Love, S.; Kallergi, M.

    2017-11-01

    The purpose of this study was the generation of ground truth files (GTFs) of the breast ducts from 3D images of the Invenia™ Automated Breast Ultrasound (ABUS) system (GE Healthcare, Little Chalfont, UK) and the application of these GTFs to the optimization of the imaging protocol and the evaluation of a computer-aided detection (CADe) algorithm developed for automated duct detection. Six lactating, nursing volunteers were scanned with the ABUS before and right after breastfeeding their infants. An expert in breast ultrasound generated rough outlines of the milk-filled ducts in the transaxial slices of all image volumes, and the final GTFs were created using thresholding and smoothing tools in ImageJ. In addition, a CADe algorithm automatically segmented duct-like areas, and its results were compared to the expert's GTFs by estimating the true positive fraction (TPF), or % overlap, as sketched below. The CADe output differed significantly from the expert's, but both detected a smaller than expected volume of the ducts due to insufficient contrast (the ducts were only partially filled with milk), discontinuities, and artifacts. The GTFs were used to modify the imaging protocol and improve the CADe method. In conclusion, electronic GTFs provide a valuable tool in the optimization of a tomographic imaging system, the imaging protocol, and CADe algorithms. Their generation, however, is an extremely time-consuming, strenuous process, particularly for multi-slice examinations, and alternatives based on phantoms or simulations are highly desirable.
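
    For concreteness, the TPF (% overlap) comparison reduces to a voxel-wise set intersection between the CADe segmentation and the expert GTF; a minimal sketch:

        import numpy as np

        def true_positive_fraction(segmentation, ground_truth):
            """Voxel-wise TPF (% overlap): the fraction of ground-truth duct
            voxels that the CADe segmentation also flags."""
            seg = segmentation.astype(bool)
            gt = ground_truth.astype(bool)
            return np.logical_and(seg, gt).sum() / gt.sum()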

  9. Detection of hypercholesterolemia using hyperspectral imaging of human skin

    NASA Astrophysics Data System (ADS)

    Milanic, Matija; Bjorgan, Asgeir; Larsson, Marcus; Strömberg, Tomas; Randeberg, Lise L.

    2015-07-01

    Hypercholesterolemia is characterized by high blood levels of cholesterol and is associated with increased risk of atherosclerosis and cardiovascular disease. Xanthelasma is a subcutaneous lesion appearing in the skin around the eyes and is related to hypercholesterolemia. Identifying micro-xanthelasma can therefore provide a means for early detection of hypercholesterolemia and help prevent the onset and progress of disease. The goal of this study was to investigate the spectral and spatial characteristics of hypercholesterolemia in facial skin. Optical techniques like hyperspectral imaging (HSI) may be a suitable tool for such characterization, as HSI simultaneously provides high-resolution spatial and spectral information. In this study a 3D Monte Carlo model of lipid inclusions in human skin was developed to create hyperspectral images in the spectral range 400-1090 nm. Four lesions with diameters of 0.12-1.0 mm were simulated for three different skin types. The simulations were analyzed using three algorithms: Tissue Indices (TI), the two-layer Diffusion Approximation (DA), and the Minimum Noise Fraction (MNF) transform. The simulated lesions were detected by all methods, but the best performance was obtained with the MNF algorithm. The results were verified using data from 11 volunteers with known cholesterol levels. The faces of the volunteers were imaged with an LCTF system (400-720 nm), and the images were analyzed using the previously mentioned algorithms. The identified features were then compared to the known cholesterol levels of the subjects. Significant correlation was obtained for the MNF algorithm only. This study demonstrates that HSI can be a promising, rapid modality for the detection of hypercholesterolemia.

  10. Simulation based mask defect repair verification and disposition

    NASA Astrophysics Data System (ADS)

    Guo, Eric; Zhao, Shirley; Zhang, Skin; Qian, Sandy; Cheng, Guojie; Vikram, Abhishek; Li, Ling; Chen, Ye; Hsiang, Chingyun; Zhang, Gary; Su, Bo

    2009-10-01

    As the industry moves towards sub-65nm technology nodes, mask inspection, with increased sensitivity and shrinking critical defect size, catches more and more nuisance and false defects. Increased defect counts pose great challenges in post-inspection defect classification and disposition: which defects are real, and among the real defects, which should be repaired, and how should the post-repair defects be verified? In this paper, we address the challenges in mask defect verification and disposition, in particular post-repair defect verification, with an efficient methodology using SEM mask defect images and optical inspection mask defect images (the latter only for verification of phase- and transmission-related defects). We demonstrate the flow using programmed mask defects in a sub-65nm technology node design. In total, 20 types of defects were designed, including defects found in typical real circuit environments, with 30 different sizes designed for each type. An SEM image was taken of each programmed defect after the test mask was made. Selected defects were repaired and SEM images from the test mask were taken again. Wafers were printed with the test mask before and after repair as defect printability references. A software tool, SMDD (Simulation-based Mask Defect Disposition), was used in this study. The software extracts edges from the mask SEM images and converts them into polygons saved in GDSII format. The converted polygons from the SEM images are then filled with the correct tone to form mask patterns and merged back into the original GDSII design file. This merge serves the contour simulation: the SEM images normally cover only a small area (~1 μm), while accurate simulation requires including a larger area to capture optical proximity effects. With a lithography process model, the resist contour of the area of interest (AOI, the area surrounding a mask defect) can be simulated. If such a complicated model is not available, a simple optical model can be used to obtain the simulated aerial image intensity in the AOI. With built-in contour analysis functions, the SMDD software can easily compare the contour (or intensity) differences between the defect pattern and the normal pattern. With user-provided judging criteria, the software can then disposition the defect based on the contour comparison, as sketched below. In addition, process sensitivity properties such as MEEF and NILS can readily be obtained in the AOI with a lithography model, making the mask defect disposition criteria more intelligent.
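
    A hedged Python sketch of the final disposition step under a simple constant-threshold resist model: the simulated aerial images of the defect AOI and a defect-free reference are thresholded at the printing intensity and their printed areas compared. The threshold model and tolerance rule are illustrative, not SMDD's actual criteria.

        import numpy as np

        def disposition(aerial_defect, aerial_ref, print_threshold,
                        area_tolerance_px):
            """Disposition a defect by comparing simulated printed areas.

            Assumes dark features print where the aerial intensity falls
            below print_threshold (constant-threshold resist model).
            """
            printed_defect = aerial_defect < print_threshold
            printed_ref = aerial_ref < print_threshold
            delta = abs(int(printed_defect.sum()) - int(printed_ref.sum()))
            return "repair" if delta > area_tolerance_px else "accept"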

  11. A Computational Framework for Bioimaging Simulation

    PubMed Central

    Watabe, Masaki; Arjunan, Satya N. V.; Fukushima, Seiya; Iwamoto, Kazunari; Kozuka, Jun; Matsuoka, Satomi; Shindo, Yuki; Ueda, Masahiro; Takahashi, Koichi

    2015-01-01

    Using bioimaging technology, biologists have attempted to identify and document analytical interpretations that underlie biological phenomena in biological cells. Theoretical biology aims at distilling those interpretations into knowledge in the mathematical form of biochemical reaction networks and understanding how higher-level functions emerge from the combined action of biomolecules. However, formidable challenges remain in bridging the gap between bioimaging and mathematical modeling. Generally, measurements using fluorescence microscopy systems are influenced by systematic effects that arise from the stochastic nature of biological cells, the imaging apparatus, and optical physics. Such systematic effects are always present in all bioimaging systems and hinder quantitative comparison between the cell model and bioimages. Computational tools for such a comparison are still unavailable. Thus, in this work, we present a computational framework for handling the parameters of cell models and the optical physics governing bioimaging systems. Simulation using this framework can generate digital images of cell simulation results after accounting for the systematic effects. We then demonstrate that such a framework enables comparison at the level of photon-counting units. PMID:26147508

  12. A High Performance Pulsatile Pump for Aortic Flow Experiments in 3-Dimensional Models.

    PubMed

    Chaudhury, Rafeed A; Atlasman, Victor; Pathangey, Girish; Pracht, Nicholas; Adrian, Ronald J; Frakes, David H

    2016-06-01

    Aortic pathologies such as coarctation, dissection, and aneurysm represent a particularly emergent class of cardiovascular diseases. Computational simulations of aortic flows are growing increasingly important as tools for gaining understanding of these pathologies, as well as for planning their surgical repair. In vitro experiments are required to validate the simulations against real world data, and the experiments require a pulsatile flow pump system that can provide physiologic flow conditions characteristic of the aorta. We designed a newly capable piston-based pulsatile flow pump system that can generate high volume flow rates (850 mL/s), replicate physiologic waveforms, and pump high viscosity fluids against large impedances. The system is also compatible with a broad range of fluid types, and is operable in magnetic resonance imaging environments. Performance of the system was validated using image processing-based analysis of piston motion as well as particle image velocimetry. The new system represents a more capable pumping solution for aortic flow experiments than other available designs, and can be manufactured at a relatively low cost.

  13. Medium-energy heavy-ion single-event-burnout imaging of power MOSFETs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musseau, O.; Torres, A.; Campbell, A.B.

    The authors present the first experimental determination of the SEB-sensitive area in a power MOSFET irradiated with a high-LET heavy-ion microbeam. They used a spectroscopy technique to perform coincident measurements of the charge collected in both source and drain junctions, together with a non-destructive technique (current limitation). The resulting charge collection images are related to the physical structure of the individual cells. These experimental data reveal the complex 3-dimensional behavior of a real structure, which cannot easily be simulated using available tools. As the drain voltage is increased, the onset of burnout is reached, characterized by a sudden change in the charge collection image. Hot spots are observed where the collected charge reaches its maximum value. Those spots, due to burnout-triggering events, correspond to areas where the silicon is degraded through thermal effects along a single ion track. This direct observation of SEB-sensitive areas has applications for either device hardening, by modifying the doping profiles or layout of the cells, or for code calibration and device simulation.

  14. Scalable isosurface visualization of massive datasets on commodity off-the-shelf clusters

    PubMed Central

    Bajaj, Chandrajit

    2009-01-01

    Tomographic imaging and computer simulations are increasingly yielding massive datasets, and interactive and exploratory visualizations have rapidly become indispensable tools for studying large volumetric imaging and simulation data. Our scalable isosurface visualization framework on commodity off-the-shelf clusters is an end-to-end parallel and progressive platform, from initial data access to final display. Interactive browsing of extracted isosurfaces is made possible by parallel isosurface extraction and rendering in conjunction with a new specialized piece of image compositing hardware called the Metabuffer. In this paper, we focus on back-end scalability by introducing a fully parallel and out-of-core isosurface extraction algorithm. It achieves scalability by using both parallel and out-of-core processing and parallel disks. It statically partitions the volume data to parallel disks with a balanced workload spectrum, and builds I/O-optimal external interval trees to minimize the number of I/O operations when loading large data from disk. We also describe an isosurface compression scheme that is efficient for progressive extraction, transmission, and storage of isosurfaces. PMID:19756231

  15. DoctorEye: A clinically driven multifunctional platform, for accurate processing of tumors in medical images.

    PubMed

    Skounakis, Emmanouil; Farmaki, Christina; Sakkalis, Vangelis; Roniotis, Alexandros; Banitsas, Konstantinos; Graf, Norbert; Marias, Konstantinos

    2010-01-01

    This paper presents a novel, open-access interactive platform for 3D medical image analysis, simulation, and visualization, focusing on oncology images. The platform was developed through constant interaction and feedback from expert clinicians, integrating a thorough analysis of their requirements, with the ultimate goal of assisting in accurately delineating tumors. It allows clinicians not only to work with a large number of 3D tomographic datasets but also to efficiently annotate multiple regions of interest in the same session. Manual and semi-automatic segmentation techniques combined with integrated correction tools assist in the quick and refined delineation of tumors, while different users can add different components related to oncology, such as tumor growth and simulation algorithms, for improving therapy planning. The platform has been tested by different users and over a large number of heterogeneous tomographic datasets to ensure stability, usability, extensibility, and robustness, with promising results. The platform, a manual, and tutorial videos are available at http://biomodeling.ics.forth.gr. It is free to use under the GNU General Public License.

  16. An End-to-End simulator for the development of atmospheric corrections and temperature - emissivity separation algorithms in the TIR spectral domain

    NASA Astrophysics Data System (ADS)

    Rock, Gilles; Fischer, Kim; Schlerf, Martin; Gerhards, Max; Udelhoven, Thomas

    2017-04-01

    The development and optimization of image processing algorithms requires the availability of datasets depicting every step from the earth's surface to the sensor's detector. The lack of ground truth data obliges developers to build algorithms on simulated data. The simulation of hyperspectral remote sensing data is a useful tool for a variety of tasks, such as the design of systems, the understanding of the image formation process, and the development and validation of data processing algorithms. An end-to-end simulator has been set up consisting of a forward simulator, a backward simulator, and a validation module. The forward simulator derives radiance datasets based on laboratory sample spectra, applies atmospheric contributions using radiative transfer equations, and simulates the instrument response using configurable sensor models. This is followed by the backward simulation branch, consisting of an atmospheric correction (AC), a temperature and emissivity separation (TES), or a hybrid AC and TES algorithm. An independent validation module allows the comparison between input and output datasets and the benchmarking of different processing algorithms. In this study, hyperspectral thermal infrared scenes of a variety of surfaces have been simulated to analyze existing AC and TES algorithms. The ARTEMISS algorithm was optimized and benchmarked against the original implementations. The errors in TES were found to be related to incorrect water vapor retrieval. The atmospheric characterization could be optimized, resulting in increased accuracy in temperature and emissivity retrieval. Airborne datasets of different spectral resolutions were simulated from terrestrial HyperCam-LW measurements. The simulated airborne radiance spectra were subjected to atmospheric correction and TES and further used for a plant species classification study analyzing effects related to noise and mixed pixels.
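
    The forward branch rests on the standard TIR radiative transfer equation, L_sensor = tau * [eps * B(T) + (1 - eps) * L_down] + L_up. A minimal per-wavelength Python sketch, assuming scalar atmospheric terms (the simulator's actual sensor and atmosphere models are far richer):

        import numpy as np

        H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann

        def planck(wl_m, t_k):
            """Blackbody spectral radiance B(T), W m^-2 sr^-1 m^-1."""
            return (2 * H * C**2 / wl_m**5
                    / (np.exp(H * C / (wl_m * KB * t_k)) - 1.0))

        def at_sensor_radiance(wl_m, t_surf, emissivity, tau, l_up, l_down):
            """Surface emission plus reflected downwelling, attenuated by the
            atmosphere, plus upwelling path radiance."""
            surface = emissivity * planck(wl_m, t_surf) + (1 - emissivity) * l_down
            return tau * surface + l_up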

  17. Development of an interpretive simulation tool for the proton radiography technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levy, M. C., E-mail: levymc@stanford.edu; Lawrence Livermore National Laboratory, Livermore, California 94551; Ryutov, D. D.

    2015-03-15

    Proton radiography is a useful diagnostic of high energy density (HED) plasmas under active theoretical and experimental development. In this paper, we describe a new simulation tool that interacts realistic laser-driven point-like proton sources with three-dimensional electromagnetic fields of arbitrary strength and structure and synthesizes the associated high-resolution proton radiograph. The tool's numerical approach captures all relevant physics effects, including those related to the formation of caustics. Electromagnetic fields can be imported from particle-in-cell or hydrodynamic codes in a streamlined fashion, and a library of electromagnetic field "primitives" is also provided. This latter capability allows users to add a primitive, modify the field strength, rotate a primitive, and so on, while quickly generating a high-resolution radiograph at each step. In this way, our tool enables the user to deconstruct features in a radiograph and interpret them in connection to specific underlying electromagnetic field elements. We show an example application of the tool in connection to experimental observations of the Weibel instability in counterstreaming plasmas, using ∼10^8 particles generated from a realistic laser-driven point-like proton source, imaging fields which cover volumes of ∼10 mm^3. Insights derived from this application show that the tool can support understanding of HED plasmas.
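
    A minimal sketch of the particle-pushing core such a tool needs: a non-relativistic Euler integrator of the Lorentz force through user-supplied field functions. The actual tool's integrator, relativistic corrections, and caustic handling are more sophisticated; this is only an illustrative stand-in.

        import numpy as np

        QP, MP = 1.602e-19, 1.673e-27   # proton charge (C) and mass (kg)

        def push_proton(r, v, e_field, b_field, dt, n_steps):
            """Advance one proton through E and B fields.

            e_field(r) and b_field(r) return 3-vectors (V/m and T); r, v are
            3-vectors in m and m/s. Simple explicit Euler stepping.
            """
            for _ in range(n_steps):
                a = (QP / MP) * (e_field(r) + np.cross(v, b_field(r)))
                v = v + a * dt
                r = r + v * dt
            return r, v   # final position/velocity; project onto the detector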

  18. Monte Carlo simulations of the dose from imaging with GE eXplore 120 micro-CT using GATE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bretin, Florian; Bahri, Mohamed Ali; Luxen, André

    Purpose: Small animals are increasingly used as translational models in preclinical imaging studies involving microCT, during which the subjects can be exposed to large amounts of radiation. While the radiation levels are generally sublethal, studies have shown that low-level radiation can change physiological parameters in mice. In order to rule out any influence of radiation on the outcome of such experiments, or resulting deterministic effects in the subjects, the levels of radiation involved need to be addressed. The aim of this study was to investigate the radiation dose delivered by the GE eXplore 120 microCT non-invasively, using Monte Carlo simulations in GATE, and to compare the results to previously obtained experimental values. Methods: Tungsten X-ray spectra were simulated at 70, 80, and 97 kVp using an analytical tool, and their half-value layers were simulated for spectrum validation against experimentally measured values from the physical X-ray tube. A Monte Carlo model of the microCT system was set up, and four protocols regularly applied to live animal scanning were implemented. The computed tomography dose index (CTDI) inside a PMMA phantom was derived, and multiple field-of-view acquisitions were simulated using the PMMA phantom, a representative mouse, and a rat. Results: Simulated half-value layers agreed with experimentally obtained results within a 7% error window. The CTDI ranged from 20 to 56 mGy and closely matched experimental values. Derived organ doses in mice reached 459 mGy in bones and up to 200 mGy in soft tissue organs using the highest energy protocol. Dose levels in rats were lower due to the increased mass of the animal compared to mice. The uncertainty of all dose simulations was below 14%. Conclusions: Monte Carlo simulation proved a valuable tool to investigate the 3D dose distribution in animals from microCT. Small animals, especially mice (due to their small volume), receive large amounts of radiation from the GE eXplore 120 microCT, which might alter physiological parameters in a longitudinal study setup.
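
    A hedged sketch of the half-value-layer computation used for spectrum validation: bisect for the aluminum thickness that halves the (optionally kerma-weighted) transmission of a polychromatic spectrum. The fluence weighting and bracket are our assumptions.

        import numpy as np

        def half_value_layer(energies, fluence, mu_al, kerma_per_fluence=None):
            """Find the Al thickness (mm) halving the spectrum's transmission.

            energies, fluence, mu_al : aligned 1D arrays; mu_al holds linear
            attenuation coefficients (1/mm) at `energies`. If given,
            kerma_per_fluence converts the weighting to air kerma.
            """
            w = fluence if kerma_per_fluence is None else fluence * kerma_per_fluence

            def transmission(t_mm):
                return np.sum(w * np.exp(-mu_al * t_mm)) / np.sum(w)

            lo, hi = 0.0, 50.0               # bracket in mm; T(lo)=1 > 0.5
            for _ in range(60):              # bisection to machine precision
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if transmission(mid) > 0.5 else (lo, mid)
            return 0.5 * (lo + hi)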

  19. Imaging of atomic orbitals with the Atomic Force Microscope - experiments and simulations

    NASA Astrophysics Data System (ADS)

    Giessibl, F. J.; Bielefeldt, H.; Hembacher, S.; Mannhart, J.

    2001-11-01

    Atomic force microscopy (AFM) is a mechanical profiling technique that allows surfaces to be imaged with atomic resolution. Recent progress in reducing the noise of this technique has led to a resolution level where previously undetectable symmetries of the images of single atoms are observed. These symmetries are related to the nature of the interatomic forces. The Si(111)-(7 × 7) surface is studied by AFM with various tips and AFM images are simulated with chemical and electrostatic model forces. The calculation of images from the tip-sample forces is explained in detail and the implications of the imaging parameters are discussed. Because the structure of the Si(111)-(7 × 7) surface is known very well, the shape of the adatom images is used to determine the tip structure. The observability of atomic orbitals by AFM and scanning tunneling microscopy is discussed.

  20. BioVEC: a program for biomolecule visualization with ellipsoidal coarse-graining.

    PubMed

    Abrahamsson, Erik; Plotkin, Steven S

    2009-09-01

    Biomolecule Visualization with Ellipsoidal Coarse-graining (BioVEC) is a tool for visualizing molecular dynamics simulation data while allowing coarse-grained residues to be rendered as ellipsoids. BioVEC reads in configuration files, which may be output from molecular dynamics simulations that include orientation output in either quaternion or ANISOU format, and can render frames of the trajectory in several common image formats for subsequent concatenation into a movie file. The BioVEC program is written in C++, uses the OpenGL API for rendering, and is open source. It is lightweight, allows for user-defined display and texture settings, and runs on either Windows or Linux platforms.

  1. Automatic analysis of stereoscopic satellite image pairs for determination of cloud-top height and structure

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.; Strong, J.; Woodward, R. H.; Pierce, H.

    1991-01-01

    Results are presented on an automatic stereo analysis of cloud-top heights from nearly simultaneous satellite image pairs from the GOES and NOAA satellites, using a massively parallel processor computer. Comparisons of computer-derived height fields and manually analyzed fields indicate that the automatic analysis technique shows promise for performing routine stereo analysis in a real-time environment, providing a useful forecasting tool by augmenting observational data sets of severe thunderstorms and hurricanes. Simulations using synthetic stereo data show that it is possible to automatically resolve small-scale features such as 4000-m-diam clouds to about 1500 m in the vertical.

  2. A propagation tool to connect remote-sensing observations with in-situ measurements of heliospheric structures

    NASA Astrophysics Data System (ADS)

    Rouillard, A. P.; Lavraud, B.; Génot, V.; Bouchemit, M.; Dufourg, N.; Plotnikov, I.; Pinto, R. F.; Sanchez-Diaz, E.; Lavarra, M.; Penou, M.; Jacquey, C.; André, N.; Caussarieu, S.; Toniutti, J.-P.; Popescu, D.; Buchlin, E.; Caminade, S.; Alingery, P.; Davies, J. A.; Odstrcil, D.; Mays, L.

    2017-11-01

    The remoteness of the Sun and the harsh conditions prevailing in the solar corona have so far limited the observational data used in the study of solar physics to remote-sensing observations taken either from the ground or from space. In contrast, the 'solar wind laboratory' is directly measured in situ by a fleet of spacecraft measuring the properties of the plasma and magnetic fields at specific points in space. Since 2007, the solar-terrestrial relations observatory (STEREO) has been providing images of the solar wind that flows between the solar corona and spacecraft making in-situ measurements. This has allowed scientists to directly connect processes imaged near the Sun with the subsequent effects measured in the solar wind. This new capability prompted the development of a series of tools and techniques to track heliospheric structures through space. This article presents one of these tools, a web-based interface called the 'Propagation Tool' that offers an integrated research environment to study the evolution of coronal and solar wind structures, such as Coronal Mass Ejections (CMEs), Corotating Interaction Regions (CIRs) and Solar Energetic Particles (SEPs). These structures can be propagated outward from the Sun to planets and spacecraft situated in the inner and outer heliosphere, or alternatively inward from those locations back toward the Sun. In this paper, we present the global architecture of the tool, discuss some of the assumptions made to simulate the evolution of the structures and show how the tool connects to different databases.

  3. Efficient volumetric estimation from plenoptic data

    NASA Astrophysics Data System (ADS)

    Anglin, Paul; Reeves, Stanley J.; Thurow, Brian S.

    2013-03-01

    The commercial release of the Lytro camera, and greater availability of plenoptic imaging systems in general, have given the image processing community cost-effective tools for light-field imaging. While this data is most commonly used to generate planar images at arbitrary focal depths, reconstruction of volumetric fields is also possible. Similarly, deconvolution is a technique that is conventionally used in planar image reconstruction, or deblurring, algorithms. However, when leveraged with the ability of a light-field camera to quickly reproduce multiple focal planes within an imaged volume, deconvolution offers a computationally efficient method of volumetric reconstruction. Related research has shown that light-field imaging systems in conjunction with tomographic reconstruction techniques are also capable of estimating the imaged volume and have been successfully applied to particle image velocimetry (PIV). However, while tomographic volumetric estimation through algorithms such as multiplicative algebraic reconstruction techniques (MART) have proven to be highly accurate, they are computationally intensive. In this paper, the reconstruction problem is shown to be solvable by deconvolution. Deconvolution offers significant improvement in computational efficiency through the use of fast Fourier transforms (FFTs) when compared to other tomographic methods. This work describes a deconvolution algorithm designed to reconstruct a 3-D particle field from simulated plenoptic data. A 3-D extension of existing 2-D FFT-based refocusing techniques is presented to further improve efficiency when computing object focal stacks and system point spread functions (PSFs). Reconstruction artifacts are identified; their underlying source and methods of mitigation are explored where possible, and reconstructions of simulated particle fields are provided.
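
    To make the computational argument concrete, the sketch below reconstructs a sparse synthetic particle volume from a blurred focal stack with a Wiener filter, which costs only a handful of FFTs. The volume size, separable Gaussian PSF, and regularization constant are assumptions chosen for illustration, not the authors' exact algorithm.

        import numpy as np
        from numpy.fft import fftn, ifftn, ifftshift

        rng = np.random.default_rng(0)
        shape = (32, 64, 64)                       # (z, y, x) focal-stack dimensions
        vol = np.zeros(shape)
        vol[rng.integers(0, 32, 50), rng.integers(0, 64, 50), rng.integers(0, 64, 50)] = 1.0

        z, y, x = np.indices(shape)
        cz, cy, cx = (s // 2 for s in shape)
        psf = np.exp(-(((z - cz) / 4.0) ** 2 + ((y - cy) / 1.5) ** 2 + ((x - cx) / 1.5) ** 2))
        psf /= psf.sum()                           # toy PSF, elongated along the optical axis

        H = fftn(ifftshift(psf))                   # move the PSF center to the origin first
        stack = np.real(ifftn(fftn(vol) * H))      # simulated (blurred) focal stack

        nsr = 1e-3                                 # assumed noise-to-signal regularization
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)    # Wiener deconvolution filter
        recon = np.real(ifftn(fftn(stack) * W))    # deconvolved particle volume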

  4. Visualization in simulation tools: requirements and a tool specification to support the teaching of dynamic biological processes.

    PubMed

    Jørgensen, Katarina M; Haddow, Pauline C

    2011-08-01

    Simulation tools are playing an increasingly important role behind advances in the field of systems biology. However, the current generation of biological science students has either little or no experience with such tools. As such, this educational gap is limiting both the potential use of such tools as well as the potential for tighter cooperation between the designers and users. Although some simulation tool producers encourage their use in teaching, little attempt has hitherto been made to analyze and discuss their suitability as an educational tool for noncomputing science students. In general, today's simulation tools assume that the user has a stronger mathematical and computing background than that which is found in most biological science curricula, thus making the introduction of such tools a considerable pedagogical challenge. This paper provides an evaluation of the pedagogical attributes of existing simulation tools for cell signal transduction based on Cognitive Load theory. Further, design recommendations for an improved educational simulation tool are provided. The study is based on simulation tools for cell signal transduction. However, the discussions are relevant to a broader biological simulation tool set.

  5. Spinoff 2010

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Topics covered include: Burnishing Techniques Strengthen Hip Implants; Signal Processing Methods Monitor Cranial Pressure; Ultraviolet-Blocking Lenses Protect, Enhance Vision; Hyperspectral Systems Increase Imaging Capabilities; Programs Model the Future of Air Traffic Management; Tail Rotor Airfoils Stabilize Helicopters, Reduce Noise; Personal Aircraft Point to the Future of Transportation; Ducted Fan Designs Lead to Potential New Vehicles; Winglets Save Billions of Dollars in Fuel Costs; Sensor Systems Collect Critical Aerodynamics Data; Coatings Extend Life of Engines and Infrastructure; Radiometers Optimize Local Weather Prediction; Energy-Efficient Systems Eliminate Icing Danger for UAVs; Rocket-Powered Parachutes Rescue Entire Planes; Technologies Advance UAVs for Science, Military; Inflatable Antennas Support Emergency Communication; Smart Sensors Assess Structural Health; Hand-Held Devices Detect Explosives and Chemical Agents; Terahertz Tools Advance Imaging for Security, Industry; LED Systems Target Plant Growth; Aerogels Insulate Against Extreme Temperatures; Image Sensors Enhance Camera Technologies; Lightweight Material Patches Allow for Quick Repairs; Nanomaterials Transform Hairstyling Tools; Do-It-Yourself Additives Recharge Auto Air Conditioning; Systems Analyze Water Quality in Real Time; Compact Radiometers Expand Climate Knowledge; Energy Servers Deliver Clean, Affordable Power; Solutions Remediate Contaminated Groundwater; Bacteria Provide Cleanup of Oil Spills, Wastewater; Reflective Coatings Protect People and Animals; Innovative Techniques Simplify Vibration Analysis; Modeling Tools Predict Flow in Fluid Dynamics; Verification Tools Secure Online Shopping, Banking; Toolsets Maintain Health of Complex Systems; Framework Resources Multiply Computing Power; Tools Automate Spacecraft Testing, Operation; GPS Software Packages Deliver Positioning Solutions; Solid-State Recorders Enhance Scientific Data Collection; Computer Models Simulate Fine Particle Dispersion; Composite Sandwich Technologies Lighten Components; Cameras Reveal Elements in the Short Wave Infrared; Deformable Mirrors Correct Optical Distortions; Stitching Techniques Advance Optics Manufacturing; Compact, Robust Chips Integrate Optical Functions; Fuel Cell Stations Automate Processes, Catalyst Testing; Onboard Systems Record Unique Videos of Space Missions; Space Research Results Purify Semiconductor Materials; and Toolkits Control Motion of Complex Robotics.

  6. Using three-dimensional-computerized tomography as a diagnostic tool for temporo-mandibular joint ankylosis: a case report.

    PubMed

    Kao, S Y; Chou, J; Lo, J; Yang, J; Chou, A P; Joe, C J; Chang, R C

    1999-04-01

    Roentgenographic examination has long been a useful diagnostic tool for temporo-mandibular joint (TMJ) disease. The methods include TMJ tomography, panoramic radiography and computerized tomography (CT) scan with or without injection of contrast media. Recently, three-dimensional CT (3D-CT), reconstructed from the two-dimensional image of a CT scan to simulate the soft tissue or bony structure of the real target, was proposed. In this report, a case of TMJ ankylosis due to traumatic injury is presented. 3D-CT was employed as one of the presurgical roentgenographic diagnostic tools. The conventional radiographic examination including panoramic radiography and tomography showed lesions in both sides of the mandible. CT scanning further suggested that the right-sided lesion was more severe than that on the left. With 3D-CT image reconstruction the size and extent of the lesions were clearly observable. The decision was made to proceed with an initial surgical approach on the right side. With condylectomy and condylar replacement using an autogenous costochondral graft on the right side, the range of mouth opening improved significantly. In this case report, 3D-CT demonstrates its advantages as a tool for the correct and precise diagnosis of TMJ ankylosis.

  7. Development and validation of a biologically realistic tissue-mimicking material for photoacoustics and other bimodal optical-acoustic modalities

    NASA Astrophysics Data System (ADS)

    Vogt, William C.; Jia, Congxian; Wear, Keith A.; Garra, Brian S.; Pfefer, T. Joshua

    2017-03-01

    Recent years have seen rapid development of hybrid optical-acoustic imaging modalities with broad applications in research and clinical imaging, including photoacoustic tomography (PAT), photoacoustic microscopy, and ultrasound-modulated optical tomography. Tissue-mimicking phantoms are an important tool for objectively and quantitatively simulating in vivo imaging system performance. However, no standard tissue phantoms exist for such systems. One major challenge is the development of tissue-mimicking materials (TMMs) that are both highly stable and possess biologically realistic properties. To address this need, we have explored the use of various formulations of PVC plastisol (PVCP) based on varying mixtures of several liquid plasticizers. We developed a custom PVCP formulation with optical absorption and scattering coefficients, speed of sound, and acoustic attenuation that are tunable and tissue-relevant. This TMM can simulate different tissue compositions and offers greater mechanical strength than hydrogels. Optical properties of PVCP samples with varying composition were characterized using integrating sphere spectrophotometry and the inverse adding-doubling method. Acoustic properties were determined using a broadband pulse-transmission technique. To demonstrate the utility of this bimodal TMM, we constructed an image quality phantom designed to enable quantitative evaluation of PAT spatial resolution. The phantom was imaged using a custom combined PAT-ultrasound imaging system. Results indicated that this more biologically realistic TMM produced performance trends not captured in simpler liquid phantoms. In the future, this TMM may be broadly utilized for performance evaluation of optical, acoustic, and hybrid optical-acoustic imaging systems.

  8. A finite element head and neck model as a supportive tool for deformable image registration.

    PubMed

    Kim, Jihun; Saitou, Kazuhiro; Matuszak, Martha M; Balter, James M

    2016-07-01

    A finite element (FE) head and neck model was developed as a tool to aid investigations and development of deformable image registration and patient modeling in radiation oncology. Useful aspects of a FE model for these purposes include the ability to produce realistic deformations (similar to those seen in patients over the course of treatment) and a rational means of generating new configurations, e.g., via the application of force and/or displacement boundary conditions. The model was constructed based on a cone-beam computed tomography image of a head and neck cancer patient. The three-node triangular surface meshes created for the bony elements (skull, mandible, and cervical spine) and joint elements were integrated into a skeletal system and combined with the exterior surface. Nodes were additionally created inside the surface structures, which were composed of the three-node triangular surface meshes, so that four-node tetrahedral FE elements were created over the whole region of the model. The bony elements were modeled as a homogeneous linear elastic material connected by intervertebral disks. The surrounding tissues were modeled as a homogeneous linear elastic material. Under force or displacement boundary conditions, FE analysis on the model calculates approximate solutions of the displacement vector field. A FE head and neck model was constructed in which the skull, mandible, and cervical vertebrae were mechanically connected by intervertebral disks. The developed FE model is capable of generating realistic deformations that are strain-free for the bony elements and of creating new configurations of the skeletal system with the surrounding tissues reasonably deformed. The FE model can generate realistic deformations for skeletal elements. In addition, the model provides a way of evaluating the accuracy of image alignment methods by producing a ground truth deformation and correspondingly simulated images. The ability to combine force and displacement conditions provides flexibility for simulating realistic anatomic configurations.

  9. Quantitative characterization of edge enhancement in phase contrast x-ray imaging.

    PubMed

    Monnin, P; Bulling, S; Hoszowska, J; Valley, J F; Meuli, R; Verdun, F R

    2004-06-01

    The aim of this study was to model the edge enhancement effect in in-line holography phase contrast imaging. A simple analytical approach was used to quantify refraction and interference contrasts in terms of beam energy and imaging geometry. The model was applied to predict the peak intensity and frequency of the edge enhancement for images of cylindrical fibers. The calculations were compared with measurements, and the relationship between the spatial resolution of the detector and the amplitude of the phase contrast signal was investigated. Calculations using the analytical model were in good agreement with experimental results for nylon, aluminum and copper wires of 50 to 240 μm diameter, and with numerical simulations based on Fresnel-Kirchhoff theory. A relationship between the defocusing distance and the pixel size of the image detector was established. This analytical model is a useful tool for optimizing imaging parameters in phase contrast in-line holography, including defocusing distance, detector resolution and beam energy.
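
    A minimal numerical illustration of the edge enhancement discussed above can be produced with free-space Fresnel propagation: a plane wave acquires the phase of a cylindrical fiber and is propagated to a defocus distance by the angular-spectrum method, after which intensity fringes appear at the fiber edges. The beam energy, refractive index decrement, and geometry below are illustrative assumptions, not the paper's analytical model.

        import numpy as np

        E_kev, delta, R = 20.0, 5e-7, 100e-6              # energy, refractive decrement, fiber radius (m)
        lam = 1.2398e-9 / E_kev                           # x-ray wavelength (m)
        dx, N, z = 1e-6, 4096, 0.5                        # pixel (m), samples, defocus distance (m)

        x = (np.arange(N) - N / 2) * dx
        thick = 2 * np.real(np.sqrt((R + 0j) ** 2 - x ** 2))    # projected cylinder thickness
        u0 = np.exp(-1j * 2 * np.pi / lam * delta * thick)      # pure phase object (no absorption)

        f = np.fft.fftfreq(N, dx)
        H = np.exp(-1j * np.pi * lam * z * f ** 2)              # Fresnel transfer function
        I = np.abs(np.fft.ifft(np.fft.fft(u0) * H)) ** 2        # defocused profile with edge fringes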

  10. Super-resolution imaging of multiple cells by optimized flat-field epi-illumination

    NASA Astrophysics Data System (ADS)

    Douglass, Kyle M.; Sieben, Christian; Archetti, Anna; Lambert, Ambroise; Manley, Suliana

    2016-11-01

    Biological processes are inherently multi-scale, and supramolecular complexes at the nanoscale determine changes at the cellular scale and beyond. Single-molecule localization microscopy (SMLM) techniques have been established as important tools for studying cellular features with resolutions of the order of around 10 nm. However, in their current form these modalities are limited by a highly constrained field of view (FOV) and field-dependent image resolution. Here, we develop a low-cost microlens array (MLA)-based epi-illumination system—flat illumination for field-independent imaging (FIFI)—that can efficiently and homogeneously perform simultaneous imaging of multiple cells with nanoscale resolution. The optical principle of FIFI, which is an extension of the Köhler integrator, is further elucidated and modelled with a new, free simulation package. We demonstrate FIFI's capabilities by imaging multiple COS-7 and bacteria cells in 100 × 100 μm2 SMLM images—more than quadrupling the size of a typical FOV and producing near-gigapixel-sized images of uniformly high quality.

  11. Imaging Internal Structure of Long Bones Using Wave Scattering Theory.

    PubMed

    Zheng, Rui; Le, Lawrence H; Sacchi, Mauricio D; Lou, Edmond

    2015-11-01

    An ultrasonic wavefield imaging method is developed to reconstruct the internal geometric properties of long bones using zero-offset data acquired axially on the bone surface. The imaging algorithm based on Born scattering theory is implemented with the conjugate gradient iterative method to reconstruct an optimal image. In the case of a multilayered velocity model, ray tracing through a smooth medium is used to calculate the traveled distance and traveling time. The method has been applied to simulated and real data. The results indicate that the interfaces of the top cortex are accurately imaged and correspond favorably to the original model. The reconstructed bottom cortex below the marrow is less accurate mainly because of the low signal-to-noise ratio. The current imaging method has successfully recovered the top cortical layer, providing a potential tool to investigate the internal structures of long bone cortex for osteoporosis assessment. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
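
    The conjugate gradient iteration named above can be sketched generically for any linearized imaging problem of the form y = Ax. Below is a minimal CGLS (conjugate gradient least squares) loop in Python; the random matrix stands in for the Born scattering operator and is purely illustrative.

        import numpy as np

        def cgls(A, y, n_iter=50):
            """Conjugate gradient on the normal equations: minimizes ||Ax - y||^2."""
            x = np.zeros(A.shape[1])
            r = y - A @ x
            s = A.T @ r
            p = s.copy()
            gamma = s @ s
            for _ in range(n_iter):
                q = A @ p
                alpha = gamma / (q @ q)
                x += alpha * p
                r -= alpha * q
                s = A.T @ r
                gamma_new = s @ s
                p = s + (gamma_new / gamma) * p
                gamma = gamma_new
            return x

        rng = np.random.default_rng(7)
        A = rng.normal(size=(200, 120))       # stand-in for the linearized scattering operator
        x_true = rng.normal(size=120)
        x_hat = cgls(A, A @ x_true, n_iter=80)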

  12. Post-processing images from the WFIRST-AFTA coronagraph testbed

    NASA Astrophysics Data System (ADS)

    Zimmerman, Neil T.; Ygouf, Marie; Pueyo, Laurent; Soummer, Remi; Perrin, Marshall D.; Mennesson, Bertrand; Cady, Eric; Mejia Prada, Camilo

    2016-01-01

    The concept for the exoplanet imaging instrument on WFIRST-AFTA relies on the development of mission-specific data processing tools to reduce the speckle noise floor. No instruments have yet functioned on the sky in the planet-to-star contrast regime of the proposed coronagraph (1E-8). Therefore, starlight subtraction algorithms must be tested on a combination of simulated and laboratory data sets to give confidence that the scientific goals can be reached. The High Contrast Imaging Testbed (HCIT) at Jet Propulsion Lab has carried out several technology demonstrations for the instrument concept, demonstrating 1E-8 raw (absolute) contrast. Here, we have applied a mock reference differential imaging strategy to HCIT data sets, treating one subset of images as a reference star observation and another subset as a science target observation. We show that algorithms like KLIP (Karhunen-Loève Image Projection), by suppressing residual speckles, enable the recovery of exoplanet signals at contrasts of order 2E-9.
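
    Since the record names KLIP explicitly, a minimal sketch of the idea may help: build a Karhunen-Loève basis from reference-star frames and subtract the science frame's projection onto the leading modes. The frame sizes, mode count, and synthetic data below are assumptions, not the testbed pipeline.

        import numpy as np

        def klip_subtract(science, references, n_modes=10):
            """science: (ny, nx) frame; references: (n_ref, ny, nx) stack. Returns residual image."""
            R = references.reshape(len(references), -1)
            R = R - R.mean(axis=1, keepdims=True)          # mean-subtract each reference frame
            _, _, vt = np.linalg.svd(R, full_matrices=False)
            Z = vt[:n_modes]                               # leading Karhunen-Loeve modes (n_modes, npix)
            s = science.ravel() - science.mean()
            speckle_model = Z.T @ (Z @ s)                  # projection of the science frame onto the modes
            return (s - speckle_model).reshape(science.shape)

        rng = np.random.default_rng(1)
        refs = rng.normal(size=(30, 64, 64))               # stand-in reference-star frames
        sci = refs.mean(axis=0) + rng.normal(scale=0.1, size=(64, 64))   # correlated speckles + noise
        residual = klip_subtract(sci, refs, n_modes=5)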

  13. A semiempirical linear model of indirect, flat-panel x-ray detectors.

    PubMed

    Huang, Shih-Ying; Yang, Kai; Abbey, Craig K; Boone, John M

    2012-04-01

    It is important to understand signal and noise transfer in the indirect, flat-panel x-ray detector when developing and optimizing imaging systems. For optimization where simulating images is necessary, this study introduces a semiempirical model to simulate projection images for a user-defined x-ray fluence. The signal and noise transfer in indirect, flat-panel x-ray detectors is characterized by statistics consistent with energy integration of x-ray photons. For an incident x-ray spectrum, x-ray photons are attenuated and absorbed in the x-ray scintillator to produce light photons, which are coupled to photodiodes for signal readout. The signal mean and variance are linearly related to the energy-integrated x-ray spectrum by empirically determined factors. With the known first- and second-order statistics, images can be simulated by incorporating multipixel signal statistics and the modulation transfer function of the imaging system. To estimate the semiempirical input to this model, 500 projection images (using an indirect, flat-panel x-ray detector in the breast CT system) were acquired with 50-100 kilovolt (kV) x-ray spectra filtered with 0.1-mm tin (Sn), 0.2-mm copper (Cu), 1.5-mm aluminum (Al), or 0.05-mm silver (Ag). The signal mean and variance of each detector element and the noise power spectra (NPS) were calculated and incorporated into this model for accuracy. Additionally, the modulation transfer function of the detector system was physically measured and incorporated in the image simulation steps. For validation purposes, simulated and measured projection images of air scans were compared using 40 kV/0.1-mm Sn, 65 kV/0.2-mm Cu, 85 kV/1.5-mm Al, and 95 kV/0.05-mm Ag. The linear relationship between the measured signal statistics and the energy-integrated x-ray spectrum was confirmed and incorporated into the model. The signal mean and variance factors were linearly related to kV for each filter material (r² of signal mean to kV: 0.91, 0.93, 0.86, and 0.99 for 0.1-mm Sn, 0.2-mm Cu, 1.5-mm Al, and 0.05-mm Ag, respectively; r² of signal variance to kV: 0.99 for all four filters). The comparison of the signal and noise (mean, variance, and NPS) between the simulated and measured air scan images, quantified by absolute percent error, suggested that the model predicts the signal statistics of air scan images accurately. Overall, the model was found to be accurate in estimating signal statistics and spatial correlation between the detector elements of the images acquired with indirect, flat-panel x-ray detectors. The semiempirical linear model of the indirect, flat-panel x-ray detectors was described and validated with images of air scans. The model was found to be a useful tool in understanding the signal and noise transfer within indirect, flat-panel x-ray detector systems.
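
    A hedged sketch of this kind of semiempirical simulation is easy to state: take the signal mean and variance to be linear in the energy-integrated fluence, draw a Gaussian realization, and blur with the system MTF to impose inter-pixel correlation. The coefficients and Gaussian MTF below are placeholders, not the fitted values from the study.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def simulate_projection(fluence, a_mean=0.8, a_var=0.5, mtf_sigma_px=0.7, seed=1):
            """fluence: 2-D map of energy-integrated x-ray fluence (arbitrary units)."""
            rng = np.random.default_rng(seed)
            mean = a_mean * fluence                        # signal mean, linear in fluence
            var = a_var * fluence                          # signal variance, likewise linear
            noisy = rng.normal(mean, np.sqrt(var))         # per-pixel Gaussian realization
            return gaussian_filter(noisy, mtf_sigma_px)    # impose MTF / inter-pixel correlation

        air_scan = simulate_projection(np.full((256, 256), 1000.0))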

  14. NEMA NU 4-2008 validation and applications of the PET-SORTEO Monte Carlo simulations platform for the geometry of the Inveon PET preclinical scanner

    NASA Astrophysics Data System (ADS)

    Boisson, F.; Wimberley, C. J.; Lehnert, W.; Zahra, D.; Pham, T.; Perkins, G.; Hamze, H.; Gregoire, M.-C.; Reilhac, A.

    2013-10-01

    Monte Carlo-based simulation of positron emission tomography (PET) data plays a key role in the design and optimization of data correction and processing methods. Our first aim was to adapt and configure the PET-SORTEO Monte Carlo simulation program for the geometry of the widely distributed Inveon PET preclinical scanner manufactured by Siemens Preclinical Solutions. The validation was carried out against actual measurements performed on the Inveon PET scanner at the Australian Nuclear Science and Technology Organisation in Australia and at the Brain & Mind Research Institute and by strictly following the NEMA NU 4-2008 standard. The comparison of simulated and experimental performance measurements included spatial resolution, sensitivity, scatter fraction and count rates, image quality and Derenzo phantom studies. Results showed that PET-SORTEO reliably reproduces the performances of this Inveon preclinical system. In addition, imaging studies showed that the PET-SORTEO simulation program provides raw data for the Inveon scanner that can be fully corrected and reconstructed using the same programs as for the actual data. All correction techniques (attenuation, scatter, randoms, dead-time, and normalization) can be applied on the simulated data leading to fully quantitative reconstructed images. In the second part of the study, we demonstrated its ability to generate fast and realistic biological studies. PET-SORTEO is a workable and reliable tool that can be used, in a classical way, to validate and/or optimize a single PET data processing step such as a reconstruction method. However, we demonstrated that by combining a realistic simulated biological study ([11C]Raclopride here) involving different condition groups, simulation allows one also to assess and optimize the data correction, reconstruction and data processing line flow as a whole, specifically for each biological study, which is our ultimate intent.

  15. Neutron imaging with bubble chambers for inertial confinement fusion

    NASA Astrophysics Data System (ADS)

    Ghilea, Marian C.

    One of the main methods to obtain energy from controlled thermonuclear fusion is inertial confinement fusion (ICF), a process where nuclear fusion reactions are initiated by heating and compressing a fuel target, typically in the form of a pellet that contains deuterium and tritium, relying on the inertia of the fuel mass to provide confinement. In inertial confinement fusion experiments, it is important to distinguish failure mechanisms of the imploding capsule and unambiguously diagnose compression and hot spot formation in the fuel. Neutron imaging provides such a technique and bubble chambers are capable of generating higher resolution images than other types of neutron detectors. This thesis explores the use of a liquid bubble chamber to record high yield 14.1 MeV neutrons resulting from deuterium-tritium fusion reactions in ICF experiments. A design tool to deconvolve and reconstruct penumbral and pinhole neutron images was created, using an original ray tracing concept to simulate the neutron images. The design tool proved that misalignment and aperture fabrication errors can significantly decrease the resolution of the reconstructed neutron image. A theoretical model to describe the mechanism of bubble formation was developed. A bubble chamber for neutron imaging with Freon 115 as active medium was designed and implemented for the OMEGA laser system. High neutron yields resulting from deuterium-tritium capsule implosions were recorded. The bubble density was too low for neutron imaging on OMEGA but agreed with the model of bubble formation. The research presented here shows that bubble detectors are a promising technology for the higher neutron yields expected at the National Ignition Facility (NIF).

  16. DRACO-STEM: An Automatic Tool to Generate High-Quality 3D Meshes of Shoot Apical Meristem Tissue at Cell Resolution

    PubMed Central

    Cerutti, Guillaume; Ali, Olivier; Godin, Christophe

    2017-01-01

    Context: The shoot apical meristem (SAM), origin of all aerial organs of the plant, is a restricted niche of stem cells whose growth is regulated by a complex network of genetic, hormonal and mechanical interactions. Studying the development of this area at cell level using 3D microscopy time-lapse imaging is a newly emerging key to understand the processes controlling plant morphogenesis. Computational models have been proposed to simulate those mechanisms, however their validation on real-life data is an essential step that requires an adequate representation of the growing tissue to be carried out. Achievements: The tool we introduce is a two-stage computational pipeline that generates a complete 3D triangular mesh of the tissue volume based on a segmented tissue image stack. DRACO (Dual Reconstruction by Adjacency Complex Optimization) is designed to retrieve the underlying 3D topological structure of the tissue and compute its dual geometry, while STEM (SAM Tissue Enhanced Mesh) returns a faithful triangular mesh optimized along several quality criteria (intrinsic quality, tissue reconstruction, visual adequacy). Quantitative evaluation tools measuring the performance of the method along those different dimensions are also provided. The resulting meshes can be used as input and validation for biomechanical simulations. Availability: DRACO-STEM is supplied as a package of the open-source multi-platform plant modeling library OpenAlea (http://openalea.github.io/) implemented in Python, and is freely distributed on GitHub (https://github.com/VirtualPlants/draco-stem) along with guidelines for installation and use. PMID:28424704

  17. Joint estimation of subject motion and tracer kinetic parameters of dynamic PET data in an EM framework

    NASA Astrophysics Data System (ADS)

    Jiao, Jieqing; Salinas, Cristian A.; Searle, Graham E.; Gunn, Roger N.; Schnabel, Julia A.

    2012-02-01

    Dynamic Positron Emission Tomography is a powerful tool for quantitative imaging of in vivo biological processes. The long scan durations necessitate motion correction to maintain the validity of the dynamic measurements; this correction can be particularly challenging due to the low signal-to-noise ratio (SNR) and spatial resolution, as well as the complex tracer behaviour in the dynamic PET data. In this paper we develop a novel automated expectation-maximisation image registration framework that incorporates temporal tracer kinetic information to correct for inter-frame subject motion during dynamic PET scans. We employ the Zubal human brain phantom to simulate dynamic PET data using SORTEO (a Monte Carlo-based simulator), in order to validate the proposed method for its ability to recover imposed rigid motion. We have conducted a range of simulations using different noise levels, and corrupted the data with a range of rigid motion artefacts. The performance of our motion correction method is compared with pairwise registration using normalised mutual information as a voxel similarity measure (an approach conventionally used to correct for dynamic PET inter-frame motion based solely on intensity information). To quantify registration accuracy, we calculate the target registration error across the images. The results show that our new dynamic image registration method based on tracer kinetics yields better realignment of the simulated datasets, halving the target registration error when compared to the conventional method at small motion levels, as well as yielding smaller residuals in translation and rotation parameters. We also show that our new method is less affected by the low signal in the first few frames, which the conventional method based on normalised mutual information fails to realign.
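
    The target registration error used above as the accuracy metric is simple to compute: map a set of target points through the true and the estimated rigid transforms and average the distance between the two results. A minimal sketch follows, with illustrative rotation and translation values.

        import numpy as np

        def rot_z(deg):
            a = np.deg2rad(deg)
            return np.array([[np.cos(a), -np.sin(a), 0.0],
                             [np.sin(a),  np.cos(a), 0.0],
                             [0.0,        0.0,       1.0]])

        def tre(points, R_true, t_true, R_est, t_est):
            """Mean distance between true and estimated mappings of the target points."""
            p_true = points @ R_true.T + t_true
            p_est = points @ R_est.T + t_est
            return np.linalg.norm(p_true - p_est, axis=1).mean()

        pts = np.random.default_rng(0).uniform(-50, 50, (1000, 3))      # mm, brain-sized box
        print(tre(pts, rot_z(3.0), np.array([2.0, 0.0, 0.0]),
                  rot_z(2.5), np.array([1.5, 0.0, 0.0])))               # residual error in mm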

  18. ROCKETSHIP: a flexible and modular software tool for the planning, processing and analysis of dynamic MRI studies.

    PubMed

    Barnes, Samuel R; Ng, Thomas S C; Santa-Maria, Naomi; Montagne, Axel; Zlokovic, Berislav V; Jacobs, Russell E

    2015-06-16

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a promising technique to characterize pathology and evaluate treatment response. However, analysis of DCE-MRI data is complex and benefits from concurrent analysis of multiple kinetic models and parameters. Few software tools are currently available that specifically focus on DCE-MRI analysis with multiple kinetic models. Here, we developed ROCKETSHIP, an open-source, flexible and modular software tool for DCE-MRI analysis. ROCKETSHIP incorporates analyses with multiple kinetic models, including data-driven nested model analysis. ROCKETSHIP was implemented using the MATLAB programming language. Robustness of the software to provide reliable fits using multiple kinetic models is demonstrated using simulated data. Simulations also demonstrate the utility of the data-driven nested model analysis. Applicability of ROCKETSHIP for both preclinical and clinical studies is shown using DCE-MRI studies of the human brain and a murine tumor model. A DCE-MRI software suite was implemented and tested using simulations. Its applicability to both preclinical and clinical datasets is shown. ROCKETSHIP was designed to be easily accessible for the beginner, but flexible enough for changes or additions to be made by the advanced user as well. The availability of a flexible analysis tool will aid future studies using DCE-MRI. A public release of ROCKETSHIP is available at https://github.com/petmri/ROCKETSHIP .
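
    As an illustration of the kinetic-model fitting such a package performs, the sketch below fits one common DCE-MRI model (the standard Tofts model) to a synthetic tissue curve with SciPy. This is not ROCKETSHIP code; the arterial input function, parameters, and noise level are assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.linspace(0, 5, 150)                          # time (minutes)
        cp = 5.0 * t * np.exp(-t / 0.5)                     # toy arterial input function

        def tofts(t, ktrans, ve):
            """Standard Tofts model: Ct = Ktrans * Cp convolved with exp(-Ktrans/ve * t)."""
            dt = t[1] - t[0]
            irf = np.exp(-(ktrans / ve) * t)
            return ktrans * np.convolve(cp, irf)[: t.size] * dt

        rng = np.random.default_rng(2)
        ct = tofts(t, 0.25, 0.30) + rng.normal(0, 0.005, t.size)        # noisy tissue curve
        (ktrans, ve), _ = curve_fit(tofts, t, ct, p0=(0.1, 0.2), bounds=(0, [2.0, 1.0]))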

  19. Finite element analysis simulations for ultrasonic array NDE inspections

    NASA Astrophysics Data System (ADS)

    Dobson, Jeff; Tweedie, Andrew; Harvey, Gerald; O'Leary, Richard; Mulholland, Anthony; Tant, Katherine; Gachagan, Anthony

    2016-02-01

    Advances in manufacturing techniques and materials have led to an increase in the demand for reliable and robust inspection techniques to maintain safety critical features. The application of modelling methods to develop and evaluate inspections is becoming an essential tool for the NDE community. Current analytical methods are inadequate for simulation of arbitrary components and heterogeneous materials, such as anisotropic welds or composite structures. Finite element analysis software (FEA), such as PZFlex, can provide the ability to simulate the inspection of these arrangements, providing the ability to economically prototype and evaluate improved NDE methods. FEA is often seen as computationally expensive for ultrasound problems however, advances in computing power have made it a more viable tool. This paper aims to illustrate the capability of appropriate FEA to produce accurate simulations of ultrasonic array inspections - minimizing the requirement for expensive test-piece fabrication. Validation is afforded via corroboration of the FE derived and experimentally generated data sets for a test-block comprising 1D and 2D defects. The modelling approach is extended to consider the more troublesome aspects of heterogeneous materials where defect dimensions can be of the same length scale as the grain structure. The model is used to facilitate the implementation of new ultrasonic array inspection methods for such materials. This is exemplified by considering the simulation of ultrasonic NDE in a weld structure in order to assess new approaches to imaging such structures.

  20. Challenges of NDE simulation tool validation, optimization, and utilization for composites

    NASA Astrophysics Data System (ADS)

    Leckey, Cara A. C.; Seebo, Jeffrey P.; Juarez, Peter

    2016-02-01

    Rapid, realistic nondestructive evaluation (NDE) simulation tools can aid in inspection optimization and prediction of inspectability for advanced aerospace materials and designs. NDE simulation tools may someday aid in the design and certification of aerospace components; potentially shortening the time from material development to implementation by industry and government. Furthermore, ultrasound modeling and simulation are expected to play a significant future role in validating the capabilities and limitations of guided wave based structural health monitoring (SHM) systems. The current state-of-the-art in ultrasonic NDE/SHM simulation is still far from the goal of rapidly simulating damage detection techniques for large scale, complex geometry composite components/vehicles containing realistic damage types. Ongoing work at NASA Langley Research Center is focused on advanced ultrasonic simulation tool development. This paper discusses challenges of simulation tool validation, optimization, and utilization for composites. Ongoing simulation tool development work is described along with examples of simulation validation and optimization challenges that are more broadly applicable to all NDE simulation tools. The paper will also discuss examples of simulation tool utilization at NASA to develop new damage characterization methods for composites, and associated challenges in experimentally validating those methods.

  1. Simulated Design Strategies for SPECT Collimators to Reduce the Eddy Currents Induced by MRI Gradient Fields

    NASA Astrophysics Data System (ADS)

    Samoudi, Amine M.; Van Audenhaege, Karen; Vermeeren, Günter; Verhoyen, Gregory; Martens, Luc; Van Holen, Roel; Joseph, Wout

    2015-10-01

    Combining single photon emission computed tomography (SPECT) with magnetic resonance imaging (MRI) requires the insertion of highly conductive SPECT collimators inside the MRI scanner, resulting in an induced eddy current disturbing the combined system. We reduced the eddy currents due to the insert of a novel tungsten collimator inside transverse and longitudinal gradient coils. The collimator was produced with metal additive manufacturing, that is part of a microSPECT insert for a preclinical SPECT/MRI scanner. We characterized the induced magnetic field due to the gradient field and adapted the collimators to reduce the induced eddy currents. We modeled the x-, y-, and z-gradient coil and the different collimator designs and simulated them with FEKO, a three-dimensional method of moments / finite element methods (MoM/FEM) full-wave simulation tool. We used a time analysis approach to generate the pulsed magnetic field gradient. Simulation results show that the maximum induced field can be reduced by 50.82% in the final design bringing the maximum induced magnetic field to less than 2% of the applied gradient for all the gradient coils. The numerical model was validated with measurements and was proposed as a tool for studying the effect of a SPECT collimator within the MRI gradient coils.

  2. An Evaluation of Fractal Surface Measurement Methods for Characterizing Landscape Complexity from Remote-Sensing Imagery

    NASA Technical Reports Server (NTRS)

    Lam, Nina Siu-Ngan; Qiu, Hong-Lie; Quattrochi, Dale A.; Emerson, Charles W.; Arnold, James E. (Technical Monitor)

    2001-01-01

    The rapid increase in digital data volumes from new and existing sensors necessitates efficient analytical tools for extracting information. We developed an integrated software package called ICAMS (Image Characterization and Modeling System) to provide specialized spatial analytical functions for interpreting remote sensing data. This paper evaluates three fractal dimension measurement methods: isarithm, variogram, and triangular prism, along with the spatial autocorrelation measurement methods Moran's I and Geary's C, that have been implemented in ICAMS. A modified triangular prism method was proposed and implemented. Results from analyzing 25 simulated surfaces having known fractal dimensions show that both the isarithm and triangular prism methods can accurately measure a range of fractal surfaces. The triangular prism method is most accurate at estimating the fractal dimension of higher spatial complexity, but it is sensitive to contrast stretching. The variogram method is a comparatively poor estimator for all of the surfaces, particularly those with higher fractal dimensions. Similar to the fractal techniques, the spatial autocorrelation techniques are found to be useful to measure complex images but not images with low dimensionality. These fractal measurement methods can be applied directly to unclassified images and could serve as a tool for change detection and data mining.
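
    The triangular prism estimator evaluated above admits a compact implementation: at each step size, tile the surface with four triangles per cell (corner elevations plus a mean-elevation apex), sum the triangulated area, and read the dimension off a log-log regression of area against step size. The toy surface and step sizes below are assumptions.

        import numpy as np

        def tri_area(p, q, r):
            """Area of triangles spanned by 3-D corner arrays of shape (..., 3)."""
            return 0.5 * np.linalg.norm(np.cross(q - p, r - p), axis=-1)

        def triangular_prism_dimension(z, steps=(1, 2, 4, 8, 16)):
            areas = []
            for s in steps:
                a = z[:-s:s, :-s:s]; b = z[:-s:s, s::s]    # four corner elevations per cell
                c = z[s::s, :-s:s]; d = z[s::s, s::s]
                e = (a + b + c + d) / 4.0                  # prism apex at the cell center
                pt = lambda zv, dx, dy: np.stack(
                    [np.full_like(zv, float(dx)), np.full_like(zv, float(dy)), zv], axis=-1)
                A, B = pt(a, 0, 0), pt(b, s, 0)
                C, D, E = pt(c, 0, s), pt(d, s, s), pt(e, s / 2, s / 2)
                areas.append((tri_area(A, B, E) + tri_area(B, D, E)
                              + tri_area(D, C, E) + tri_area(C, A, E)).sum())
            slope = np.polyfit(np.log(steps), np.log(areas), 1)[0]
            return 2.0 - slope                             # log A(s) ~ (2 - D) log s

        surface = np.cumsum(np.random.default_rng(3).normal(size=(129, 129)), axis=0)
        print(triangular_prism_dimension(surface))         # estimated dimension of the toy surface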

  3. When structure affects function--the need for partial volume effect correction in functional and resting state magnetic resonance imaging studies.

    PubMed

    Dukart, Juergen; Bertolino, Alessandro

    2014-01-01

    Both functional and, more recently, resting state magnetic resonance imaging have become established tools to investigate functional brain networks. Most studies use these tools to compare different populations without controlling for potential differences in underlying brain structure which might affect the functional measurements of interest. Here, we adapt a simulation approach combined with evaluation of real resting state magnetic resonance imaging data to investigate the potential impact of partial volume effects on established functional and resting state magnetic resonance imaging analyses. We demonstrate that differences in the underlying structure lead to a significant increase in detected functional differences in both types of analyses. The largest increases in functional differences are observed for the highest signal-to-noise ratios and when signal with the lowest amount of partial volume effects is compared to any other partial volume effect constellation. In real data, structural information explains about 25% of within-subject variance observed in degree centrality--an established resting state connectivity measurement. Controlling this measurement for structural information can substantially alter correlational maps obtained in group analyses. Our results question current approaches of evaluating these measurements in diseased populations with known structural changes without controlling for potential differences in these measurements.

  4. Direct estimation of evoked hemoglobin changes by multimodality fusion imaging

    PubMed Central

    Huppert, Theodore J.; Diamond, Solomon G.; Boas, David A.

    2009-01-01

    In the last two decades, both diffuse optical tomography (DOT) and blood oxygen level dependent (BOLD)-based functional magnetic resonance imaging (fMRI) methods have been developed as noninvasive tools for imaging evoked cerebral hemodynamic changes in studies of brain activity. Although these two technologies measure functional contrast from similar physiological sources, i.e., changes in hemoglobin levels, these two modalities are based on distinct physical and biophysical principles leading to both limitations and strengths to each method. In this work, we describe a unified linear model to combine the complementary spatial, temporal, and spectroscopic resolutions of concurrently measured optical tomography and fMRI signals. Using numerical simulations, we demonstrate that concurrent optical and BOLD measurements can be used to create cross-calibrated estimates of absolute micromolar deoxyhemoglobin changes. We apply this new analysis tool to experimental data acquired simultaneously with both DOT and BOLD imaging during a motor task, demonstrate the ability to more robustly estimate hemoglobin changes in comparison to DOT alone, and show how this approach can provide cross-calibrated estimates of hemoglobin changes. Using this multimodal method, we estimate the calibration of the 3 tesla BOLD signal to be −0.55% ± 0.40% signal change per micromolar change of deoxyhemoglobin. PMID:19021411
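
    One way to sketch a unified linear estimator in this spirit is to stack the two forward models into a single system in the unknown hemoglobin change and solve it jointly by regularized least squares, with the BOLD calibration entering as a single scalar. The operators, noise levels, and regularization below are illustrative; only the calibration value is taken from the abstract.

        import numpy as np

        rng = np.random.default_rng(8)
        n_vox = 50
        A_dot = rng.normal(size=(40, n_vox))              # toy optical sensitivity matrix
        a_bold = -0.55e-2                                 # %BOLD per uM HbR (value reported above)
        A_bold = a_bold * np.eye(n_vox)                   # toy voxelwise BOLD forward model

        hbr_true = rng.normal(size=n_vox)                 # true deoxyhemoglobin change (uM)
        y = np.concatenate([A_dot @ hbr_true + 0.05 * rng.normal(size=40),
                            A_bold @ hbr_true + 0.01 * rng.normal(size=n_vox)])

        A = np.vstack([A_dot, A_bold])                    # stacked multimodal system
        lam = 0.1                                         # Tikhonov regularization (assumed)
        hbr_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_vox), A.T @ y)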

  5. Economical Sponge Phantom for Teaching, Understanding, and Researching A- and B-Line Reverberation Artifacts in Lung Ultrasound.

    PubMed

    Blüthgen, Christian; Sanabria, Sergio; Frauenfelder, Thomas; Klingmüller, Volker; Rominger, Marga

    2017-10-01

    This project evaluated a low-cost sponge phantom setup for its capability to teach and study A- and B-line reverberation artifacts known from lung ultrasound and to numerically simulate sound wave interaction with the phantom using a finite-difference time-domain (FDTD) model. Both A- and B-line artifacts were reproducible on B-mode ultrasound imaging as well as in the FDTD-based simulation. The phantom was found to be an easy-to-set-up and economical tool for understanding, teaching, and researching A- and B-line artifacts occurring in lung ultrasound. The FDTD method-based simulation was able to reproduce the artifacts and provides intuitive insight into the underlying physics. © 2017 by the American Institute of Ultrasound in Medicine.
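
    A one-dimensional acoustic FDTD loop already reproduces the essential behavior: pressure and velocity are leapfrogged through a layered medium, and strong echoes appear at impedance jumps such as the tissue-air boundary that generates lung artifacts. The grid, layer properties, and source pulse below are assumptions, not the published simulation setup.

        import numpy as np

        nx, nt, dx = 600, 2400, 5e-5                      # grid cells, time steps, cell size (m)
        c = np.full(nx, 1540.0); c[300:] = 340.0          # tissue-like half, air-like half
        rho = np.full(nx, 1000.0); rho[300:] = 1.2        # matching densities (kg/m^3)
        dt = 0.5 * dx / c.max()                           # CFL-stable time step

        p = np.zeros(nx); v = np.zeros(nx - 1)
        trace = np.zeros(nt)                              # pulse-echo record at cell 5
        for n in range(nt):
            v -= dt / (0.5 * (rho[1:] + rho[:-1]) * dx) * np.diff(p)
            p[1:-1] -= dt * rho[1:-1] * c[1:-1] ** 2 / dx * np.diff(v)
            p[5] += np.exp(-((n * dt - 2e-6) / 5e-7) ** 2)       # Gaussian source pulse
            trace[n] = p[5]                               # echo from the interface arrives ~19.5 us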

  6. Properties of Interstellar Turbulence from Gradients of Linear Polarization Maps

    NASA Astrophysics Data System (ADS)

    Burkhart, Blakesley; Lazarian, A.; Gaensler, B. M.

    2012-04-01

    Faraday rotation of linearly polarized radio signals provides a very sensitive probe of fluctuations in the interstellar magnetic field and ionized gas density resulting from magnetohydrodynamic (MHD) turbulence. We used a set of statistical tools to analyze images of the spatial gradient of linearly polarized radio emission (|∇P|) for both observational data from a test image of the Southern Galactic Plane Survey (SGPS) and isothermal three-dimensional simulations of MHD turbulence. Visually, in both observations and simulations, a complex network of filamentary structures is seen. Our analysis shows that the filaments in |∇P| can be produced both by interacting shocks and random fluctuations characterizing the non-differentiable field of MHD turbulence. The latter dominates for subsonic turbulence, while the former is only present in supersonic turbulence. We show that supersonic and subsonic turbulence exhibit different distributions as well as different morphologies in the maps of |∇P|. Particularly, filaments produced by shocks show a characteristic "double jump" profile at the sites of shock fronts resulting from delta function-like increases in the density and/or magnetic field, while those produced by subsonic turbulence show a single jump profile. In order to quantitatively characterize these differences, we use the topology tool known as the genus curve as well as the probability distribution function moments of the image distribution. We find that higher values for the moments correspond to cases of |∇P| with larger sonic Mach numbers. The genus analysis of the supersonic simulations of |∇P| reveals a "swiss cheese" topology, while the subsonic cases have characteristics of a "clump" topology. Based on the analysis of the genus and the higher order moments, the SGPS test region data have a distribution and morphology that match subsonic- to transonic-type turbulence, which confirms what is now expected for the warm ionized medium.
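
    The |∇P| diagnostic itself is a short computation: treat the linear polarization P = Q + iU as a pair of maps and take the magnitude of the joint spatial gradient, then characterize its distribution moments. A minimal sketch follows with stand-in Gaussian Q and U maps.

        import numpy as np
        from scipy.stats import skew, kurtosis

        def grad_p(Q, U):
            """|grad P| = sqrt((dQ/dx)^2 + (dQ/dy)^2 + (dU/dx)^2 + (dU/dy)^2)."""
            dQy, dQx = np.gradient(Q)
            dUy, dUx = np.gradient(U)
            return np.sqrt(dQx**2 + dQy**2 + dUx**2 + dUy**2)

        rng = np.random.default_rng(4)
        Q, U = rng.normal(size=(2, 256, 256))            # stand-in Stokes Q and U maps
        gp = grad_p(Q, U)
        print(skew(gp.ravel()), kurtosis(gp.ravel()))    # higher moments track the sonic Mach number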

  7. 4D ML reconstruction as a tool for volumetric PET-based treatment verification in ion beam radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Bernardi, E., E-mail: elisabetta.debernardi@unimib.it; Ricotti, R.; Riboldi, M.

    2016-02-15

    Purpose: An innovative strategy to improve the sensitivity of positron emission tomography (PET)-based treatment verification in ion beam radiotherapy is proposed. Methods: Low counting statistics PET images acquired during or shortly after the treatment (Measured PET) and a Monte Carlo estimate of the same PET images derived from the treatment plan (Expected PET) are considered as two frames of a 4D dataset. A 4D maximum likelihood reconstruction strategy was adapted to iteratively estimate the annihilation events distribution in a reference frame and the deformation motion fields that map it in the Expected PET and Measured PET frames. The outputs generated by the proposed strategy are as follows: (1) an estimate of the Measured PET with an image quality comparable to the Expected PET and (2) an estimate of the motion field mapping Expected PET to Measured PET. The details of the algorithm are presented and the strategy is preliminarily tested on analytically simulated datasets. Results: The algorithm demonstrates (1) robustness against noise, even in the worst conditions where 1.5 × 10⁴ true coincidences and a random fraction of 73% are simulated; (2) a proper sensitivity to different kinds and grades of mismatch ranging between 1 and 10 mm; (3) robustness against bias due to incorrect washout modeling in the Monte Carlo simulation up to 1/3 of the original signal amplitude; and (4) an ability to describe the mismatch even in presence of complex annihilation distributions such as those induced by two perpendicular superimposed ion fields. Conclusions: The promising results obtained in this work suggest the applicability of the method as a quantification tool for PET-based treatment verification in ion beam radiotherapy. An extensive assessment of the proposed strategy on real treatment verification data is planned.

  8. JADA: a graphical user interface for comprehensive internal dose assessment in nuclear medicine.

    PubMed

    Grimes, Joshua; Uribe, Carlos; Celler, Anna

    2013-07-01

    The main objective of this work was to design a comprehensive dosimetry package that would keep all aspects of internal dose calculation within the framework of a single software environment and that would be applicable for a variety of dose calculation approaches. Our MATLAB-based graphical user interface (GUI) can be used for processing data obtained using pure planar, pure SPECT, or hybrid planar/SPECT imaging. Time-activity data for source regions are obtained using a set of tools that allow the user to reconstruct SPECT images, load images, coregister a series of planar images, and to perform two-dimensional and three-dimensional image segmentation. Curve fits are applied to the acquired time-activity data to construct time-activity curves, which are then integrated to obtain time-integrated activity coefficients. Subsequently, dose estimates are made using one of three methods. The organ level dose calculation subGUI calculates mean organ doses that are equivalent to dose assessment performed by OLINDA/EXM. Voxelized dose calculation options, which include the voxel S value approach and Monte Carlo simulation using the EGSnrc user code DOSXYZnrc, are available within the process 3D image data subGUI. The developed internal dosimetry software package provides an assortment of tools for every step in the dose calculation process, eliminating the need for manual data transfer between programs. This saves time and minimizes user errors, while offering a versatility that can be used to efficiently perform patient-specific internal dose calculations in a variety of clinical situations.
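
    Of the dose options listed, the voxel S value approach is the most compact to sketch: absorbed dose is the 3-D convolution of the time-integrated activity map with a kernel of S values (dose to a voxel per decay in a source voxel). The kernel below is a made-up stand-in, not published S values.

        import numpy as np
        from scipy.signal import fftconvolve

        tia = np.zeros((64, 64, 64))                     # time-integrated activity map (decays)
        tia[28:36, 28:36, 28:36] = 1e6                   # toy uptake region

        k = np.indices((9, 9, 9)) - 4                    # voxel offsets of the S kernel
        r2 = (k ** 2).sum(axis=0).astype(float)
        s_kernel = 1.0 / (1.0 + r2)                      # stand-in S values falling off with distance
        s_kernel *= 1e-7 / s_kernel.sum()                # arbitrary normalization (mGy per decay)

        dose = fftconvolve(tia, s_kernel, mode="same")   # absorbed dose map (mGy)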

  9. A simple blackbody simulator with several possibilities and applications on thermography

    NASA Astrophysics Data System (ADS)

    dos Santos, Laerte; Lemos, Alisson Maria; Abi-Ramia, Marco Antônio

    2016-05-01

    Originally designed to make the practical examination for thermography certification possible, the device presented in this paper has proved to be a very useful and versatile didactic tool for training centers and educational institutions; it can also be used as a low-cost blackbody simulator to verify the calibration of radiometers. It is a simple device with several functionalities for studying and for applications in heat transfer and radiometry, among them the interesting ability to thermally simulate the surface of real objects. With that functionality, if the device is viewed by a thermographic camera it reproduces the apparent surface temperatures of the object it is simulating, while to the naked eye it shows a visible image of that same surface. This functionality makes practical study in the classroom possible in different areas such as electrical, mechanical, medical, building, and veterinary applications.

  10. Sampling Simulations for Assessing the Accuracy of U.S. Agricultural Crop Mapping from Remotely Sensed Imagery

    NASA Astrophysics Data System (ADS)

    Dwyer, Linnea; Yadav, Kamini; Congalton, Russell G.

    2017-04-01

    Providing adequate food and water for a growing, global population continues to be a major challenge. Mapping and monitoring crops are useful tools for estimating the extent of crop productivity. GFSAD30 (Global Food Security Analysis Data at 30m) is a program, funded by NASA, that is producing global cropland maps by using field measurements and remote sensing images. This program studies 8 major crop types, and includes information on cropland area/extent, whether crops are irrigated or rainfed, and the cropping intensities. Using results from the US and the extensive reference data available in the USDA Cropland Data Layer (CDL), we will experiment with various sampling simulations to determine optimal sampling for thematic map accuracy assessment. These simulations will include varying the sampling unit, the sampling strategy, and the sample number. Results of these simulations will allow us to recommend assessment approaches to handle different cropping scenarios.

  11. Development of a multispectral autoradiography using a coded aperture

    NASA Astrophysics Data System (ADS)

    Noto, Daisuke; Takeda, Tohoru; Wu, Jin; Lwin, Thet T.; Yu, Quanwen; Zeniya, Tsutomu; Yuasa, Tetsuya; Hiranaka, Yukio; Itai, Yuji; Akatsuka, Takao

    2000-11-01

    Autoradiography is a useful imaging technique to understand biological functions using tracers that include radioisotopes (RIs). However, it is not easy to describe the distribution of different kinds of tracers simultaneously by conventional autoradiography using X-ray film or an imaging plate. Each tracer describes a corresponding biological function. Therefore, if we can simultaneously estimate the distributions of different kinds of tracer materials, multispectral autoradiography could be a powerful tool for better understanding the physiological mechanisms of organs. We are therefore developing a system using a solid state detector (SSD) with high energy resolution. Here, we introduce an imaging technique with a coded aperture to obtain spatial and spectral information more efficiently. In this paper, the imaging principle is described, and its validity and fundamental properties are discussed by both simulation and phantom experiments with RIs such as 201Tl, 99mTc, 67Ga, and 123I.
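
    The coded-aperture principle described above can be sketched in a few lines: the detector records the object convolved with the mask pattern, and correlating the recording with a balanced decoding array recovers the object; repeating the decode per energy window would give the multispectral estimates. A random binary mask below stands in for the real aperture.

        import numpy as np
        from scipy.signal import fftconvolve

        rng = np.random.default_rng(5)
        mask = (rng.random((63, 63)) < 0.5).astype(float)    # assumed random binary aperture
        decode = mask - mask.mean()                          # balanced decoding array

        obj = np.zeros((63, 63)); obj[20, 30] = 1.0; obj[40, 15] = 0.5   # two tracer spots
        detector = fftconvolve(obj, mask, mode="same")       # encoded shadowgram
        recon = fftconvolve(detector, decode[::-1, ::-1], mode="same")   # correlation decode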

  12. The visualization of spatial uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Srivastava, R.M.

    1994-12-31

    Geostatistical conditional simulation is gaining acceptance as a numerical modeling tool in the petroleum industry. Unfortunately, many of the new users of conditional simulation work with only one outcome or "realization" and ignore the many other outcomes that could be produced by their conditional simulation tools. 3-D visualization tools allow them to create very realistic images of this single outcome as reality. There are many methods currently available for presenting the uncertainty information from a family of possible outcomes; most of these, however, use static displays and many present uncertainty in a format that is not intuitive. This paper explores the visualization of uncertainty through dynamic displays that exploit the intuitive link between uncertainty and change by presenting the user with a constantly evolving model. The key technical challenge to such a dynamic presentation is the ability to create numerical models that honor the available well data and geophysical information and yet are incrementally different so that successive frames can be viewed rapidly as an animated cartoon. An example of volumetric uncertainty from a Gulf Coast reservoir will be used to demonstrate that such animation is possible and to show that such dynamic displays can be an effective tool in risk analysis for the petroleum industry.

  13. Fast Simulation of Dynamic Ultrasound Images Using the GPU.

    PubMed

    Storve, Sigurd; Torp, Hans

    2017-10-01

    Simulated ultrasound data are a valuable tool for the development and validation of quantitative image analysis methods in echocardiography. Unfortunately, simulation time can become prohibitive for phantoms consisting of a large number of point scatterers. The COLE algorithm by Gao et al. is a fast convolution-based simulator that trades simulation accuracy for improved speed. We present highly efficient parallelized CPU and GPU implementations of the COLE algorithm with an emphasis on dynamic simulations involving moving point scatterers. We argue that minimizing data transfers from the CPU to the GPU is crucial for good GPU performance, and we achieve this by storing the complete trajectories of the dynamic point scatterers as spline curves in GPU memory. This leads to good efficiency when simulating sequences consisting of a large number of frames, such as B-mode and tissue Doppler data for a full cardiac cycle. In addition, we propose a phase-based subsample delay technique that efficiently eliminates the flickering artifacts seen in B-mode sequences when COLE is used without sufficient temporal oversampling. To assess performance, we used a laptop computer and a desktop computer, each equipped with a multicore Intel CPU and an NVIDIA GPU. Running the simulator on a high-end TITAN X GPU, we observed a two-orders-of-magnitude speedup over the parallel CPU version, a three-orders-of-magnitude speedup over the simulation times reported by Gao et al. in their paper on COLE, and a speedup of 27000 times over the multithreaded version of Field II, using numbers reported in a paper by Jensen. We hope that releasing the simulator as an open-source project will encourage its use and further development.
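    The core convolution-based idea is to replace per-scatterer pulse summation with binning onto the RF sample grid followed by one fast 1-D convolution per scanline. The sketch below shows that idea for a single RF line under simplified assumptions (toy pulse, no beam profile, nearest-sample scatterer placement); note that the nearest-sample rounding in step 1 is exactly the kind of quantization the paper's phase-based subsample delay is designed to refine.

        import numpy as np
        from scipy.signal import hilbert

        rng = np.random.default_rng(7)

        fs = 50e6        # RF sampling rate [Hz] (assumed)
        f0 = 2.5e6       # pulse center frequency [Hz] (assumed)
        c = 1540.0       # speed of sound [m/s]
        depth = 0.08     # imaged depth [m]

        n_samples = int(2 * depth / c * fs)
        t = np.arange(-2e-6, 2e-6, 1 / fs)
        pulse = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)   # toy pulse

        # Random point scatterers on one scanline: axial position and amplitude.
        z = rng.uniform(0, depth, 2000)
        amp = rng.normal(size=z.size)

        # Step 1: bin scatterers onto the RF sample grid (nearest sample only).
        line = np.zeros(n_samples)
        idx = np.clip(np.round(2 * z / c * fs).astype(int), 0, n_samples - 1)
        np.add.at(line, idx, amp)

        # Step 2: one fast 1-D convolution with the pulse, then envelope detection.
        rf = np.convolve(line, pulse, mode="same")
        envelope = np.abs(hilbert(rf))
        print("peak envelope:", envelope.max())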

  14. Anthropomorphic breast phantoms for preclinical imaging evaluation with transmission or emission imaging

    NASA Astrophysics Data System (ADS)

    Tornai, Martin P.; McKinley, Randolph L.; Bryzmialkiewicz, Caryl N.; Cutler, Spencer J.; Crotty, Dominic J.

    2005-04-01

    With the development of several classes of dedicated emission and transmission imaging technologies utilizing ionizing radiation for improved breast cancer detection and in vivo characterization, it is extremely useful to have anthropomorphic breast phantoms available in a variety of shapes, sizes, and malleabilities prior to clinical imaging. These anthropomorphic phantoms can be used to evaluate the implemented imaging approaches against a known quantity, the phantom, and to evaluate the variability of the measurement due to the imaging system chain. Thus, we have developed a set of fillable and incompressible breast phantoms ranging in volume from 240 to 1730 mL, with nipple-to-chest distances from 3.8 to 12 cm. These phantoms are mountable and exchangeable on either a uniform chest plate or an anthropomorphic torso phantom containing tissue-equivalent bones and surface tissue. Another fillable ~700 mL breast phantom with a solid anterior chest plate is intentionally compressible and can be used for direct comparisons among standard planar imaging approaches using mild-to-severe compression, partially compressed tomosynthesis, and uncompressed computed mammotomography. These phantoms can be filled with various fluids (water- and oil-based liquids) to vary the fatty-tissue background composition. Shaped cellulose sponges with two cell densities are fabricated and can be added to the breasts to simulate connective tissue. Additionally, microcalcifications can be simulated by peppering slits in the sponges with oyster shell fragments. These phantoms are useful for evaluating clinical imaging paradigms with known input object parameters using basic imaging characterization, in an effort to further evaluate contemporary and next-generation imaging tools. They may additionally provide a means to collect known data samples for task-based optimization studies.

  15. An Example-Based Brain MRI Simulation Framework.

    PubMed

    He, Qing; Roy, Snehashis; Jog, Amod; Pham, Dzung L

    2015-02-21

    The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms such as image segmentation, owing to the lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition, these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework that uses an "atlas" consisting of an MR image and anatomical models derived from its hard segmentation. The relationships between the MR image intensities and the anatomical models are learned using a patch-based regression that implicitly models the physics of MR image formation. Given the anatomical models of a new brain, a new MR image can then be simulated using the learned regression. The approach has been extended to also simulate intensity inhomogeneity artifacts based on a statistical model of the training data. Results show that the example-based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more closely than simulations produced by a physics-based model.
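    A minimal sketch of the patch-to-intensity regression idea follows, assuming a toy 2-D "atlas" whose image is generated from its own segmentation, and a random-forest regressor standing in for whatever regression model the paper actually uses. Patch size, class intensities, and sampling counts are all illustrative.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(3)

        def extract_patches(labels, image, half=2, n=5000):
            """Sample (label-patch -> center intensity) training pairs."""
            H, W = labels.shape
            xs = rng.integers(half, H - half, n)
            ys = rng.integers(half, W - half, n)
            X = np.stack([labels[x-half:x+half+1, y-half:y+half+1].ravel()
                          for x, y in zip(xs, ys)])
            return X, image[xs, ys]

        # Toy "atlas": a 3-class segmentation and an MR-like image derived from it.
        atlas_seg = rng.integers(0, 3, (200, 200))
        atlas_img = (np.choose(atlas_seg, [0.2, 0.6, 0.9])
                     + rng.normal(0, 0.02, atlas_seg.shape))

        # Learn the label-patch -> intensity mapping from the atlas.
        X, y = extract_patches(atlas_seg, atlas_img)
        model = RandomForestRegressor(n_estimators=50).fit(X, y)

        # Simulate intensities for a new segmentation with the learned regression.
        new_seg = rng.integers(0, 3, (200, 200))
        Xn, _ = extract_patches(new_seg, np.zeros_like(atlas_img), n=1000)
        print("simulated intensities:", model.predict(Xn)[:5].round(2))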

  16. Evaluation of stability of stereotactic space defined by cone-beam CT for the Leksell Gamma Knife Icon.

    PubMed

    AlDahlawi, Ismail; Prasad, Dheerendra; Podgorsak, Matthew B

    2017-05-01

    The Gamma Knife Icon comes with an integrated cone-beam CT (CBCT) for image-guided stereotactic treatment delivery. The CBCT can be used to define the Leksell stereotactic space from imaging alone, without the traditional invasive frame system; this also allows frameless thermoplastic-mask stereotactic treatments (single or fractionated) with the Gamma Knife unit. In this study, we used an in-house-built marker tool to evaluate the stability of the CBCT-based stereotactic space and its agreement with the standard frame-based stereotactic space. We imaged the tool with a CT indicator box on our CT simulator at the beginning, middle, and end of the 6-week study period to determine the frame-based stereotactic space. The tool was also scanned daily with the Icon's CBCT throughout the study period, and the CBCT images were used to determine the CBCT-based stereotactic space. The coordinates of each marker in each CT and CBCT scan were determined using the Leksell GammaPlan treatment planning software. The magnitudes of the vector difference between the means of each marker in frame-based and CBCT-based stereotactic space ranged from 0.21 to 0.33 mm, indicating good agreement between the two space definitions. Scanning 4 months later showed good long-term stability of the CBCT-based stereotactic space definition. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
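    The agreement metric is simply the Euclidean norm of the difference between a marker's mean positions in the two coordinate definitions; the snippet below shows the computation with made-up coordinates chosen to land inside the reported 0.21-0.33 mm range.

        import numpy as np

        # Hypothetical mean Leksell coordinates (mm) of one marker, measured in
        # frame-based CT space and in CBCT-defined space (values invented).
        frame_xyz = np.array([102.1, 98.4, 75.0])
        cbct_xyz = np.array([102.3, 98.5, 75.2])

        # Magnitude of the vector difference between the two space definitions.
        print(f"|d| = {np.linalg.norm(frame_xyz - cbct_xyz):.2f} mm")   # 0.30 mm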

  17. A Fixed-Wing Aircraft Simulation Tool for Improving the efficiency of DoD Acquisition

    DTIC Science & Technology

    2015-10-05

    [DTIC record fragments] ...simulation tool, CREATE™-AV Helios [12-14], a high-fidelity rotary-wing vehicle simulation tool, and CREATE™-AV DaVinci [15-16], a conceptual through... 05/2015, Oct 2008-Sep 2015. A Fixed-Wing Aircraft Simulation Tool for Improving the Efficiency of DoD Acquisition. Scott A. Morton and David R... ...multi-disciplinary fixed-wing virtual aircraft simulation tool incorporating aerodynamics, structural dynamics, kinematics, and kinetics. Kestrel allows...

  18. Simulated driving and brain imaging: combining behavior, brain activity, and virtual reality.

    PubMed

    Carvalho, Kara N; Pearlson, Godfrey D; Astur, Robert S; Calhoun, Vince D

    2006-01-01

    Virtual reality in the form of simulated driving is a useful tool for studying the brain. Various clinical questions can be addressed, including the role of alcohol as a modulator of brain function and regional brain activation related to elements of driving. We review a study of the neural correlates of alcohol intoxication that used a simulated-driving paradigm, and demonstrate the utility of recording continuous driving behavior in a new study using a programmable driving simulator developed at our center. Functional magnetic resonance imaging data were collected while subjects operated a driving simulator, and independent component analysis (ICA) was used to analyze the data. Specific brain regions modulated by alcohol, and the relationships among behavior, brain function, and blood alcohol levels, were examined with aggregate behavioral measures. Fifteen driving epochs from two subjects, acquired while driving variables were recorded continuously, were analyzed with ICA. Preliminary findings reveal that four independent components correlate with various aspects of behavior: increased braking was associated with increased activation in motor areas, while cerebellar areas showed signal increases during steering maintenance and signal decreases during steering changes. Additional components and significant findings are further outlined. In summary, continuous behavioral variables conjoined with ICA may offer new insight into the neural correlates of complex human behavior.
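    To make the analysis idea concrete, the sketch below runs FastICA on synthetic "voxel" time courses mixed from two toy behavior-locked sources; the signal shapes, mixing matrix, and noise level are all invented for illustration and do not reproduce the study's data or its fMRI-specific preprocessing.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(5)
        t = np.linspace(0, 600, 3000)    # hypothetical 10-minute scan

        # Toy sources tied to driving behavior (all names illustrative).
        braking = (np.sin(0.3 * t) > 0.95).astype(float)   # intermittent braking
        steering = np.sin(0.05 * t)                        # slow steering drift
        sources = np.c_[braking, steering]

        # Each "voxel" time course is an unknown mixture of the sources plus noise.
        mixing = rng.normal(size=(2, 50))
        voxels = sources @ mixing + 0.1 * rng.normal(size=(t.size, 50))

        # ICA blindly recovers independent component time courses from the mixtures.
        ica = FastICA(n_components=2, max_iter=500, random_state=0)
        components = ica.fit_transform(voxels)
        print(components.shape)   # (3000, 2): one time course per component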

  19. A multi-slot surface coil for MRI of dual-rat imaging at 4 T

    NASA Astrophysics Data System (ADS)

    Solis, S. E.; Wang, R.; Tomasi, D.; Rodriguez, A. O.

    2011-06-01

    A slotted surface coil inspired by the hole-and-slot cavity magnetron was developed for magnetic resonance imaging of obese rats at 4 T. Full-wave analysis of the magnetic field was carried out at 170 MHz for both the slotted and circular-shaped coils. The noise figures of the two coils were investigated via numerical calculation of their quality factors. Fat-simulating phantoms mimicking overweight rats, with weights ranging from 300 to 900 g, were included in the analysis. The noise figures were 1.2 dB for the slotted coil and 2.4 dB for the circular coil when loaded with 600 g of simulated phantom. A slotted surface coil with eight circular slots and a circular coil of similar dimensions were built and operated in transceiver mode, and their performances were compared experimentally. Imaging tests in phantoms demonstrated that the slotted surface coil has deeper RF sensitivity and better field uniformity than the single-loop RF coil. High-quality images of two overweight Zucker rats were acquired simultaneously with the slotted surface coil using standard spin-echo pulse sequences. Experimental results showed that the slotted surface coil outperformed the circular coil for imaging considerably overweight rats. Thus, the slotted surface coil can be a good tool for MRI experiments on rats in a human whole-body 4 T scanner.

  20. A neural network gravitational arc finder based on the Mediatrix filamentation method

    NASA Astrophysics Data System (ADS)

    Bom, C. R.; Makler, M.; Albuquerque, M. P.; Brandt, C. H.

    2017-01-01

    Context. Automated arc detection methods are needed to scan ongoing and next-generation wide-field imaging surveys, which are expected to contain thousands of strong lensing systems. Arc finders are also required for a quantitative comparison between predictions and observations of arc abundance. Several algorithms have been proposed to this end, but machine learning methods have remained a relatively unexplored step in the arc-finding process. Aims: In this work we introduce a new arc finder based on pattern recognition, which uses a set of morphological measurements derived from the Mediatrix filamentation method as inputs to an artificial neural network (ANN). We show a full example of the application of the arc finder, first training and validating the ANN on simulated arcs and then applying the code to four Hubble Space Telescope (HST) images of strong lensing systems. Methods: The simulated arcs use simple prescriptions for the lens and the source while mimicking HST observational conditions. We also include a sample of objects from HST images with no arcs in the training of the ANN classification. We use the training and validation process to determine a suitable set of ANN configurations, including the combination of inputs from the Mediatrix method, so as to maximize completeness while keeping false positives low. Results: In the simulations the method achieved a completeness of about 90% with respect to the arcs that are input into the ANN after a preselection. However, this completeness drops to 70% on the HST images. False detections are on the order of 3% of the objects detected in these images. Conclusions: The combination of Mediatrix measurements with an ANN is a promising tool for the pattern-recognition phase of arc finding. More realistic simulations and a larger set of real systems are needed for better training and assessment of the method's efficiency.
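    The classification stage can be sketched with any small feed-forward network on morphological features; below is a toy version using scikit-learn's MLPClassifier on two invented features standing in for Mediatrix outputs, reporting completeness and false positives in the same spirit as the paper. The feature distributions, network size, and class balance are all assumptions.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(9)

        # Hypothetical per-object morphological features (e.g., a length-to-width
        # ratio and a curvature measure): arcs are elongated and curved.
        n = 2000
        arcs = np.c_[rng.normal(8, 2, n), rng.normal(0.3, 0.1, n)]
        non_arcs = np.c_[rng.normal(2, 1, n), rng.normal(0.05, 0.05, n)]
        X = np.vstack([arcs, non_arcs])
        y = np.r_[np.ones(n), np.zeros(n)]

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000,
                            random_state=0).fit(X_tr, y_tr)

        # Completeness (recall on true arcs) vs. false-positive rate.
        pred = ann.predict(X_te)
        completeness = (pred[y_te == 1] == 1).mean()
        false_pos = (pred[y_te == 0] == 1).mean()
        print(f"completeness={completeness:.2f}  false positives={false_pos:.2f}")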
